Predicted fundamental force strengths, all observable particle masses, and cosmology from a simple causal mechanism of vector boson exchange radiation, based on the existing mainstream quantum field theory

Solution to a problem with general relativity: A Yang-Mills mechanism for quantum field theory exchange-radiation dynamics, with prediction of gravitational strength, space-time curvature, Standard Model parameters for all forces and particle masses, and cosmology, including comparisons to other research and experimental tests

‘It has been said that more than 200 theories of gravitation have been put forward; but the most plausible of these have all had the defect that they lead nowhere and admit of no experimental test.’ - Sir Arthur Eddington, Space, Time and Gravitation, Cambridge University Press, 1921, p. 64.

The Physical Relationship between General Relativity and Newtonian gravity

Newtonian gravity

Let’s begin with a look at the Newtonian gravity law F = mMG/r², which is based on empirical evidence, not a speculative theory (remember Newton’s claim: hypotheses non fingo!). The inverse square law is based on Kepler’s empirical laws, which were obtained from Brahe’s detailed observations of the motion of the planet Mars. The mass dependence was more of a guess by Newton, since he didn’t actually calculate gravitational forces (he did not know or even write the symbol for G, which arrived long after from the pen of Laplace). However, Newton’s other empirical law, F = ma, was strong evidence for a linear dependence of force on mass, and there was some evidence from the observation of the Moon’s orbit. The Moon was known to be about 250,000 miles away and to take about 30 days to orbit the Earth, so its centripetal acceleration could be calculated from Newton’s law a = v²/r. This could confirm Newton’s law in two ways. First, since 250,000 miles is about 60 times the radius of the Earth, the acceleration due to gravity from the Earth should, from the inverse-square law, be 60² = 3,600 times weaker at the Moon than it is at the Earth’s surface, where it is 9.8 m/s².
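This check is easy to reproduce. Here is a quick Python sketch using the round figures above (the 27.3-day sidereal month is used in place of the "about 30 days" approximation, and all inputs are deliberately rough):

```python
import math

# Inverse-square check with round numbers: 250,000 miles to the Moon,
# a 27.3-day sidereal orbit, and 9.8 m/s^2 surface gravity.
r_moon = 250_000 * 1609.344        # Earth-Moon distance in metres
T = 27.3 * 24 * 3600               # orbital period in seconds
v = 2 * math.pi * r_moon / T       # orbital speed
a_centripetal = v**2 / r_moon      # a = v^2/r

# Inverse-square prediction: surface gravity diluted by 60^2,
# since the Moon is about 60 Earth radii away.
a_predicted = 9.8 / 60**2

print(a_centripetal)  # ~2.9e-3 m/s^2
print(a_predicted)    # ~2.7e-3 m/s^2
```

The two accelerations agree to within a few percent, which is the kind of confirmation available in Newton’s day.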

Hence it was possible to check the inverse-square law in Newton’s day. Newton also made a good guess at the average density of the Earth, which indicates G fairly accurately when combined with Galileo’s measurement of the gravitational acceleration at the Earth’s surface; applied also to the Moon (assumed to have a similar density to the Earth), this gives a very approximate justification for Newton’s assumption that gravitational force is directly proportional to the product of the two masses involved. Newton also worked out geometric proofs for using his law. For example, the mass of the Earth is not located at a point at its centre, but is distributed over a large three-dimensional volume. Newton proved that you can treat the entire mass of the Earth as if concentrated at a point at its centre for the purpose of making calculations, and this proof is as clever as his demonstration that the inverse square law applies to elliptical planetary orbits (Hooke showed that it applied to circular orbits, which is much easier). Newton treated the mass of the Earth as a series of uniform shells of small thickness, and proved that outside a shell, the gravitational field at any radius from its centre is identical to the gravitational field from an equal mass concentrated at a point at the centre. This proof also applies to the quantum gravity mechanism (below).

Cavendish produced a more accurate evaluation of G by measuring the twisting force (torsion) in a quartz fibre due to the gravitational attraction of two heavy balls of known mass located a known distance apart.

General relativity as a modification needed to include relativistic phenomena

Eventually failures in the Newtonian law became apparent. Because the orbits of planets are elliptical with the sun at one focus, the planets speed up when near the sun, and this causes effects like time dilation and an increase in mass due to relativistic effects (this is significant for Mercury, which is closest to the sun and orbits fastest). Although this effect is insignificant over a single orbit (so it didn’t affect the observations of Brahe, or the Kepler laws upon which Newton’s inverse square law was based), the effect accumulates and is substantial over a period of centuries, because the perihelion of the orbit precesses. Only part of the precession is due to relativistic effects, but it is still an important anomaly in the Newtonian scheme. Einstein and Hilbert developed general relativity to deal with such problems. Significantly, the failure of Newtonian gravity is most important for light, which is deflected by gravity when passing the sun twice as much as predicted by Newton’s a = MG/r².
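The relativistic part of Mercury’s precession can be checked against the standard general-relativistic formula 6πGM/[c²a(1−e²)] per orbit. Here is a quick Python sketch using textbook orbital elements (all inputs are standard reference values, not results from this post):

```python
import math

# Relativistic perihelion advance of Mercury per orbit (standard GR result).
G = 6.674e-11          # gravitational constant, SI
M_sun = 1.989e30       # solar mass, kg
c = 2.998e8            # speed of light, m/s
a = 5.791e10           # Mercury's semi-major axis, m
e = 0.2056             # eccentricity
T_orbit = 87.97        # orbital period, days

dphi = 6 * math.pi * G * M_sun / (c**2 * a * (1 - e**2))   # radians per orbit
orbits_per_century = 100 * 365.25 / T_orbit
arcsec = dphi * orbits_per_century * (180 / math.pi) * 3600
print(arcsec)   # ~43 arcseconds per century
```

This reproduces the famous 43 arcseconds per century anomaly that Newtonian gravity (even with planetary perturbations included) leaves unexplained.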

Einstein recognised that gravitational acceleration and all other accelerations are represented by a curved worldline on a plot of distance travelled versus time. This is the curvature of spacetime; you see it as the curved line when you plot the height of a falling apple versus time.

Einstein then used tensor calculus to represent such curvatures by the Ricci curvature tensor, Rab, and he tried to equate this with the source of the accelerative field, the tensor Tab, which represents all the causes of accelerations such as mass, energy, momentum and pressure. In order to represent Newton’s gravity law a = MG/r² with such tensor calculus, Einstein began with the assumption of a direct relationship such as Rab = Tab. This simply says that mass-energy is directly proportional to the curvature of spacetime. However, it is false, since it violates the conservation of mass-energy. To make it consistent with the experimentally confirmed conservation of mass-energy, Einstein and Hilbert in November 1915 realised that you need to subtract from Tab on the right hand side the product of half the metric tensor, gab, and the trace, T (the sum of the scalar terms along the diagonal of the matrix for Tab).

Hence

Rab = Tab - (1/2)gabT.

[This is usually re-written in the equivalent form, Rab - (1/2)gabR = Tab.]

There is a very simple way to demonstrate some of the applications and features of general relativity. Simply ignore 15 of the 16 terms in the matrix for Tab, and concentrate on the energy density component, T00, which is a scalar (it is the first term in the diagonal of the matrix), so it is exactly equal to its own trace: T00 = T.

Hence, Rab = Tab - (1/2)gabT becomes Rab = T00 - (1/2)gabT, and since T00 = T, we obtain

Rab = T[1 - (1/2)gab].

The metric tensor gab = ds²/(dxadxb), and it depends on the relativistic Lorentzian gamma factor, (1 - v²/c²)^(-1/2), so in general gab falls from about 1 towards 0 as velocity increases from v = 0 to v = c.

Hence, for low speeds where, approximately, v = 0 (i.e., v << c), gab is generally close to 1, so we have a curvature of

Rab = T[1 - (1/2)(1)] = T/2.

For high speeds where, approximately, v = c, we have gab = 0, so

Rab = T[1 - (1/2)(0)] = T.

The curvature experienced for an identical gravity source if you are moving at the velocity of light is therefore twice the amount of curvature you get at low (non-relativistic) velocities. This is the explanation of why a photon moving at speed c gets twice as much curvature from the sun’s gravity (i.e., it gets deflected twice as much) as Newton’s law predicts for low speeds. It is important to note that general relativity doesn’t supply the physical mechanism for this effect. It works quantitatively because it is a mathematical package which accounts accurately for the use of energy.
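The factor of two can be checked directly from the standard deflection formulae for a light ray grazing the sun: 4GM/(c²b) in general relativity versus 2GM/(c²b) for the Newtonian calculation. A quick sketch in Python (constants are standard reference values):

```python
import math

G = 6.674e-11          # gravitational constant, SI
M_sun = 1.989e30       # solar mass, kg
c = 2.998e8            # speed of light, m/s
b = 6.96e8             # impact parameter = solar radius, m

newton = 2 * G * M_sun / (c**2 * b)     # "Newtonian" deflection angle, radians
einstein = 4 * G * M_sun / (c**2 * b)   # GR deflection: exactly twice as much

to_arcsec = (180 / math.pi) * 3600
print(newton * to_arcsec)    # ~0.87 arcseconds
print(einstein * to_arcsec)  # ~1.75 arcseconds
```

The 1.75 arcsecond figure is the one confirmed by Eddington’s 1919 eclipse expedition.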

However, it is clear from the way that general relativity works that the source of gravity doesn’t change when such velocity-dependent effects occur. A rapidly moving object falls faster than a slowly moving one because of the difference produced in the way the moving object is subject to the gravitational field; i.e., the extra deflection of light is dependent upon the Lorentz-FitzGerald contraction (the gamma factor already mentioned), which alters length (for an object moving at speed c there are no electromagnetic field lines extending along the direction of propagation whatsoever, only at right angles to it, i.e., transversely). This increases the amount of interaction between the electromagnetic fields of the photon and the gravitational field. Clearly, in a slow moving object, half of the electromagnetic field lines (which normally point randomly in all directions from matter, apart from minor asymmetries due to magnets, etc.) will be pointing in the wrong direction to interact with gravity, and so slow moving objects only experience half the curvature that fast moving ones do in a similar gravitational field.

Some issues with general relativity arise from the assumed accuracy of Newtonian gravity, which is put into the theory as the low speed, weak field normalization. As we shall show below, this is incompatible with a Yang-Mills (Standard Model type) quantum gravity theory, for reasons other than the renormalization problems usually assumed to exist. First, over very large distances in an expanding universe, the exchanged gravitons are weakened because redshift reduces the frequency and thus the energy of radiation dramatically over cosmologically sized distances. This eliminates curvature over such distances, explaining the lack of gravitational deceleration in supernova data. This is falsely explained by the mainstream by adding an epicycle, i.e.,

(gravitational deceleration without redshift of gravitons in general relativity) + (acceleration due to small positive cosmological constant due to some kind of dark energy) = (observed, non-decelerating, recession of supernovae)

instead of the simpler quantum gravity explanation (predicted in 1996, two years ahead of observation):

(general relativity with G falling for large distances due to redshift of exchange gravitons reducing the energy of gravitational interactions) = (observed, non-decelerating, recession of supernovae).

So there is no curvature of spacetime at extremely big distances! On small scales, too, general relativity is false, because the tensor describing the source of gravity uses an average density to smooth out the real discontinuities resulting from the quantized, discrete nature of particles which have mass! The smoothness of a curvature in general relativity is false in general on small scales due to the input assumption - required for the stress-energy tensor to work (it is a summation of continuous differential terms, not discrete terms for each fundamental particle). So on both very large and very small scales, general relativity is a fiddle. But this is not a problem when you understand the physical dynamics and know the limitations of the theory. It only becomes a problem when people take a lot of discrete fundamental particles representing a real mass causing gravity, average their masses over space to get an average density, and then calculate the curvature from the average density, getting a smooth result and claiming that this proves that curvature is really smooth on small scales. Of course it isn’t. That argument is like averaging the number of kids per household and getting 2.5, then claiming that the average proves that one third of kids are born with only half of their bodies. But there is also a problem with quantum gravity as usually believed (see the previous post, and also this comment, on Cosmic Variance blog, by Professor John Baez).

We will show how you can make checkable predictions for quantum gravity in this post. In the previous two posts, here and here, the inclusion of gravity in the Standard Model was shown to require a change of the electroweak force SU(2) x U(1) to SU(2) x SU(2), where the three electroweak gauge bosons (W+, W-, and Zo) occur in both short-ranged massive versions and massless, infinite-range versions, with the charged ones producing the electromagnetic force and the neutral one producing gravitation; the issues in calculating the outward force of the big bang were also described there. Depending on how the Higgs mechanism for mass is modified, this SU(2) x SU(2) electro-weak-gravity may be replaceable by a new version of a single SU(2). In the existing Standard Model, SU(3) x SU(2) x U(1), only one handedness of fundamental particles responds to the SU(2) weak force, so changing the electroweak groups SU(2) x U(1) to SU(2) x SU(2) can lead to a different way of understanding chiral symmetry and electroweak symmetry breaking. (See also this earlier post, which discusses quantum force effects as Hawking radiation emissions.)

The understanding of the correct symmetry model behind the Standard Model requires a physical understanding of what quarks are, how they arise, etc. For instance, bring 3 electrons close together and you start getting problems with the exclusion principle. But if you could somehow force a triad of such particles together, the net charge would be 3 times stronger than normal, so the vacuum shielding veil of polarized pair-production fermions will also be 3 times stronger, shielding the bare core charges 3 times more efficiently. (Imagine it like 3 communities combining their separate castles into one castle with walls 3 times thicker. The walls provide 3 times as much shielding; so as long as they can all fit inside the reinforced castle, all benefit.) This means that the long range (shielded) charge from each of the three charges of the triad will be -1/3 instead of -1. Since pair-production, and the polarization of electric charges cancelling out part of the electric field, are experimentally validated phenomena, this mechanism for fractional charges is real. Obviously, while it is easy to explain the downquark this way, you need a detailed knowledge of electroweak phenomena like the weak charges of quarks compared to leptons (which have chiral features), and also the strong force, to explain physically what is occurring with upquarks that have a +2/3 charge. Some interesting although highly abstract mathematical assaults on trying to understand particles have been made by Dr Peter Woit in http://arxiv.org/abs/hep-th/0206135, which generates all the Standard Model particles using a U(2) spin representation (see also his popular non-mathematical introduction, Not Even Wrong: The Failure of String Theory and the Continuing Challenge to Unify the Laws of Physics), and this can be compared to the more pictorial preon models of particles advocated by loop quantum gravity theorists like Dr Lee Smolin. Both approaches are suggesting that there is a deep simplicity, with the different quarks, leptons, bosons and neutrinos arising from a common basic entity by means of symmetry transformations or twists of braids:

You can treat the empirical Hubble recession law, v = HR, as describing a variation in velocity with respect to observable distance R, because as we look to greater distances in the universe, we’re seeing an earlier era, because of the time taken for the light to reach us. That’s spacetime: you can’t have distance without time. Because distance R = ct where c is the velocity of light and t is time, Hubble’s law can be written v = HR = Hct which clearly shows a variation of velocity as a function of time! A variation of velocity with time is called acceleration. By Newton’s 2nd law, the acceleration of matter produces force. This view of spacetime is not new:

‘The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.’ - Hermann Minkowski, 1908.

To find out what the acceleration is, we remember that velocity is defined as v = dR/dt, which rearranges to give dt = dR/v; this can be substituted into the definition of acceleration, a = dv/dt, giving a = dv/(dR/v) = v.dv/dR, into which we can insert Hubble’s empirical law v = HR, giving a = HR.d(HR)/dR = H²R.
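The step from v = HR to a = H²R can be checked numerically. A small Python sketch, where the values of H and R are assumed round figures (H of order 70 km/s/Mpc, R of order 10²⁶ m):

```python
# Numerical check that v = HR implies a = v*dv/dR = H^2 * R,
# and the size of this acceleration for round cosmological values.
H = 70e3 / 3.0857e22      # Hubble parameter in SI units, ~2.27e-18 /s
R = 1e26                  # a large cosmological distance, m (assumed)

v = lambda R: H * R
dR = 1e18                              # small step for a central difference
dv_dR = (v(R + dR) - v(R - dR)) / (2 * dR)
a_numeric = v(R) * dv_dR               # a = v * dv/dR
a_formula = H**2 * R

print(a_numeric, a_formula)            # both ~5e-10 m/s^2
```

The resulting acceleration is tiny, of order 10⁻¹⁰ m/s², but acting on the mass of the universe it implies an enormous force, which is the next step of the argument.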

The outward motion of matter produces a force which for simplicity for the present (we will discuss correction factors for density variation and redshift effects below; see also this previous post) will be approximated by Newton’s 2nd law in the form

F = ma = [(4/3)πR³ρ].[dv/dt],

and since dR/dt = v = HR, it follows that dt = dR/(HR), so

F = [(4/3)πR³ρ].[d(HR)/{dR/(HR)}] = [(4/3)πR³ρ].[H²R.dR/dR] = [(4/3)πR³ρ].[H²R] = 4πR⁴ρH²/3.
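To get a feel for the size of this outward force, here is a quick Python evaluation with assumed round values (H ≈ 2.3×10⁻¹⁸ s⁻¹, R = c/H, and a density of order 10⁻²⁶ kg/m³; these figures are illustrative, before the density-variation and redshift corrections deferred above):

```python
import math

# Order-of-magnitude evaluation of the outward force F = 4*pi*R^4*rho*H^2/3
# using assumed round values for the Hubble parameter and density.
H = 2.27e-18                   # Hubble parameter, /s
c = 2.998e8                    # speed of light, m/s
R = c / H                      # ~1.3e26 m
rho = 9.3e-27                  # mean density, kg/m^3 (assumed)

F = 4 * math.pi * R**4 * rho * H**2 / 3
print(F)    # of order 1e43 to 1e44 newtons
```

By Newton’s 3rd law, an equal inward force is what drives the gravity mechanism in Fig. 1 below.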

Fig. 1:

Mechanism for quantum gravity (a tiny falling test mass is located in the middle of the universe, which experiences isotropic graviton radiation - not necessarily spin-2 gravitons, preferably spin-1 gravitons which cause attraction by simply pushing things, as this allows predictions as we shall see - from all directions except that where there is an asymmetry produced by the mass which shields that radiation). By Newton’s 3rd law the outward force of the big bang has an equal inward force, and gravity is equal to the proportion of that inward force covered by the shaded cone in this diagram:

(force of gravity) = (total inward force).(cross sectional area of shield projected out to radius R, i.e., the area of the base of the cone marked x, which is the product of the shield’s cross-sectional area and the ratio R2/r2) / (total spherical area with radius R).

Later in this post, this will be evaluated, proving that the shield’s cross-sectional area is the cross-sectional area of the event horizon for a black hole, π(2GM/c²)². But at present, to get the feel for the physical dynamics, we will assume this is the case without proving it. This gives

(force of gravity) = (4πR⁴ρH²/3).(π(2GM/c²)²R²/r²)/(4πR²) = (4/3)πR⁴ρH²G²M²/(c⁴r²).

We can simplify this using the Hubble law, because HR = c gives R/c = 1/H, so

(force of gravity) = (4/3)πρG²M²/(H²r²).
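Since Newton’s law for two identical unit masses is F = GM²/r², equating it to the result above and cancelling M²/r² implies G = 3H²/(4πρ). A quick numeric sketch, using assumed round values for H and the density (before the correction factors for density variation and redshift deferred above, which would modify the numerical factor):

```python
import math

# Setting the mechanism's force (4/3)*pi*rho*G^2*M^2/(H^2*r^2) equal to
# Newton's F = G*M^2/r^2 and cancelling M^2/r^2 gives G = 3*H^2/(4*pi*rho).
H = 2.27e-18          # Hubble parameter, /s (assumed round value)
rho = 9.3e-27         # mean density, kg/m^3 (assumed round value)

G_implied = 3 * H**2 / (4 * math.pi * rho)
print(G_implied)      # ~1.3e-10, within a factor of ~2 of G = 6.67e-11
```

For these rough inputs the implied G comes out within a factor of about 2 of the measured 6.67×10⁻¹¹ N m²/kg², which is the sense in which the mechanism predicts the gravitational strength.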

This result ignores both the density variation in spacetime (the distant, earlier universe having higher density) and the effect of redshift in reducing the energy of gravitons and weakening quantum gravity contributions from extreme distances, because the momentum of a graviton is p = E/c, where E is reduced by redshift since E = hf.

Quantization of mass

However, it is significant qualitatively that this gives a force of gravity proportional not to M1M2 but instead to M², because this is evidence for the quantization of mass. We are dealing with unit masses, fundamental particles. (Obviously ‘large masses’ are just composites of many fundamental particles.) M² should only arise if the ultimate building blocks of mass (the ‘charge’ in a theory of quantum gravity) are quantized, because it shows that two units of mass are identical. This tells us about the way the mass-giving field particles, the ‘Higgs bosons’, operate. Instead of there being a cloud of an indeterminate number of Higgs bosons around a fermion giving rise to mass, what happens is that each fermion acquires a discrete number of such mass-giving particles. (These ‘Higgs bosons’ surrounding the fermion acquire inertial and gravitational mass by interacting with the external gravitational field, which explains why mass increases with velocity but electric charge doesn’t. The core of a fermion doesn’t interact with the inertial/gravitational field, only with the massive Higgs bosons surrounding the core, which in turn do interact with the inertial/gravitational field. The core of the fermion only interacts with Standard Model forces, namely electromagnetism, the weak force, and, in the case of pairs or triads of closely confined fermions - quarks - the strong nuclear force. Inertial mass and gravitational mass arise from the Higgs bosons in the vacuum surrounding the fermion, and gravitons only interact with Higgs bosons, not directly with the fermions.)

This is explicable simply in terms of the vacuum polarization of matter and the renormalization of charge and mass in quantum electrodynamics, and is confirmed by an analysis of all relatively stable (half life of 10⁻²³ second or more) known particles, as discussed in an earlier post here (for a table of the mass predictions compared to measurements, see Table 1). (Note that the simple description of polarization of the vacuum as two shells of virtual fermions, a positive one close to the electron core and a negative one further away, depicted graphically on those sites, is a simplification for convenience in depicting the net physical effect, for the purpose of understanding what is going on and making accurate calculations. Obviously, in reality, the virtual positive fermions and the virtual negative fermions will not be located in two shells; they will be all over the place, but on average the virtual charges of like sign to the real particle core will be further away from the core than the virtual charges of unlike sign.)

Table 1:

Comparison of measured particle masses with predicted particle masses using a physical model for the renormalization of mass (both mass and electric charge are renormalized quantities in quantum electrodynamics, due to the polarization of pairs of charged virtual fermions in the electron’s strong electric field; see previous posts such as this). Anybody wanting a high quality, printable PDF version of this table can find it here. (The theory of masses here was inspired by an arXiv paper by Drs. Rivero and de Vries, and on a related topic I gather that Carl Brannen is using density operators to explain theoretically, and extend the application of, Yoshio Koide’s empirical formula, which states that the sum of the masses of the 3 leptons electron, muon and tau, multiplied by 1.5, is equal to the square of the sum of the square roots of the masses of those three particles. If that works, it may well be compatible with this mass mechanism. Although the mechanism predicts the possible quantized masses fairly accurately as first approximations, it is good to try to understand better how the actual masses are picked out. The mechanism which produced the table produced a formula containing two integers which predicts a lot of particles that are too short-lived to occur. Why are some configurations more stable than others? What selection principle picks out the proton as being particularly stable - if not completely stable? We know that the nuclei of heavy elements aren’t chaotic bags of neutrons and protons, but have a shell structure to a considerable extent, with ‘magic numbers’ which determine relative stability, and which are physically explained by the number of nucleons taken to completely fill up successive nuclear shells. Probably some similar effect plays a part to some extent in the mass mechanism, so that some configurations have magic numbers which are stable, while nearby ones are far less stable and decay quickly. This, if true of the quantized vacuum surrounding fundamental particles, would lead to a new quantum theory of such particles, with similar gimmicks explaining the original ‘anomalies’ of the periodic table, viz. isotopes explaining non-integer masses, etc.)

Particle mass predictions: the gravity mechanism implies quantized unit masses. As proved, the 1/α = 137.036… number is the electromagnetic shielding factor for any particle core charge by the surrounding polarised vacuum.

This shielding factor is obtained by working out the bare core charge (within the polarized vacuum) as follows. Heisenberg’s uncertainty principle says that the product of the uncertainties in momentum and distance is on the order of h-bar. The uncertainty in momentum is p = mc, while the uncertainty in distance is x = ct. Hence the product of momentum and distance, px = (mc).(ct) = Et, where E is energy (Einstein’s mass-energy equivalence). Although we have had to assume mass temporarily here before getting an energy version, this is just what Professor Zee does as a simplification in trying to explain forces with mainstream quantum field theory (see previous post). In fact this relationship, i.e., the product of energy and time equalling h-bar, is widely used for the relationship between particle energy and lifetime. The maximum possible range of the particle is equal to its lifetime multiplied by its velocity, which is generally close to c in relativistic, high energy particle phenomenology. Now for the slightly clever bit:

px = h-bar

implies (when remembering p = mc, and E = mc²):

x = h-bar/p = h-bar/(mc) = h-bar*c/E,

so E = h-bar*c/x.

Using the classical definition of energy as force times distance (E = Fx):

F = E/x = (h-bar*c/x)/x = h-bar*c/x².

So we get the quantum electrodynamic force between the bare cores of two fundamental unit charges, including the inverse square distance law! This can be compared directly to Coulomb’s law, which is the empirically obtained force at large distances (screened charges, not bare charges), and such a comparison tells us exactly how much shielding of the bare core charge there is by the vacuum between the IR and UV cutoffs. So we have proof that the renormalization of the bare core charge of the electron is due to shielding by a factor of 1/α = 137.036…. The bare core charge of an electron is 137.036… times the observed long-range (low energy) unit electronic charge. All of the shielding occurs within a range of just 1 fm, because by Schwinger’s calculations the electric field strength of the electron is too weak at greater distances to cause spontaneous pair production from the Dirac sea, so at greater distances there are no pairs of virtual charges in the vacuum which can polarize and shield the electron’s charge any more.
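This comparison can be done numerically: dividing F = h-bar*c/x² by Coulomb’s law for two unit charges, the x² cancels and the ratio is exactly 1/α. A quick Python check with standard CODATA constants:

```python
import math

# Ratio of the "bare core" force F = hbar*c/x^2 to the Coulomb force
# F = e^2/(4*pi*eps0*x^2) between two unit charges; x cancels, leaving
# hbar*c*4*pi*eps0/e^2 = 1/alpha.
hbar = 1.054571817e-34     # reduced Planck constant, J s
c = 2.99792458e8           # speed of light, m/s
e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m

ratio = hbar * c * 4 * math.pi * eps0 / e**2
print(ratio)   # ~137.036, the claimed vacuum shielding factor
```

The ratio comes out at 137.036, which is the shielding factor identified in the text.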

One argument that could superficially be made against this calculation (nobody has raised this objection to my knowledge, but it is worth mentioning anyway) is that it assumes the uncertainty in distance is equivalent to the real distance in the classical expression that work energy is force times distance. However, since the range of the particle given by the uncertainty principle in Yukawa’s theory is the range over which the momentum of the particle falls to zero, it is clear that the Heisenberg uncertainty range is equivalent to the range of distance moved which corresponds to force by E = Fx. For the particle to be stopped over the range allowed by the uncertainty principle, a corresponding force must be involved. This is more pertinent to the short range nuclear forces mediated by massive gauge bosons, obviously, than to the long range forces.

It should be noted that the Heisenberg uncertainty principle is not metaphysics but is solid causal dynamics as shown by Popper:

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303. (Note: statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum contains gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum effect at high energy where nuclear forces occur.)

Experimental evidence:

‘… we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum … amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).’ - arxiv hep-th/0510040, p 71.

In particular:

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424. (Levine and Koltick experimentally found a 7% increase in the strength of the Coulomb/Gauss force field law when colliding electrons at an energy of about 80 GeV. The coupling constant for electromagnetism is 1/137 at low energies but was found to be 1/128.5 at about 80 GeV. This rise is due to the polarised vacuum being penetrated. We have to understand Maxwell’s equations in terms of the gauge boson exchange process for causing forces, and the polarised vacuum shielding process for unifying forces into a unified force at very high energy. If one force (electromagnetism) increases, more energy is carried by virtual photons at the expense of something else, say gluons. So the strong nuclear force will lose strength as the electromagnetic force gains strength. Thus simple conservation of energy will explain, and allow predictions to be made of, the correct variation of force strengths mediated by different gauge bosons. When you do this properly, you learn that stringy supersymmetry firstly isn’t needed and secondly is quantitatively plain wrong. At low energies, the experimentally determined strong nuclear force coupling constant (a measure of effective charge) is alpha = 1, which is about 137 times the Coulomb law value, but it falls to 0.35 at a collision energy of 2 GeV, 0.2 at 7 GeV, and 0.1 at about 200 GeV. So the strong force falls off in strength as you probe closer with higher energy collisions, while the electromagnetic force increases! Conservation of gauge boson mass-energy suggests that the energy being shielded from the electromagnetic force by polarized pairs of vacuum charges is used to power the strong force, allowing quantitative predictions to be made and tested, debunking supersymmetry and existing unification pipe dreams.)
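As a rough sanity check on the quoted running of the electromagnetic coupling, here is a one-loop sketch in Python. Note this uses the standard textbook vacuum-polarization logarithm (not a calculation from this post’s mechanism), and the quark ‘masses’ are assumed effective (constituent) values; the point is only to reproduce the rough size of the effect quoted above (1/128.5 near 80-90 GeV versus 1/137 at low energy):

```python
import math

# One-loop estimate of the running QED coupling: each charged fermion
# much lighter than the scale Q contributes (Q_f^2 * N_c / (3*pi)) * ln(Q^2/m_f^2)
# to the decrease of 1/alpha. Quark masses are assumed effective values.
Q = 91.2  # GeV, near the energy range quoted in the text

# (charge^2 * colour factor, mass in GeV)
fermions = [
    (1.0, 0.000511), (1.0, 0.1057), (1.0, 1.777),        # e, mu, tau
    (3 * (2/3)**2, 0.33), (3 * (2/3)**2, 1.5),           # u, c quarks
    (3 * (1/3)**2, 0.33), (3 * (1/3)**2, 0.50),          # d, s quarks
    (3 * (1/3)**2, 4.5),                                 # b quark
]

inv_alpha = 137.036
for q2n, m in fermions:
    inv_alpha -= (q2n / (3 * math.pi)) * math.log(Q**2 / m**2)
print(inv_alpha)   # ~128-129, close to the measured 1/128.5
```

The screening logarithms from all the charged fermion pairs bring 1/α down from 137 to roughly 128.5 at these energies, matching the Levine-Koltick measurement.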

Related to this exchange radiation are Feynman’s path integrals of quantum field theory:

‘I like Feynman’s argument very much (although I have not thought about the virtual charges in the loops bit). The general idea that you start with a double slit in a mask, giving the usual interference by summing over the two paths… then drill more slits and so more paths… then just drill everything away… leaving only the slits… no mask. Great way of arriving at the path integral of QFT.’ - Prof. Clifford V. Johnson’s comment.

‘The world is not magic. The world follows patterns, obeys unbreakable rules. We never reach a point, in exploring our universe, where we reach an ineffable mystery and must give up on rational explanation; our world is comprehensible, it makes sense. I can’t imagine saying it better. There is no way of proving once and for all that the world is not magic; all we can do is point to an extraordinarily long and impressive list of formerly-mysterious things that we were ultimately able to make sense of. There’s every reason to believe that this streak of successes will continue, and no reason to believe it will end. If everyone understood this, the world would be a better place.’ – Prof. Sean Carroll,

As for the indeterminacy of electron locations in the atom, the fuzzy picture is not a result of multiple universes interacting but simply the Poincaré many-body problem, whereby Newtonian physics fails when you have more than 2 bodies of similar mass or charge interacting at once (the failure is that you lose deterministic solutions to the equations, having to resort instead to statistical descriptions like the Schroedinger equation; annihilation-creation operators in quantum field theory produce many pairs of charges, random in location and time, in strong fields, deflecting particle motions chaotically on small scales, similarly to Brownian motion; this is the ‘hidden variable’ causing indeterminacy in quantum theory, not multiverses or entangled states). Entanglement is physically a false interpretation of Aspect’s (and related) experiments: Heisenberg’s uncertainty principle only applies to slower-than-light particles like massive fermions. Aspect’s experiment stems from the Einstein-Podolsky-Rosen suggestion to measure the spins of two molecules; if they correlate in a certain way then that would prove entanglement, because molecular spins are subject to the indeterminacy principle. Aspect used photons instead of molecules. Photons cannot change polarization when measured, as they are frozen in nature due to their velocity, c. Therefore, the correlation of photon polarizations observed merely confirms that Heisenberg’s uncertainty principle does not apply to photons, rather than implying (on the belief that Heisenberg’s uncertainty principle does apply to photons) that the photons ‘must’ have an entangled polarization until measured! Aspect’s results in fact discredit entanglement.

‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’

Gravity is basically a boson shielding effect; the errors of LeSage’s infamous pushing-gravity model are due to its fermion radiation assumptions, which is why it got nowhere. Gravity is a massless boson - integer spin - exchange radiation effect. LeSage (or rather Fatio, whose ideas LeSage allegedly plagiarised) assumed that very small material particles - fermions in today’s language - were the force-causing exchange radiation. Massless bosons don’t obey the exclusion principle and don’t interact with one another, unlike massive bosons and all fermions (fermions do obey the exclusion principle, so they always interact with one another). Hence, the ‘errors’ people found in the past when trying to use LeSage’s mechanism for gravity - the mutual interactions between the particles, which equalize the force in the shadow region after a mean free path - don’t apply to bosonic radiation, which doesn’t obey the exclusion principle. The LeSage attraction mechanism is predicted to have a short range, on the order of a mean free path of scatter, before radiation pressure equalization in the shadows quenches the attractive force. That short range becomes an advantage in explaining the pion-mediated strong nuclear attractive force between nucleons; it is real for nuclear forces, but does not apply to gravity or electromagnetism:

The Fatio-LeSage mechanism is useless because it makes no prediction whatsoever for the strength of gravity, and it is plain wrong because it assumes that gas molecules or fermions are the exchange radiation, instead of gauge bosons. The falsehood of the Fatio-LeSage mechanism is that the gravity force would be short-ranged: the material pressure of the fermion particles (which bounce off each other due to the Pauli exclusion principle) or gas molecules causing gravity would get diffused into the shadows within a short distance, just as air pressure is only shielded by a solid for a distance on the order of a mean free path of the gas molecules. (Hence, to get a rubber suction cup to be held strongly to a wall by air pressure, the wall must be smooth, and the cup must be pushed firmly.) Such a short-ranged attractive force mechanism may be useful in making pion-mediated Yukawa strong nuclear force calculations, but it is not gravity.

(Some of the ancient objections to LeSage are plain wrong and in contradiction of Yang-Mills theories such as the Standard Model. For example, it was alleged that gravity couldn’t be the result of an exchange radiation force because the exchange radiation would heat up objects until they all glowed. This is wrong because the mechanisms by which radiation interacts with matter don’t necessarily transfer that energy into heat; classically all energy is usually degraded to waste heat in the end, but the gravitational field energy cannot be directly degraded to heat. Masses don’t heat up just because they are exchanging radiation, the gravitational field energy. If you drop a mass and it hits another mass hard, substantial heat is generated, but this is an indirect effect. Basically, many of the arguments against physical mechanisms are bogus. For an object to heat up, the charged cores of the electrons must gain and radiate heat energy; but the gravitational gauge boson radiation isn’t being exchanged with the electron’s bare core. Instead, the fermion core of the electron has no mass, and since quantum gravity charge is mass, the lack of mass in the core of the electron means it can’t interact with gravitons. The gravitons interact with some vacuum particles like ‘Higgs bosons’, which surround the electron core and produce inertial and gravitational forces indirectly. The electron core couples to the ‘Higgs boson’ by electromagnetic field interactions, while the ‘Higgs boson’ at some distance from the electron core interacts with gravitons. This indirect transfer of force can smooth out the exchange radiation interactions, preventing that energy from being degraded into heat. So such objections - if correct - would also have to debunk the Standard Model, which is based on Yang-Mills exchange radiation and which is well tested experimentally.
Claiming that exchange radiation would heat things up until they glowed is similar to the Ptolemy followers claiming that if the Earth rotated daily, clouds would fly over the equator at 1000 miles/hour and people would be thrown off the ground! It’s a political-style junk objection and doesn’t hold up to any close examination in comparison to experimentally-determined scientific facts.)

When a mass-giving black hole (gravitationally trapped) Z-boson (this is the Higgs particle) with 91 GeV energy is outside an electron core, both its own field (it is similar to a photon, with equal positive and negative electric field) and the electron core have alpha shielding factors, and there are also smaller geometric corrections for spin loop orientation, so the electron mass is:

Mzα²/(1.5 × 2π) = 0.51 MeV

If, however, the electron core has more energy and can get so close to a trapped Z-boson that both are inside and share the same overlapping polarised vacuum veil, then the geometry changes so that the 137 shielding factor operates only once, predicting the muon mass:

Mzα/(2π) = 105.7 MeV

The muon is thus an automatic consequence of a higher energy state of the electron. As Dr Thomas Love of California State University points out, although the muon doesn’t decay directly into an electron by gamma ray emission, apart from its higher mass it is identical to an electron, and the muon can decay into an electron by emitting electron and muon neutrinos. The general equation for the mass of all particles apart from the electron is:

Men(N + 1)/(2α) = 35n(N + 1) MeV.

(For the electron, the extra polarised shield occurs, so this should be divided by the 137 factor.) Here the symbol n is the number of core particles, like quarks, sharing a common, overlapping polarised electromagnetic shield, and N is the number of Higgs or trapped Z-bosons. Lest this be dismissed as ad hoc coincidence (as occurred in criticism of Dalton’s early form of the periodic table), remember that we have a physical mechanism, unlike Dalton, and below we make additional predictions and tests for all the other observable particles in the universe, and compare the results to experimental measurements.

There is a similarity in the physics between these vacuum corrections and the Schwinger correction to Dirac’s 1 Bohr magneton magnetic moment for the electron: corrected magnetic moment of electron = 1 + α/(2π) = 1.00116 Bohr magnetons. Notice that this correction is due to the electron interacting with the vacuum field, similar to what we are dealing with here. Also note that Schwinger’s correction is only the first (but by far the biggest numerically, and thus the most important, allowing the magnetic moment to be accurately predicted to 6 significant figures) of an infinite series of correction terms involving higher powers of α for more complex vacuum field interactions. Each of these corrections is depicted by a different Feynman diagram. (Basically, quantum field theory is a mathematical correction for the probability of different reactions. The more classical and obvious things generally have the greatest probability by far, but stranger interactions occasionally occur in addition, so these also need to be included in calculations, which then give a prediction which is statistically very accurate.)
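As a quick numerical check, the three formulae above can be evaluated directly. This sketch simply verifies the stated arithmetic using the inputs quoted in the text (Mz taken as 91 GeV and α as 1/137):

```python
import math

# Inputs as used in the text above: Mz = 91 GeV, alpha = 1/137
M_Z = 91000.0        # Z-boson mass in MeV
alpha = 1.0 / 137.0  # fine structure constant (shielding factor 1/137)

# Electron: both shielding factors plus the 1.5 geometric correction
m_electron = M_Z * alpha**2 / (1.5 * 2 * math.pi)   # ~0.51 MeV

# Muon: shared polarised veil, so the 137 factor operates only once
m_muon = M_Z * alpha / (2 * math.pi)                # ~105.7 MeV

# General formula: m = Me*n*(N + 1)/(2*alpha) = 35 n (N + 1) MeV
def particle_mass(n, N, m_e=0.511):
    return m_e * n * (N + 1) / (2 * alpha)

print(round(m_electron, 2), round(m_muon, 1), round(particle_mass(1, 2), 1))
```

Running this prints 0.51, 105.7 and 105.0, matching the figures quoted above; `particle_mass` here is just an illustrative helper name, not notation from the text.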

This kind of gravitational calculation also allows us to predict the gravitational coupling constant, G, as will be proved below. We know that the inward force is carried by gauge boson radiation, because all forces are due to gauge boson radiation according to the Standard Model of particle physics, which is the best-tested physical theory of all time and has made thousands of accurately confirmed predictions from an input of just 19 empirical parameters (don’t confuse this with the bogus supersymmetric standard model, which even in its minimal form requires 125 adjustable parameters and has a large landscape of possibilities, making no definite or precise predictions whatsoever). The Standard Model is a Yang-Mills theory in which the exchange of gauge bosons between the relevant charges for the force (i.e., colour charges for quantum chromodynamic forces, electric charges for electric forces, etc.) causes the force.

What happens is that Yang-Mills exchange radiation pushes inward, coming from the surrounding, expanding universe. Since spacetime, as recently observed, isn’t boundless (there’s no observable gravity retarding the recession of the most distant galaxies and supernovae, as discovered in 1998, so there is no curvature at the greatest distances), the universe is spherical and is expanding without slowing down. The expansion is caused by the physical pressure of the gauge boson radiation, which exerts momentum p = E/c. Gauge boson radiation is emitted towards us by matter which is receding: the reason is Newton’s 3rd law. Because, as proved above, the Hubble recession in spacetime is an acceleration of matter outwards, the receding matter has an outward force by Newton’s 2nd empirical law, F = ma, and this outward force has an equal and opposite reaction, just like the exhaust of a rocket. The reaction force is carried by gauge boson radiation.

What, you may ask, is the mechanism behind Newton’s 3rd law in this case? Why should the outward force of the universe be accompanied by an inward reaction force? I dealt with this in a paper in May 1996, made available via the letters page of the October 1996 issue of Electronics World. Consider the source of gravity, the gravitational field (actually gauge boson radiation), to be a frictionless perfect fluid. As lumps of matter, in the form of the fundamental particles of galaxies, accelerate away from us, they leave in their wake a volume of vacuum which was previously occupied but is now unoccupied. The gravitational field doesn’t ignore spaces which are vacated when matter moves: instead, the gravitational field fills them. How does this occur?

What happens is like the situation when a ship moves along. It doesn’t suck in water from behind to fill its wake. Instead, water moves around from the front to the back. In fact, there is a simple physical law: a volume of water equal to the ship’s displacement moves continuously in the opposite direction to the ship’s motion.

This water fills in the void behind the moving ship. For a moving particle, the gravitational field of spacetime does the same: it moves around the particle. If it did anything else, we would see the effects of that. For example, if the gravitational field piled up in front of a moving object instead of flowing around it, the pressure would increase with time and there would be drag on the object, slowing it down. The fact that Newton’s 1st law, inertia, is empirically based tells us that the vacuum field does flow frictionlessly around moving particles instead of slowing them down. The vacuum field does, however, exert a net force when an object accelerates; this increases the mass of the object and causes a flattening of the object in the direction of motion (FitzGerald-Lorentz contraction). However, this is purely a resistance to acceleration; there is no drag on motion unless the motion is accelerative.

‘… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that "flows" … A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighboring fluid elements is pressure.’ - Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp89-90.

‘Popular accounts, and even astronomers, talk about expanding space. But how is it possible for space … to expand? … ‘Good question,’ says [Steven] Weinberg. ‘The answer is: space does not expand. Cosmologists sometimes talk about expanding space – but they should know better.’ [Martin] Rees agrees wholeheartedly. ‘Expanding space is a very unhelpful concept’.’ – New Scientist, 17 April 1993, pp32-3. (The volume of spacetime expands, but the fabric of spacetime, the gravitational field, flows around moving particles as the universe expands.)

Fig. 2: The general all-round pressure from the gravitational field does of course produce physical effects. The radiation is received by a mass almost equally from all directions, coming from other masses in the universe; the radiation is in effect reflected back the way it came if there is symmetry that prevents the mass from being moved. The result is a compression of the mass by the amount mathematically predicted by general relativity, i.e., the radial contraction is by the small distance MG/(3c²) = 1.5 mm for the Earth; this was calculated by Feynman using general relativity in his famous Feynman Lectures on Physics. The reason why nearby, local masses shield the force-carrying radiation exchange, causing gravity, is that the distant masses in the universe are in high-speed recession, but the nearby mass is not receding significantly. By Newton’s 2nd law, the outward force of a nearby mass which is not receding (in spacetime) from you is F = ma = m.dv/dt = 0. Hence, by Newton’s 3rd law, the inward force of gauge bosons coming towards you from a local, non-receding mass is also zero; there is no action and so there is no reaction. As a result, the local mass shields you rather than exchanging gauge bosons with you, so you get pushed towards it. This is why apples fall.
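Feynman’s figure for the Earth’s radial contraction is easy to verify numerically; the sketch below just evaluates GM/(3c²) using standard reference values (these values are my inputs, not figures from the text):

```python
# Standard reference values (assumptions; the text only quotes the 1.5 mm result)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # mass of the Earth, kg
c = 2.998e8          # speed of light, m/s

# Radial contraction GM/(3c^2), converted to millimetres
contraction_mm = G * M_earth / (3 * c**2) * 1000
print(round(contraction_mm, 2))
```

This prints approximately 1.48, i.e., about 1.5 mm as quoted.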

There is very little shielding area (fundamental particle shielding cross-sectional areas are small compared to the Earth’s area), so the Earth doesn’t block all of the gauge boson radiation being exchanged between you and the masses in the receding galaxies beyond the far side of the Earth. The shielding by the Earth is done by the fundamental particles in it, specifically the fundamental particles which give rise to mass (supposed to be some form of Higgs bosons which surround fermions, giving them mass) by interacting with the gravitational field of exchange radiation. Although each local fundamental particle stops the gauge boson radiation completely over its shielding cross-sectional area, most of the Earth’s volume is devoid of fundamental particles because they are so small. Consequently, the Earth as a whole is an inefficient shield. There is little probability of different fundamental particles in the Earth being directly behind one another (i.e., overlapping of shielded areas) because they are so small. Consequently, the gravitational effect from a large mass like the Earth is just the simple sum of the contributions from the fundamental particles which make up the mass, so the total gravity is proportional to the number of particles, which is proportional to the mass.

The point is that nearby masses, which are not receding from you significantly, don’t fire gauge boson radiation towards you, because there is no reaction force! However, they still absorb gauge bosons, so they shield you, creating an asymmetry. You get pushed towards such masses by the gauge bosons coming from the direction opposite to the mass. For example, standing on the Earth, you get pushed down by the asymmetry: the upward beam of gauge bosons coming through the Earth is very slightly shielded. The shielding effect is very small, because it turns out that the effective cross-sectional shielding area of an electron (or other fundamental particle) for gravity is equal to πR², where R = 2GM/c² is the event horizon radius of a black hole of the electron’s mass. This is a result of the calculations, as is a prediction of the Newtonian gravitational parameter G! Now let’s prove it.

Approach 1

Referring to Fig. 1 above, we can evaluate the gravity force (which is the proportion of the total force indicated by the dark-shaded cone; the observer is in the middle of the diagram at the apex of each cone). The force of gravity is not simply the total inward force, which is equal to the total outward force. Gravity is only the proportion of the total force which is represented by the dark cone.

The total force, as proved above, is 4πR⁴ρH²/3. The fraction of this which is represented by the dark cone is equal to the volume of the cone (XR/3, where X is the area of the end of the cone) divided by the volume, 4πR³/3, of the sphere of radius R (the radius of the observable spacetime universe, defined by R = ct = c/H). Hence,

Force of gravity = (4πR⁴ρH²/3)(XR/3)/(4πR³/3)

= R²ρH²X/3,

where the area of the end of the cone, X, is observed in Fig. 1 to be geometrically equal to the area of the shield, A, multiplied by (R/r)²:

X = A(R/r)².

Hence the force of gravity is R²ρH²[A(R/r)²]/3

= (1/3)R⁴ρH²A/r².

(Of course you get exactly the same result if you take the fraction of the total force delivered in the cone to be the area of the base of the cone, X, divided by the surface area, 4πR², of the sphere of radius R.)

If we assume that the shield area is A = π(2GM/c²)², i.e., the cross-sectional area of the event horizon of a black hole, then setting the formula above for the force of gravity equal to the Newtonian law, F = mMG/r², with m = M and c/R = H, gives the prediction that

G = (3/4)H²/(πρ).

This is of course equal to twice the false amount you get from rearranging the ‘critical density’ formula of general relativity (without a cosmological constant), but what is more interesting is that we do not need to assume that the shield area is A = π(2GM/c²)². The critical density formula, and other cosmological applications of general relativity, are false because they ignore the quantum gravity dynamics which become important on very large scales due to the recession of masses in the universe: the gravitational interaction is a product of the cosmological expansion, since both are caused by gauge boson exchange radiation. The radiation pushes masses apart over large, cosmological distance scales, while pushing things together on small scales. The uniform gauge boson pressure between masses causes them to recede from all surrounding masses and fill the expanding volume of space like raisins in an expanding cake receding from one another, where the gauge boson radiation pressure is represented by the pressure of the dough as it expands. There is no contradiction whatsoever between this effect and the local gravitational attraction which occurs when two raisins are close enough that there is no dough between them and plenty of dough around them, pushing them towards one another like gravity.
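The algebra of Approach 1 can be checked numerically: with the black hole event horizon assumed as the shield area, the cone-geometry force reproduces Newton’s law exactly when G = (3/4)H²/(πρ). The numbers below are arbitrary test values chosen only to exercise the formulas, not physical inputs:

```python
import math

# Arbitrary test values (any positive numbers will do)
H = 2.3e-18     # Hubble parameter, 1/s
rho = 9.0e-27   # density, kg/m^3
M = 1.0e30      # shield mass, kg
r = 1.0e11      # distance to the shield, m
c = 2.998e8     # speed of light, m/s

G_pred = 0.75 * H**2 / (math.pi * rho)    # predicted G = (3/4)H^2/(pi*rho)
R = c / H                                 # radius of observable universe

A = math.pi * (2 * G_pred * M / c**2)**2  # assumed shield area (event horizon)
F_mechanism = (1.0 / 3) * R**4 * rho * H**2 * A / r**2  # cone-geometry force
F_newton = G_pred * M**2 / r**2           # Newton's law with m = M

print(F_mechanism / F_newton)   # ratio is 1 (up to floating-point rounding)
```

The ratio comes out as 1 for any choice of test values, confirming that the derivation is algebraically self-consistent.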

We get the same result by an independent method, which does not assume that the shield area is the event horizon cross section of a black hole. Now we shall prove it.

Approach 2

As in the above approach, the outward force of the universe is 4πR⁴ρH²/3, and there is an equal inward force. The fraction of the inward force which is shielded is now calculated as the mass, Y, of those atoms in the shaded cone in Fig. 1 which actually emit the gauge boson radiation that hits the shield, divided by the mass of the universe.

The important thing here is that Y is not simply the total mass of the universe in the shaded cone. (If it were, Y would be the density of the universe multiplied by the volume of the cone.)

That total mass inside the shaded cone of Fig. 1 is not important, because part of the gauge boson radiation it emits misses the shield, since it hits other intervening masses in the universe. (See Fig. 3.)

The mass in the shaded cone which actually produces the gauge boson radiation which we are concerned with (that which causes gravity) is equal to the mass of the shield multiplied up geometrically by the ratio of the area of the base of the cone to the area of the shield, i.e., Y = M(R/r)2, because of the geometric convergence of the inward radiation from many masses within the cone towards the center. This is illustrated in Fig. 3.

Hence, the force of gravity is:

(4πR⁴ρH²/3)Y/[mass of universe]

= (4πR⁴ρH²/3)[M(R/r)²]/(4πR³ρ/3)

= R³H²M/r².

Comparing this to Newton’s law, F = mMG/r², gives us

G = R³H²/[mass of universe]

= (3/4)H²/(πρ).

Fig. 3: The mass multiplication scheme basis of Approach 2.

So we get precisely the same result as in the previous method, where we assumed that the shield area of an electron was the cross-sectional area of the black hole event horizon! This result for G has been produced entirely without any assumption about what numerical value to take for the shielding cross-sectional area of a particle; yet it is the same result as that derived above when assuming that a fundamental particle has a shielding cross-sectional area for gravity-causing gauge boson radiation equal to the event horizon of a black hole. Hence, this result justifies and substantiates that assumption. We get two major quantitative results from this study of quantum gravity: a formula for G, and a formula for the cross-sectional area of a fundamental particle in gravitational interactions.
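The agreement between the two approaches reduces to a one-line identity, since the mass of the universe is (4/3)πR³ρ. A minimal numerical sketch, again with arbitrary test values:

```python
import math

# Arbitrary test values
H = 2.3e-18    # Hubble parameter, 1/s
rho = 9.0e-27  # density, kg/m^3
c = 2.998e8    # speed of light, m/s
R = c / H      # radius of observable universe

mass_universe = (4.0 / 3) * math.pi * R**3 * rho
G_approach2 = R**3 * H**2 / mass_universe      # Approach 2: R^3 H^2 / mass
G_approach1 = 0.75 * H**2 / (math.pi * rho)    # Approach 1: (3/4)H^2/(pi*rho)

print(G_approach2 / G_approach1)   # = 1: the two approaches coincide
```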

The exact formula for G, including photon redshift and density variation

The toy model above began by assuming that the inward force carried by the gauge boson radiation is identical to the outward force, represented by the simple product of mass and acceleration in Newton’s 2nd law, F = ma. In fact, taking the density of the universe to be the local average around us (at a time of 14,000 million years after the big bang) is an error, because the density increases as we look back in time with increasing distance, seeing earlier epochs which have higher density. This effect tends to increase the effective outward force of the universe, by increasing the density. In fact, the effective mass would go to infinity unless there was another factor which tends to reduce the force imparted by gravity-causing gauge bosons from the greatest distances. This second effect is redshift. The problem of how to evaluate the extent to which these two effects partly offset one another is discussed in detail in the earlier post on this blog, here. It is shown there that the effective inward force should take some more complex form, so that it is no longer simply F = ma but some integral over density and redshift (depending on the way that the redshift is modelled, and there are several alternatives), where ρ is the local density, i.e., the density of spacetime at 14,000 million years after the big bang. I have not completed the evaluation of such integrals (some of them give an infinite answer, so it is possible to rule those out as either wrong or missing some essential factor in the model). However, an earlier idea, to take account of the rise in density with increasing spacetime around us, while at the same time taking account of the redshift as a divergence of the universe, is to set up a more abstract model.

Density variation with spacetime and divergence of matter in the universe (causing the redshift of gauge bosons by an effect which is quantitatively similar to gauge boson radiation being ’stretched out’ over the increasing volume of space while in transit between receding masses in the expanding universe) can be modelled by the well-known equation for mass continuity (based on the conservation of mass in an expanding gas, etc.): ∂ρ/∂t + ∇·(ρv) = 0.

Therefore, if this analysis is a correct abstract model for the combined effect of graviton redshift (due to the effective ’stretching’ of radiation as a result of the divergence of matter across spacetime caused by the expansion of the universe) and density variation of the universe across spacetime, our earlier result of G = (3/4)H²/(πρ) should be corrected for spacetime density variation and redshift of gauge bosons, to:

G = (3/4)H²/(πρe³),

which is a factor of ~10 smaller than the rearranged traditional ‘critical density’ formula of general relativity, G = (3/8)H²/(πρ). Therefore, this theory predicts gravity quantitatively and checkably, and it dispenses with the need for an enormous amount of unobserved dark matter. (There is clearly some dark matter, as neutrinos are known to have some mass, but this can be assessed from the rotation curves for spiral galaxies and other observational checks.)
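To see what the corrected formula implies numerically, it can be rearranged for the density: ρ = (3/4)H²/(πGe³). In the sketch below, the measured G and a Hubble parameter of 70 km/s/Mpc are my assumed inputs for illustration, not figures from the text:

```python
import math

G = 6.674e-11          # measured gravitational constant (assumed input)
H = 70e3 / 3.086e22    # assumed Hubble parameter: 70 km/s/Mpc in 1/s

# Density implied by the corrected result G = (3/4)H^2/(pi*rho*e^3)
rho_implied = 0.75 * H**2 / (math.pi * G * math.e**3)

# Standard critical density, rho_c = 3H^2/(8*pi*G), for comparison
rho_critical = 3 * H**2 / (8 * math.pi * G)

print(rho_implied)                 # ~9e-28 kg/m^3
print(rho_critical / rho_implied)  # = e^3/2, i.e. ~10
```

The implied density is about a tenth of the critical density (the ratio is exactly e³/2 ≈ 10), which is the factor-of-~10 reduction quoted above.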

Experimental confirmation for the black hole size as the cross-sectional area for fundamental particles in gravitational interactions

In addition to the theoretical evidence above, there is independent experimental evidence. If the core of an electron is gravitationally trapped Heaviside-Poynting electromagnetic energy current, it is a black hole and it has a magnetic field which is a torus (see Electronics World, April 2003).

Experimental evidence for why an electromagnetic field can produce gravity effects involves the fact that electromagnetic energy is a source of gravity (think of the stress-energy tensor on the right hand side of Einstein’s field equation). There is also the capacitor charging experiment. When you charge a capacitor, practically the entire electrical energy entering it is electromagnetic field energy (Heaviside-Poynting energy current). The amount of energy carried by electron drift is negligible, since the electrons have a kinetic energy of half the product of their mass and the square of their velocity (typically 1 mm/s for a 1 A current).
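The claim that electron drift carries negligible energy is easy to check by order of magnitude. The sketch below uses the text’s ~1 mm/s drift velocity and compares the drift kinetic energy per electron against the electronvolt scale typical of the field energy delivered per electron:

```python
m_e = 9.109e-31    # electron mass, kg (standard value, assumed input)
v_drift = 1e-3     # drift velocity from the text, m/s (1 mm/s)
eV = 1.602e-19     # joules per electronvolt

# Kinetic energy of one drifting electron, expressed in eV
ke_drift_eV = 0.5 * m_e * v_drift**2 / eV
print(ke_drift_eV)   # ~3e-18 eV, utterly negligible next to ~1 eV field energy
```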

So the energy current flows into the capacitor at light speed. Take the capacitor to be simple, just two parallel conductors separated by a dielectric composed of just a vacuum (free space has a permittivity, so this works). Once the energy goes along the conductors to the far end, it reflects back. The electric field adds to that from further inflowing energy, but most of the magnetic field is cancelled out since the reflected energy has a magnetic field vector curling the opposite way to the inflowing energy. (If you have a fully charged, ’static’ conductor, it contains an equilibrium with similar energy currents flowing in all possible directions, so the magnetic field curls all cancel out, leaving only an electric field as observed.)

The important thing is that the energy keeps going at light velocity in a charged conductor: it can’t ever slow down. This is important because it proves experimentally that static electric charge is identical to trapped electromagnetic field energy. If this can be taken to the case of an electron, it tells you what the core of an electron is (obviously, there will be additional complexity from the polarization of loops of virtual fermions created in the strong field surrounding the core, which will attenuate the radial electric field from the core as well as the transverse magnetic field lines, but not the polar radial magnetic field lines).

You can prove this if you discharge any conductor x metres long, charged to v volts with respect to ground, through a sampling oscilloscope. You get a square wave pulse which has a height of v/2 volts and a duration of 2x/c seconds. The apparently ’static’ energy of v volts in the capacitor plate is not static at all; at any instant, half of it, at v/2 volts, is going eastward at velocity c and half is going westward at velocity c. When you discharge it from any point, the energy already by chance headed towards that point immediately begins to exit at v/2 volts, while the remainder is going the wrong way and must proceed and reflect from one end before it exits. Thus, you always get a pulse of v/2 volts which is 2x metres long (2x/c seconds in duration), instead of a pulse at v volts which is x metres long (x/c seconds in duration), which is what you would expect if the electromagnetic energy in the capacitor were static and drained out at light velocity by all flowing towards the exit.
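The v/2 pulse of duration 2x/c falls straight out of a toy one-dimensional simulation of this picture, in which the ‘static’ charge is modelled as two counter-propagating energy currents, each at v/2, on a line of N cells (a sketch of the idea, not a full transmission-line solver):

```python
N = 100          # cells along the line (so x = N cell-widths, one step = dx/c)
v = 1.0          # charged voltage

right = [v / 2] * N   # energy current moving away from the exit end
left = [v / 2] * N    # energy current moving towards the exit end

output = []
for _ in range(3 * N):
    output.append(left[0])          # wave exits at the discharge end
    left = left[1:] + [right[-1]]   # far (open) end reflects right-movers back
    right = [0.0] + right[:-1]      # nothing re-enters at the matched exit end

# Flat pulse of v/2 for 2N steps (i.e. 2x/c seconds), then zero:
print(output[0], output[2 * N - 1], output[2 * N])
```

This prints 0.5 0.5 0.0: the output sits at v/2 for exactly 2N time steps (2x/c) and then drops to zero, reproducing the sampling oscilloscope observation.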

This was investigated by Catt, who used it to design the first crosstalk (glitch) free wafer-scale integrated memory for computers, winning several prizes for it. Catt welcomed me when I wrote an article on him for the journal Electronics World, but then bizarrely refused to discuss physics with me, while he complained that he was a victim of censorship. However, Catt published his research in IEEE and IEE peer-reviewed journals. The problem was not censorship, but his refusal to get into mathematical physics far enough to sort out the electron.

Maxwell’s model is wrong. Some calculations of quantum gravity based on a simple, empirically-based model (no ad hoc hypotheses) yield evidence (which needs to be independently checked) that the proper size of the electron is the black hole event horizon radius.

There is also the issue of a chicken-and-egg situation in QED where electric forces are mediated by exchange radiation. Here you have the gauge bosons being exchanged between charges to cause forces. The electric field lines between the charges have to therefore arise from the electric field lines of the virtual photons being continually exchanged.

How do you get an electric field to arise from neutral gauge bosons? It’s simply not possible. The error in the conventional thinking is that people incorrectly rule out the possibility that electromagnetism is mediated by charged gauge bosons. You can’t transmit charged photons one way because the magnetic self-inductance of a moving charge is infinite. However, charged gauge bosons will propagate in an exchange radiation situation, because they are travelling through one another in opposite directions, so the magnetic fields are cancelled out. It’s like a transmission line, where the infinite magnetic self-inductance of each conductor cancels out that of the other conductor, because each conductor is carrying equal currents in opposite directions.

Hence you end up with the conclusion that the electroweak sector of the Standard Model is in error: Maxwellian U(1) doesn’t describe electromagnetism properly. It seems that the correct gauge symmetry is SU(2) with three massless gauge bosons: positive and negatively charged massless bosons mediate electromagnetism and a neutral gauge boson (a photon) mediates gravitation. See Fig. 4.

Fig. 4: The SU(2) electrogravity mechanism. Think of two flak-jacket protected soldiers firing submachine guns towards one another, while from a great distance other soldiers (who are receding from the conflict) fire bullets in at both of them. They will repel because of the net outward force on them, due to successive impulses both from bullet strikes received on the sides facing one another, and from recoil as they fire bullets. The bullets hitting their backs have relatively smaller impulses since they are coming from large distances, and so, due to drag effects, their force will be nearly spent upon arrival (analogous to the redshift of radiation emitted towards us by the bulk of the receding matter, at great distances, in our universe). That explains the electromagnetic repulsion physically. Now think of the two soldiers as comrades surrounded by a mass of armed savages, approaching from all sides. The soldiers stand back to back, shielding one another’s backs, and fire their submachine guns outward at the crowd. In this situation, they attract, because of a net inward acceleration on them, pushing their backs towards one another, both due to the recoil of the bullets they fire, and from the strikes each receives from bullets fired in at them. When you add up the arrows in this diagram, you find that attractive forces between dissimilar unit charges have equal magnitude to repulsive forces between similar unit charges. This theory holds water!

This predicts the right strength of gravity, because the charged gauge bosons will cause the effective potential of those fields in radiation exchanges between similar charges throughout the universe (drunkard’s walk statistics) to multiply up the average potential between two charges by a factor equal to the square root of the number of charges in the universe. This is so because any straight line summation will on average encounter similar numbers of positive and negative charges as they are randomly distributed, so such a linear summation of the charges that gauge bosons are exchanged between cancels out. However, if the paths of gauge bosons exchanged between similar charges are considered, you do get a net summation. See Fig. 5.

Fig. 5:

Charged gauge boson mechanism, showing how the potential adds up, predicting the relatively intense strength (large coupling constant) of electromagnetism relative to gravity according to the path-integral Yang-Mills formulation. For gravity, the gravitons (like photons) are uncharged, so no adding up is possible. But for electromagnetism, the attractive and repulsive forces are explained by charged gauge bosons. Notice that massless charged electromagnetic radiation (i.e., charged particles going at light velocity) is forbidden in electromagnetic theory (on account of the infinite self-inductance created by the uncancelled magnetic field of such radiation!) only if the radiation is going solely in one direction, and this is obviously not the case for Yang-Mills exchange radiation, where the radiant power of the exchange radiation from charge A to charge B is the same as that from charge B to charge A (in situations of equilibrium, which quickly establish themselves). Where you have radiation going in opposite directions at the same time, the handedness of the curl of the magnetic field is such that the magnetic fields cancel completely, preventing the self-inductance issue. Therefore, although you can never radiate a charged massless radiation beam in one direction, such beams do radiate in two directions while overlapping. This is of course what happens with the simple capacitor consisting of conductors with a vacuum dielectric: electricity enters as electromagnetic energy at light velocity and never slows down. When the charging stops, the trapped energy in the capacitor travels in all directions, in equilibrium, so the magnetic fields cancel and cannot be observed. This is proved by discharging such a capacitor and measuring the output pulse with a sampling oscilloscope.

The price of the random-walk statistics needed to describe such a zig-zag summation (avoiding opposite charges!) is that the net force is not approximately 10^80 times the force of gravity between a single pair of charges (as it would be if you simply added up all the charges in a coherent way, like a line of aligned charged capacitors with linearly increasing electric potential along the line), but is the square root of that multiplication factor, on account of the zig-zag inefficiency of the sum, i.e., about 10^40 times gravity. Hence, the fact that equal numbers of positive and negative charges are randomly distributed throughout the universe makes electromagnetism only 10^40/10^80 = 10^-40 as strong as it would be if all the charges were aligned in a row like a row of charged capacitors (or batteries) in a series circuit. Since there are around 10^80 randomly distributed charges, electromagnetism, as multiplied up by the fact that charged massless gauge bosons are Yang-Mills radiation being exchanged between all charges (including all charges of similar sign), is 10^40 times gravity. You could picture this summation by the physical analogy of a lot of charged capacitor plates in space, with the vacuum as the dielectric between the plates. If the capacitor plates come with two opposite charges and are scattered all over the place at random, the average addition of potential works out as that between one pair of charged plates multiplied by the square root of the total number of pairs of plates. This is because of the geometry of the addition. Intuitively, you may incorrectly think that the sum must be zero because on average it will cancel out. However, it isn’t: it is like the diffusive drunkard’s walk, where the average distance travelled is equal to the average length of a step multiplied by the square root of the number of steps. If you average a large number of different random walks, because they will all have random net directions, the vector sum is indeed zero.
But for an individual drunkard’s walk, a net displacement does occur; this is the basis of diffusion. On average, gauge bosons spend as much time moving away from us as towards us while being exchanged between the charges of the universe, so the average effect of divergence is exactly cancelled by the average convergence, simplifying the calculation. This model also explains why electromagnetism is attractive between dissimilar charges and repulsive between similar charges (Fig. 5).
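The square-root scaling invoked above is easy to check numerically. Here is a minimal Monte Carlo sketch (my own illustration, not from the original articles) showing that the RMS net displacement of a random walk of N unit steps tends to the square root of N:

```python
import math
import random

# Monte Carlo sketch of the drunkard's-walk scaling invoked above:
# the RMS net displacement of n_steps random unit steps tends to sqrt(n_steps).
def rms_displacement(n_steps, n_trials=2000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        x = sum(rng.choice((-1, 1)) for _ in range(n_steps))  # one random walk
        total += x * x
    return math.sqrt(total / n_trials)

for n in (100, 400, 1600):
    print(n, rms_displacement(n), math.sqrt(n))  # RMS tracks sqrt(n) in each case
```

The same statistics underlie the claim that 10^80 randomly signed charges sum to a net potential only 10^40 times that of a single pair.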

Experimentally checkable consequences of this gravity mechanism, and consistency with known physics

Universal gravitational parameter, G

G = (3/4)H^2/(ρπe^3), derived in stages above, where e^3 is the cube of the base of natural logarithms (the correction factor due to the effects of redshift and density variation in spacetime), is a quantitative prediction. In the previous post here, the best observational inputs for the Hubble parameter H and the local density of the universe ρ were identified: ‘The WMAP satellite in 2003 gave the best available determination: H = 71 +/- 4 km/s/Mparsec = 2.3 x 10^-18 s^-1. Hence, if the present age of the universe is t = 1/H (as suggested from the 1998 data showing that the universe is expanding as R ~ t, i.e. no gravitational retardation, instead of the Friedmann-Robertson-Walker prediction for critical density of R ~ t^(2/3), where the 2/3 power is the effect of curvature/gravity in slowing down the expansion), then the age of the universe is 13,700 +/- 800 million years. … The Hubble space telescope was used to estimate the number of galaxies in a small solid angle of the sky. Extrapolating this to the whole sky, we find that the universe contains approximately 1.3 x 10^11 galaxies, and to get the density right for our present time after the big bang we use the average mass of a galaxy at the present time to work out the mass of the universe. Taking our Milky Way as the yardstick, it contains about 10^11 stars, and assuming that the sun is a typical star, the mass of a star is 1.9889 x 10^30 kg (the sun has 99.86% of the mass of the solar system). Treating the universe as a sphere of uniform density and radius R = c/H, with the above-mentioned value for H we obtain a density for the universe at the present time (~13,700 million years) of about 2.8 x 10^-27 kg/m^3.’

Putting H = 2.3 x 10^-18 s^-1 and ρ = 2.8 x 10^-27 kg/m^3 into G = (3/4)H^2/(ρπe^3) gives a result of G = 2.2 x 10^-11 m^3 kg^-1 s^-2, which is one third of the experimentally determined value of G = 6.673 x 10^-11 m^3 kg^-1 s^-2. This factor-of-3 error is within the error bars for the estimates of the density, because of uncertainties in estimating the average mass of a galaxy. To put the accuracy of this prediction into perspective, try reading the statement by Eddington (quoted at the top of this blog post): how many other theories, based entirely on observationally verified facts like Hubble’s law and Newton’s laws, predict the strength of gravity? Alternatively, compare it to the classical (and incorrect) ‘critical density’ prediction from general relativity (which ignores the mechanism of gravitation), which rearranges to give a formula for G which is e^3/2, or about 10 times, bigger; thus the critical density is 3.3 times bigger than the experimental data.
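For readers who want to reproduce the arithmetic, here is a quick numerical check of the prediction, using the observational inputs quoted above:

```python
import math

# Numerical check of G = (3/4) H^2 / (rho * pi * e^3), using the
# observational inputs quoted in the text above.
H = 2.3e-18      # Hubble parameter, s^-1
rho = 2.8e-27    # estimated density of the universe, kg/m^3

G_predicted = 0.75 * H ** 2 / (rho * math.pi * math.e ** 3)
print(G_predicted)               # ~2.2e-11 m^3 kg^-1 s^-2
print(6.673e-11 / G_predicted)   # ~3: the discrepancy factor discussed above
```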

This is actually an unfair comparison, because the rough estimate for the density is about 3 times too high. Most astronomers suggest that the observable density is 5-20% of the critical density, i.e., 10% with a factor of 2 error limit. This would put the density at ρ = 10^-27 kg/m^3, and our prediction is then exact, within a factor of 2 experimental error limit. The abundance of dark matter is not experimentally measured. There is some observational evidence for dark matter, and theoretically there are some solid reasons why there should be such matter in a dark, non-luminous form (neutrinos have mass, as do black holes). The mainstream takes the critical density formula from general relativity and the measured density for luminous matter, and uses the disagreement to claim that the difference is dark matter. That argument is weak, because general relativity is in error for cosmological purposes through ignoring quantum gravity effects which become important on large scales in an expanding universe (i.e., redshift of gravitons weakening the force of gravity over large distances, the nature of the Yang-Mills exchange-radiation dynamical mechanism for gravity in which gravity is a result of radiation exchange with the other masses in the expanding universe, etc.). Another argument for a lot of dark matter is the flattening of galactic rotation curves; however, Cooperstock and Tieu have explained the galactic rotation ‘evidence’ as being due not to dark matter but to a general relativity effect which was not taken into account by the people who originally applied Newtonian dynamics to analyse galactic rotation:

‘One might be inclined to question how this large departure from the Newtonian picture regarding galactic rotation curves could have arisen since the planetary motion problem is also a gravitationally bound system and the deviations there using general relativity are so small. The reason is that the two problems are very different: in the planetary problem, the source of gravity is the sun and the planets are treated as test particles in this field (apart from contributing minor perturbations when necessary). They respond to the field of the sun but they do not contribute to the field. By contrast, in the galaxy problem, the source of the field is the combined rotating mass of all of the freely-gravitating elements themselves that compose the galaxy.’

Professor Sean Carroll writes a lot about cosmology, and is the author of a very useful book on general relativity. However, in writing about the discovery of direct evidence for dark matter on his blog post http://cosmicvariance.com/2006/08/21/dark-matter-exists/ he does perhaps cause confusion. He starts by stating, without evidence, that 5% of the universe is ordinary matter, 25% dark matter and 70% dark energy. He then explains that the direct evidence for dark matter proves that mainstream cosmologists are not fooling themselves. The problem is that the direct evidence for dark matter doesn’t say how much dark matter there is: it is not quantitative. It does not allow any confirmation of the theoretical guesswork behind his statement that there is 5 times as much dark matter as visible matter. He does then go on to discuss whether some kind of ‘modified Newtonian dynamics’, rather than dark matter, could resolve the problems, and he writes that he would prefer an objective resolution of that type rather than, in effect, inventing ‘dark matter’ epicycles as convenient fixes which cannot be readily checked even in principle. But it is all wishy-washy, because he does not state a definite, concrete proposal which addresses the quantum gravity facts, such as this mechanism.

Small size of the cosmic background radiation ripples

The prediction of gravity by this mechanism appears to be accurate to within the experimental data, which are accurate to within a factor of approximately two. The second major prediction of this mechanism is the small size of the sound-like ripples in the angular distribution of the cosmic background radiation, which is the earliest directly observable radiation in the universe. Its emitted power peaked at 370,000 years after the big bang, when the temperature was 3,500 Kelvin, and it has since been redshifted, or ’stretched out’, by the cosmic expansion, which reduces its temperature to 2.7 Kelvin.
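As a quick consistency check on the temperatures just quoted, using the standard result that the radiation temperature falls as 1/(1 + z) with redshift z:

```python
# Consistency check of the quoted temperatures: cosmic expansion stretches
# wavelengths, so the radiation temperature falls as 1/(1 + z).
T_emitted = 3500.0   # K, at ~370,000 years after the big bang (figure quoted above)
T_now = 2.7          # K, today
redshift = T_emitted / T_now - 1
print(redshift)      # ~1300, the implied redshift of the last-scattering radiation
```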

Because radiation and matter were in thermal equilibrium (an ionised gas) at the time the cosmic background radiation was emitted, the radiation carries an imprint of the nature of the matter at that time. The cosmic background radiation was found to be of extremely uniform temperature, far more uniform than expected at 370,000 years after the big bang, when conventional models of galaxy formation implied that there should have been big ripples to indicate the ’seeding’ of lumps that could become stars and galaxies.

This is called the ‘horizon problem’ or ‘isotropy problem’, because the microwave background radiation from opposite directions in the sky is similar to within 0.01%, and in the mainstream models gravity always has the same strength and would have caused bigger non-uniformities within 370,000 years of the big bang. A mainstream attempt to solve this problem is ‘inflation’ whereby the universe expanded at a faster than light speed for a small fraction of a second after the big bang, making the density of the universe uniform all over the sky before gravity had a chance to magnify irregularities in the expansion process.

This ‘horizon problem’ is closely related to the ‘flatness problem’, which is the issue that in general relativity the universe, depending on its density, has three possible geometries: open, flat, and closed. At the critical density it will be flat, with gravitation causing its radius to increase in proportion to the two-thirds power of time after the big bang. The mainstream consensus was that the universe was probably flat, which means it is at the critical density, five to twenty times more than the observable density. The flatness problem is that if the universe was not completely flat, but of slightly different density across the universe, then the variation in density would be greatly magnified by the expansion of the universe and would be obvious today. The absence of any such large anisotropy is widely believed, by the mainstream, to be evidence for a flat geometry.

The mechanism for gravity solves these problems. It solves the flatness problem by showing that the critical density (distinguishing the open, flat, and closed solutions to the Friedmann-Robertson-Walker metric of general relativity, which is applied to cosmology) is false for ignoring quantum gravity effects: there are no long-range gravitational influences in an expanding universe, because the graviton exchange radiation of quantum gravity becomes severely redshifted like light, and cannot produce curvature effects like forces over large distances. So the whole existing mainstream structure of using general relativity to work out cosmology falls apart.

The horizon problem, as to why the cosmic background is so smooth, is solved by this model in an interesting way. It is very simple. The gravity parameter G given by the relationship above is directly proportional to the age of the universe: the older the universe gets, the stronger gravity gets. At 370,000 years after the big bang, G was about 40,000 times smaller than it is now, and at earlier times it was smaller still. The ripples in the cosmic background radiation are extremely small because the gravitational force was so small.
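A quick check of the quoted factor, using the age figures given earlier in this post:

```python
# Check of the claim that G (proportional to age, on this mechanism) was
# roughly 40,000 times smaller at the time the background radiation was emitted.
t_now = 13.7e9    # present age of the universe, years
t_cmb = 370.0e3   # age at emission of the cosmic background radiation, years
ratio = t_now / t_cmb
print(ratio)      # ~37,000, i.e. roughly the factor of 40,000 quoted above
```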

As proved earlier, the Hubble acceleration is a = dv/dt = H^2 R = H^2 ct, where t is the time past when the light was emitted, but it can be set equal to the age of the universe for our purposes here. Hence the outward force F = ma = mH^2 ct is proportional to the age of the universe, as is the equal inward force according to Newton’s 3rd law of motion.

We can also see the proportionality to time in the result G = (3/4)H^2/(ρπe^3), since H^2 = 1/t^2 and ρ is the mass of the universe divided by its volume (which is proportional to the cube of the radius, i.e., the cube of the product ct), so this formula implies that G is proportional to (1/t^2)/(1/t^3) = t, which is of course directly proportional to time.
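Substituting H = 1/t and ρ = M/((4/3)π(ct)^3) into the formula reduces it algebraically to G = tc^3/(e^3 M), manifestly proportional to t. A small numerical sketch (the mass value M below is an arbitrary placeholder of my own; it cancels in the ratio):

```python
import math

# G = (3/4)H^2/(rho*pi*e^3) with H = 1/t and rho = M / ((4/3)*pi*(c*t)^3)
# reduces algebraically to G = t*c^3/(e^3*M), i.e. G proportional to t.
# M is an arbitrary placeholder mass; it cancels in the ratio below.
def G_of_t(t, M=3e52, c=3e8):
    H = 1.0 / t
    rho = M / ((4.0 / 3.0) * math.pi * (c * t) ** 3)
    return 0.75 * H ** 2 / (rho * math.pi * math.e ** 3)

t1 = 4.3e17                                      # ~13.7 billion years, in seconds
print(round(G_of_t(2 * t1) / G_of_t(t1), 6))     # 2.0: doubling the age doubles G
```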

Dirac did not have a mechanism for a time-dependence of G, but he guessed that G might vary. Unfortunately, lacking this mechanism, Dirac guessed that G was falling with time when it is actually increasing, and he did not realise that it is not just the strength constant for gravity that varies: all the coupling constants vary in the same way. This disproves Edward Teller’s claim (based on just G varying) that if it were true, the sun’s radiant power would vary with time in a way incompatible with life (e.g., he calculated that the oceans would have been literally boiling during the Cambrian era if Dirac’s assumption were true).

It also disproves, in the same way, another claim that G is constant, based on nucleosynthesis in the big bang. The argument here is that nuclear fusion in stars and in the big bang depends on gravity to cause the basic compressive force, causing electrically charged positive particles to collide hard enough to break through the ‘barrier’ caused by the repulsive electric Coulomb force, so that the short-ranged strong attractive force can then fuse the particles together. The big bang nucleosynthesis model correctly predicts the observed abundances of unfused hydrogen and fusion products like helium, assuming that G is constant. Because the result is correct, it is often claimed (even by students of Professor Carroll) that G must have had a value at 1 minute after the big bang that is no more than 10% different from today’s value for G. The obvious fallacy here is that both electromagnetism and gravity vary in the same way. If you double both the Coulomb force and the gravity force, the fusion rate doesn’t vary, because the Coulomb force is opposing fusion while gravity is causing fusion, and both are inverse-square forces. The effect of G varying is not manifested in a change to the fusion rate in the big bang or in a star, because the corresponding change in the Coulomb force offsets it.

Louise Riofrio has investigated the dimensionally correct relationship GM = tc^3, which was discussed earlier on this blog here, here and here, where M is the mass of the universe and t is its age. This is algebraically equivalent to G = (3/4)H^2/(ρπ), i.e., the gravity prediction without the dimensionless redshift-density correction factor of e^3. It is interesting that it can be derived by energy-based methods, as first pointed out by John Hunter, who suggested setting E = mc^2 = mMG/R, i.e., setting rest-mass energy equal to gravitational potential energy.

Since the electromagnetic charge of the electron is massless bosonic energy trapped as a black hole, the gravitational potential energy would have to be equal, to keep it trapped.

This rearranges to give the equations of Riofrio and Rabinowitz, although physically it is obviously missing some dimensionless multiplication constant, because the gravitational potential energy cannot be E = mMG/R, where R is the radius of the universe. It is evident that this equation describes the gravitational potential energy which would be released if the universe were (somehow) to collapse. However, the average radial distance of the mass of the universe M will be less than the radius of the universe R. This brings up the density variation problem: gravitons and light both go at velocity c, so we see them coming from times in the past when the density was greater (density is proportional to the reciprocal of the cube of the age of the universe, due to expansion). So you cannot assume constant density and get a simple solution. You really also need to take account of the redshift of gravitons from the greatest distances, or the density will cause you problems by tending towards infinity at radii approaching R. Hence, this energy-based approach to gravity is analogous to the physical mechanism described above. See also the derivation, by mathematician Dr Thomas R. Love of California State University, of Kepler’s law at http://nige.wordpress.com/2006/09/30/keplers-law-from-kinetic-energy/ which demonstrates that you can indeed treat problems generally by assuming that the rest-mass energy of the spinning, otherwise static fundamental particle, or the kinetic energy of the orbiting body, is being trapped by gravitation.

This leads to a concrete basis for John Hunter’s suggestions, published as a notice in the 12 July 2003 issue of New Scientist, page 17: he suggested that if E = mc^2 = mMG/R, then the effective value of G depends on distance, since G = Rc^2/M, which is algebraically equivalent to the expression we obtained above for the gravity mechanism, and published in the article ‘Electronic Universe, Part 2′, Electronics World, April 2003 (excluding the suggested e-cube correction for density variation with distance and graviton redshift, which was published in a letter to Electronics World in 2004). Hunter’s July 2003 notice in New Scientist indicated that this solves the horizon problem of cosmology (thus not requiring the speculative mainstream extravagances of Alan Guth’s inflation theory). Hunter pointed out in his notice that his E = mc^2 = mMG/R, when applied to the earth, should include another term for the influence of the nearby mass of the sun, leading to E = mc^2 = mMG/R + mM’G/r, where m is the mass of the Earth, M is the mass of the universe, R is the radius of the universe (which is inaccurate, as pointed out above, since the average distance of the mass of the surrounding universe can hardly be the radius of the universe, but must be a smaller distance, leading to the problem of the time-variation of density and thus also the redshift of the gravitons causing gravity), M’ is the mass of the Sun, and r is the distance of the Earth from the sun. Hunter argued that since r varies and is 3.4% bigger in July than in January (when Earth is closest to the sun), this leads to a definite experiment to test the theory: ‘Prediction: the weight of objects on the Earth will vary by 3.3 parts in 10 billion over a year, as the Earth to Sun distance changes.’ (My only problem with this prediction is simply that it is virtually impossible to test, just like the ‘not even wrong’ Planck-scale unification supersymmetry ‘prediction’.
Because the Earth is constantly vibrating due to seismic effects, you can never really hope to make such accurate measurements of weight. Anyone who has tried to make measurements of masses beyond a few significant figures for quantitative chemical analysis knows how difficult such a mass measurement is: making sensitive instruments is a problem, but the increased sensitivity multiplies up background vibrations so the instrument just becomes a seismograph. However, maybe some space-based precise measurements with clever experimentalist/observationist tricks will one day be able to check this to some extent.)
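For what it is worth, Hunter’s quoted figure can be reproduced from his own formula: the Sun term mM’G/r, as a fraction of E = mc^2, is GM’/(rc^2), roughly 10^-8, and a 3.4% annual change in r then gives a weight variation of a few parts in 10 billion. A rough check, using my own rounded constants:

```python
# Rough check of Hunter's figure of 3.3 parts in 10 billion, using his
# E = mc^2 = mMG/R + mM'G/r: the Sun term, as a fraction of mc^2, is
# G*M_sun/(r*c^2), and r changes by ~3.4% over the year.
G = 6.674e-11      # m^3 kg^-1 s^-2
M_sun = 1.989e30   # kg
r_mean = 1.496e11  # m, mean Earth-Sun distance
c = 2.998e8        # m/s

sun_term = G * M_sun / (r_mean * c ** 2)  # dimensionless, ~1e-8
variation = sun_term * 0.034              # effect of the 3.4% annual change in r
print(variation)   # ~3.4e-10, i.e. a few parts in 10 billion
```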

Electric force constant (permittivity), Hubble parameter, etc.

The proof [above] predicts gravity accurately, with G = (3/4)H^2/(πρe^3). The electromagnetic force (discussed above and in the April 2003 Electronics World article) in quantum field theory (QFT) is due to ‘virtual photons’, which cannot be seen except via the forces they produce. The mechanism is continuous radiation from spinning charges; the centripetal acceleration a = v^2/r causes the energy emission, which is naturally in exchange equilibrium between all similar charges, like the exchange of thermal radiation at constant temperature. This exchange causes a ‘repulsion’ force between similar charges, due to their recoiling apart as they exchange energy (two people firing guns at each other recoil apart). In addition, an ‘attraction’ force occurs between opposite charges, which block energy exchange and are pushed together by energy being received from other directions (shielding-type attraction). The attraction and repulsion forces are equal for similar net charges. The net inward radiation pressure that drives electromagnetism is similar to gravity, but the addition is different. The electric potential adds up with the number of charged particles, but only in a diffuse, scattering-type way like a drunkard’s walk, because straight-line additions are cancelled out by the random distribution of equal numbers of positive and negative charge. The addition only occurs between similar charges, and is cancelled out on any straight line through the universe. The correct summation is therefore statistically equal to the square root of the number of charges of either sign, multiplied by the gravity force proved above.

Hence F(electromagnetism) = mMGN^(1/2)/r^2 = q_1 q_2/(4πεr^2) (Coulomb’s law), where G = (3/4)H^2/(πρe^3) as proved above, and N is, as a first approximation, the mass of the universe (4πR^3 ρ/3 = 4π(c/H)^3 ρ/3) divided by the mass of a hydrogen atom. This assumes that the universe is hydrogen. In fact it is 90% hydrogen by atomic abundance as a whole, although less near stars (only 70% of the solar system is hydrogen, due to fusion of hydrogen into helium, etc.). Another problem with this way of calculating N is that we assume the fundamental charges to be electrons and protons, when in fact protons contain two up quarks (each +2/3) and one down quark (-1/3), so there are twice as many fundamental particles. However, the quarks remain close together inside a nucleon and behave for most electromagnetic purposes as a single fundamental charge. With these approximations, the formulae above yield a prediction of the strength factor ε in Coulomb’s law of:

ε = q_e^2 e^3 [ρ/(12π m_e^2 m_proton H c^3)]^(1/2) F/m,

where e = 2.718… is the base of natural logarithms, not the electronic charge q_e.

Using old data, as in the letter published in Electronics World some years ago which gave the G formula: rearranging the above formula for ε to yield ρ, and rearranging also G = (3/4)H^2/(πρe^3) to yield ρ, allows us to set both results for ρ equal and thus to isolate a prediction for H, which can then be substituted into G = (3/4)H^2/(πρe^3) to give a prediction for ρ which is independent of H:

H = 16π^2 G m_e^2 m_proton c^3 ε^2/(q_e^4 e^3) = 2.3391 x 10^-18 s^-1, or 72.2 km s^-1 Mpc^-1, so 1/H = t = 13,550 million years. This is checkable against the WMAP result that the universe is 13,700 million years old; the prediction is well within the experimental error bar.

ρ = 192π^3 G m_e^4 m_proton^2 c^6 ε^4/(q_e^8 e^9) = 9.7455 x 10^-28 kg/m^3.

Again, these predictions of the Hubble constant and the density of the universe from the force mechanisms assume that the universe is made of hydrogen, and so are first approximations. However they clearly show the power of this mechanism-based predictive method.
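The two closed-form results above can be evaluated directly. The sketch below uses my own rounded values for the constants (the original calculation used slightly older data, hence small differences in the last digits):

```python
import math

# Evaluating the two closed-form predictions quoted above:
#   H   = 16 pi^2 G m_e^2 m_p c^3 eps^2 / (q_e^4 e^3)
#   rho = 192 pi^3 G m_e^4 m_p^2 c^6 eps^4 / (q_e^8 e^9)
G = 6.673e-11       # gravitational constant, m^3 kg^-1 s^-2
m_e = 9.109e-31     # electron mass, kg
m_p = 1.673e-27     # proton mass, kg
c = 2.998e8         # speed of light, m/s
eps = 8.854e-12     # permittivity of free space, F/m
q_e = 1.602e-19     # electronic charge, C
pi = math.pi

H = 16 * pi ** 2 * G * m_e ** 2 * m_p * c ** 3 * eps ** 2 / (q_e ** 4 * math.e ** 3)
rho = (192 * pi ** 3 * G * m_e ** 4 * m_p ** 2 * c ** 6 * eps ** 4
       / (q_e ** 8 * math.e ** 9))

print(H)     # ~2.34e-18 s^-1, i.e. ~72 km/s/Mpc
print(rho)   # ~9.7e-28 kg/m^3
```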

Particle mass mechanism. The ‘polarized vacuum’ shell exists between the IR and UV cutoffs. We can work out the shell’s outer radius either by using the IR cutoff energy as the collision energy to calculate the distance of closest approach in a particle scattering event (like Coulomb scattering, which predominates at low energies), or by using Schwinger’s formula for the minimum static electric field strength which is needed to cause pair production of fermion-antifermion pairs out of the Dirac sea in the vacuum. The outer radius of the polarized vacuum around a unit charge by either calculation is of the order of 1 fm. This scheme doesn’t just explain and predict masses; it also replaces supersymmetry with a proper physical, checkable prediction of what happens to Standard Model forces at extremely high energy. The following text is an extract from an earlier blog post here:

‘The pairs you get produced by an electric field above the IR cutoff, corresponding to 10^18 V/m in strength, i.e., very close (<1 fm) to an electron, have direct evidence from Koltick’s experimental work on polarized vacuum shielding of core electric charge, published in PRL in 1997. Koltick et al. found that electric charge increases by 7% in 91 GeV scattering experiments, which is caused by seeing through part of the polarized vacuum shield (observable electric charge is independent of distance only beyond 1 fm from an electron, and it increases as you get closer to the core of the electron, because you have less polarized dielectric between you and the electron core as you get closer, so less of the electron’s core field gets cancelled by the intervening dielectric).
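The ~7% figure quoted in that extract can be cross-checked against the standard running of the fine-structure constant, from 1/137.036 at low energy to roughly 1/129 at the Z mass (91 GeV); the value 128.9 below is my own approximate figure:

```python
# Cross-check of the ~7% figure: the measured running of the fine-structure
# constant from 1/137.036 at low energy to roughly 1/128.9 at the Z mass
# (91 GeV). The value 128.9 is my approximate figure for 1/alpha at m_Z.
alpha_inv_low = 137.036
alpha_inv_mz = 128.9
increase = alpha_inv_low / alpha_inv_mz - 1
print(increase)   # ~0.06, i.e. a ~6-7% increase in effective charge coupling
```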

‘There is no evidence whatsoever that gravitation produces pairs which shield gravitational charges (masses, presumably some aspect of a vacuum field such as Higgs field bosons). How can gravitational charge be renormalized? There is no mechanism for pair production whereby the pairs will become polarized in a gravitational field. For that to happen, you would first need a particle which falls the wrong way in a gravitational field, so that the pair of charges becomes polarized. If they are both displaced in the same direction by the field, they aren’t polarized. So for mainstream quantum gravity ideas to work, you have to have some new particles which are capable of being polarized by gravity, like Well’s

‘There is no evidence for this. Actually, in quantum electrodynamics, both electric charge and mass are renormalized charges, with only the renormalization of electric charge being explained by the picture of pair production forming a vacuum dielectric which is polarized, thus shielding much of the charge and allowing the bare core charge to be much greater than the observed value. However, this is not a problem. The renormalization of mass is similar to that of electric charge, which strongly suggests that mass is coupled to an electron by the electric field, and not by the gravitational field of the electron (which is smaller by many orders of magnitude). Therefore mass renormalization is purely due to electric charge renormalization, not a physically separate phenomenon that involves quantum gravity on the basis that mass is the unit of gravitational charge in quantum gravity.

‘Finally, supersymmetry is totally flawed. What is occurring in quantum field theory seems to be physically straightforward at least regarding force unification. You just have to put conservation of energy into quantum field theory to account for where the energy of the electric field goes when it is shielded by the vacuum at small distances from the electron core (i.e., high energy physics).

‘The energy sapped from the gauge boson mediated field of electromagnetism is being used. It’s being used to create pairs of charges, which get polarized and shield the field. This simple feedback effect is obviously what makes it hard to fully comprehend the mathematical model which is quantum field theory. Although the physical processes are simple, the mathematics is complex and isn’t derived in an axiomatic way.

‘Now take the situation where you put N electrons close together, so that their cores are very nearby. What will happen is that the surrounding vacuum polarization shells of the electrons will overlap. The electric field is N times stronger, so pair production and vacuum polarization are N times stronger. So the shielding of the polarized vacuum is N times stronger! This means that an observer more than 1 fm away will see only the same electronic charge as that given by a single electron. Put another way, the additional charges will cause additional polarization which cancels out the additional electric field!

‘This has three remarkable consequences. First, the observer at a long distance (>1 fm), who knows from high-energy scattering that there are N charges present in the core, will see only 1 unit of charge at low energy. Therefore, that observer will deduce an effective electric charge which is fractional, namely 1/N, for each of the particles in the core.

‘Second, the Pauli exclusion principle prevents two fermions from sharing the same quantum numbers (i.e., sharing the same space with the same properties), so when you force two or more electrons together, they are forced to change their properties (most usually at low pressure it is the quantum number for spin which changes so adjacent electrons in an atom have opposite spins relative to one another; Dirac’s theory implies a strong association of intrinsic spin and magnetic dipole moment, so the Pauli exclusion principle tends to cancel out the magnetism of electrons in most materials). If you could extend the Pauli exclusion principle, you could allow particles to acquire short-range nuclear charges under compression, and the mechanism for the acquisition of nuclear charges is the stronger electric field which produces a lot of pair production allowing vacuum particles like W and Z bosons and pions to mediate nuclear forces.

‘Third, the fractional charges seen at low energy would indicate directly how much of the electromagnetic field energy is being used up in pair production effects, and referring to Peter Woit’s discussion of weak hypercharge on page 93 of the U.K. edition of Not Even Wrong, you can see clearly why the quarks have the particular fractional charges they do. Chiral symmetry, whereby electrons and quarks exist in two forms with different handedness and different values of weak hypercharge, explains it.

‘The right handed electron has a weak hypercharge of -2. The left handed electron has a weak hypercharge of -1. The left handed down quark (with observable low-energy electric charge of -1/3) has a weak hypercharge of 1/3, while the right handed down quark has a weak hypercharge of -2/3.

‘It’s totally obvious what’s happening here. What you need to focus on is the hadron (meson or baryon), not the individual quarks. The quarks are real, but their electric charges, as implied from low energy physics considerations, are totally fictitious for trying to understand an individual quark (which can’t be isolated anyway, because that takes more energy than making a pair of quarks). The shielded electromagnetic charge energy is used in weak and strong nuclear fields, and is being shared between them. It all comes from the electromagnetic field. Supersymmetry is false because at high energy, where you see through the vacuum, you are going to arrive at the unshielded electric charge from the core, and there will be no mechanism (pair production phenomena) at that energy, beyond the UV cutoff, to power nuclear forces. Hence, at the usually assumed so-called Standard Model unification energy, nuclear forces will drop towards zero, and electric charge will increase towards a maximum (because the electron charge is then completely unshielded, with no intervening polarized dielectric).

‘It’s easy to calculate the energy density of an electric field (Joules per cubic metre) as a function of the electric field strength. This is done when electric field energy is stored in a capacitor. In the electron, the shielding of the field by the polarized vacuum will tell you how much energy is being used by pair production processes in any shell around the electron you choose. See page 70 of

http://arxiv.org/abs/hep-th/0510040 for the formula from quantum field theory which relates the electric field strength above the IR cutoff to the collision energy. (The collision energy is easily translated into distances from the Coulomb scattering law for the closest approach of two electrons in a head-on collision, although at higher energy collisions things will be more complex and you need to allow for the electric charge to increase, as discussed already, instead of using the low energy electronic charge. The assumption of perfectly elastic Coulomb scattering will also need modification, leading to somewhat bigger distances than otherwise obtained, due to inelastic scatter contributions.) The point is, you can make calculations from this mechanism for the amount of energy being used to mediate the various short range forces. This allows predictions and more checks. It’s totally tied down to hard facts, anyway. If for some reason it’s wrong, it won’t be someone’s crackpot pet theory, but it will indicate a deep problem between the conservation of energy in gauge boson fields, and the vacuum pair production and polarization phenomena, so something will be learned either way.

‘To give an example from

http://nige.wordpress.com/2006/10/20/loop-quantum-gravity-representation-theory-and-particle-physics/, there is evidence that the bare core charge of the electron is about 137.036 times the shielded charge observed at all distances beyond 1 fm from an electron. Hence the amount of electric charge energy being used for pair production (loops of virtual particles) and their polarization within 1 fm from an electron core is 137.036 - 1 = 136.036 times the electric charge energy of the electron experienced at large distances. This figure is the reason why the short ranged strong nuclear force is so much stronger than electromagnetism.’
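The arithmetic quoted above can be made explicit. This is only a sketch of the post's own claim (that the shielding factor equals the inverse fine structure constant, 137.036), not an independent derivation:

```python
# Sketch of the shielding arithmetic claimed above: if the bare core
# charge is 1/alpha = 137.036 times the shielded charge seen beyond
# 1 fm, then the charge energy attributed to pair production and
# polarization is the difference between the two.
alpha_inverse = 137.036      # inverse fine structure constant (low-energy value)
shielded_charge = 1.0        # observed electron charge, in units of e
bare_core_charge = alpha_inverse * shielded_charge

# Energy used for pair production/polarization within 1 fm, in units
# of the long-range electric charge energy (per the post's claim):
absorbed = bare_core_charge - shielded_charge
print(absorbed)  # 136.036
```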

‘Quantum gravity is supposed - by the mainstream - to only affect general relativity on extremely small distance scales, i.e., extremely strong gravitational fields.

‘According to the uncertainty principle, for virtual particles acting as gauge bosons in a quantum field theory, their energy is related to their duration of existence according to: (energy)*(time) ~ h-bar.

‘Since time = distance/c,

‘(energy)*(distance) ~ c*h-bar.

‘Hence,

‘(distance) ~ c*h-bar/(energy)

‘Very small distances therefore correspond to very big energies. Since gravitons capable of graviton-graviton interactions (photons don’t interact with one another, for comparison) are assumed to mediate quantum gravity, the quantum gravity theory in its simplest form is non-renormalizable, because at small distances the gravitons would have very great energies and be strongly interacting with one another, unlike the photon force mediators in QED, where renormalization works. So the whole problem for quantum gravity has been renormalization, assuming that gravitons do indeed cause gravity (they’re unobserved). This is where string theory goes wrong, in solving a ‘problem’ which might not even be real, by coming up with a renormalizable quantum gravity theory based on gravitons, which they then hype as being the ‘prediction of gravity’.

‘The correct thing to do is to first ask how renormalization works in gravity. In the standard model, renormalization works because there are different charges for each force, so that the virtual charges will become polarized in a field around a real charge, affecting the latter and thus causing renormalization, ie, the modification of the observable charge as seen from great distances (low energy interactions) from that existing near the bare core of the charge at very short distances, well within the pair production range (high energy interactions).

‘The problem is that gravity has only one type of ‘charge’, mass. There’s no anti-mass, so in a gravitational field everything falls one way only, even antimatter. So you can’t get polarization of virtual charges by a gravitational field, even in principle. This is why renormalization doesn’t make sense for quantum gravity: you can’t have a different bare core (high energy) gravitational mass from the long range observable gravitational mass at low energy, because there’s no way that the vacuum can be polarized by the gravitational field to shield the core.

‘This is the essential difference between QED, which is capable of vacuum polarization and charge renormalization at high energy, and gravitation which isn’t.

‘However, in QED there is renormalization of both electric charge and the electron’s inertial mass. Since by the equivalence principle, inertial mass = gravitational mass, it seems that there really is evidence that mass is renormalizable, and the effective bare core mass is higher than that observed at low energy (great distances) by the same ratio that the bare core electric charge is higher than the screened electronic charge as measured at low energy.

‘This implies (because gravity can’t be renormalized by the effects of polarization of charges in a gravitational field) that the source of the renormalization of electric charge and of the electron’s inertial mass in QED is that the mass of an electron is external to the electron core, and is being associated to the electron core by the electric field of the core. This is why the shielding which reduces the effective electric charge as seen at large distances, also reduces the observable mass by the same factor. In other words, if there was no polarized vacuum of virtual particles shielding the electron core, the stronger electric field would give it a similarly larger inertial and gravitational mass.’
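The energy-distance relation (distance) ~ c*h-bar/(energy) quoted above is easy to evaluate numerically. The 0.1 GeV example energy below is an assumption for illustration (roughly the pion rest energy), chosen because it lands on the ~1 fm nuclear scale discussed in this post:

```python
# Sketch: range of a virtual quantum from the uncertainty-principle
# relation quoted above, r ~ c*hbar/E. Constants are CODATA values.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
eV = 1.602176634e-19     # joules per electron-volt

def range_for_energy(E_joules):
    """Rough range of a virtual particle of energy E: r ~ c*hbar/E."""
    return c * hbar / E_joules

# An assumed example energy of ~0.1 GeV gives a range of order 1 fm,
# the nuclear force scale mentioned in the post.
r = range_for_energy(0.1e9 * eV)
print(r)   # ~2e-15 m
```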

Experimental confirmation of the redshift of gauge boson radiation

All the quantum field theories of fundamental forces (the standard model) are Yang-Mills, in which forces are produced by exchange radiation.

The mainstream assumes that quantum gravity will turn out similarly. Hence, they assume that gravity is due to exchange of gravitons between masses (quantum gravity charges). In the lab, you can’t move charges apart at relativistic speeds and measure the reduction in Coulomb’s law due to the redshift of exchange radiation (photons in the case of Coulomb’s law, assuming current QED is correct), but the principle is there. Redshift of gauge boson radiation reduces its energy and hence the coupling constant for the interaction. In effect, redshift by the Hubble law means that forces drop off faster than the inverse-square law even at low energy; the additional decrease, beyond the geometric divergence of field lines (or exchange radiation divergence), comes from redshift of the exchange radiation, whose energy is proportional to its frequency after redshift, E = hf. This is because the momentum carried by radiation is p = E/c = hf/c. Any reduction in frequency f therefore reduces the momentum imparted by a gauge boson, and this reduces the force produced by a stream of gauge bosons.
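The momentum argument above reduces to one line of arithmetic. The emitted frequency below is an arbitrary assumed value; only the ratio matters:

```python
# Sketch of the momentum argument above: a gauge boson emitted at
# frequency f_emit arrives redshifted by a factor (1+z), so its
# momentum p = h*f/c is reduced by the same factor.
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s

def received_momentum(f_emit, z):
    f_obs = f_emit / (1.0 + z)   # redshifted frequency
    return h * f_obs / c

p0 = received_momentum(1e15, 0.0)   # no redshift
p1 = received_momentum(1e15, 1.0)   # z = 1 halves frequency, hence momentum
print(p1 / p0)  # 0.5
```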

Therefore, in the universe all forces between receding masses should, according to Yang-Mills quantum field theory (where forces are due to the exchange of gauge boson radiation between charges), suffer a bigger fall than the inverse square law. So, where the redshift of visible light radiation is substantial, the accompanying redshift of the exchange radiation that causes gravitation will also be substantial, weakening long-range gravity.

When you check the facts, you see that the role of ‘cosmic acceleration’ as produced by dark energy (the cosmological constant in general relativity) is designed to weaken the effect of long-range gravitation, by offsetting the assumed (but fictional!) long-range gravity that slows expansion down at high redshifts.

In other words, the correct explanation according to current mainstream ideas about quantum field theory is that the 1998 supernovae results, showing that distant supernovae aren’t slowing down, are due to a weakening of gravity caused by the redshift and accompanying energy loss E = hf and momentum loss p = E/c of the exchange radiations causing gravity. It’s simply a quantum gravity effect due to redshifted exchange radiation weakening the gravity coupling constant G over large distances in an expanding universe.

The error of the mainstream is assuming that the data are explained by another mechanism: dark energy. Instead of taking the 1998 data to imply that GR is simply wrong over large distances because it lacks quantum gravity effects due to redshift of exchange radiation, the mainstream assumed that gravity is perfectly described in the low energy limit by GR and that the results must be explained by adding in a repulsive force due to dark energy which causes an acceleration sufficient to offset the gravitational acceleration, thereby making the model fit the data.

Nobel Laureate Phil Anderson points out:

‘… the flat universe is just not decelerating, it isn’t really accelerating …’ –

Supporting this and proving that the cosmological constant must vanish in order that electromagnetism be unified with gravitation, is Lunsford’s unification of electromagnetism and general relativity on the CERN document server at

Like my paper, Lunsford’s paper was censored off arxiv without explanation.

Lunsford had already had it published in a peer-reviewed journal prior to submitting it to arxiv: the International Journal of Theoretical Physics, vol. 43 (2004), no. 1, pp. 161-177. This shows that unification implies that the cosmological constant is exactly zero: no dark energy, etc.

The way the mainstream censors out the facts is to first delete them from arXiv and then claim ‘look at arxiv, there are no valid alternatives’. It’s a story of dictatorship:

‘Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments if they are inimical to Ingsoc, and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.’ - George Orwell, Nineteen Eighty Four, Chancellor Press, London, 1984, p225.

The approach above focusses on gauge boson radiation shielding. We now consider the interaction. In the intense fields near charges, pair production occurs, in which the energy of gauge boson radiation is randomly and spontaneously transformed into ‘loops’ of matter and antimatter, i.e., virtual fermions which exist for a brief period (as determined by the uncertainty principle) before colliding and annihilating back into radiation (hence the spacetime ‘loop’ where the pair production and annihilation is an endless cycle).

In this framework, we have physical material pressure from the Dirac sea of virtual fermions, not just gauge boson radiation pressure. To be precise, as stated before on this blog, the Dirac sea of virtual fermions only occurs out to a radius of about 1 fm from an electron; beyond that radius there are no virtual fermions in the vacuum because the electric field strength is below 10^18 volts/metre, the Schwinger threshold for pair production. So at all distances beyond about 10^-15 metre from a fundamental particle, the vacuum only contains gauge boson radiation, and contains no pairs of virtual fermions, no chaotic Dirac sea.
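The ~10^18 volts/metre Schwinger threshold quoted above can be checked directly from the standard formula E_c = m_e² c³ / (e ħ):

```python
# Check of the Schwinger critical field quoted above, computed from
# E_c = m_e^2 * c^3 / (e * hbar), using CODATA constants.
m_e = 9.1093837015e-31   # electron mass, kg
c = 2.99792458e8         # speed of light, m/s
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J*s

E_schwinger = m_e**2 * c**3 / (e * hbar)
print(E_schwinger)   # ~1.32e18 V/m, matching the "10^18 volts/metre" figure
```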

So what happens is that gauge boson exchange radiation powers the production of short ranged, massive spacetime loops of virtual fermions being created and annihilated (and polarized in the electric field between creation and annihilation).

Now let’s consider general relativity, which is the mathematics of gravity. Contrary to some misunderstandings, Newton never wrote down F = mMG/r^2, which is due to Laplace. Newton was proud of his claim ‘hypotheses non fingo’ (I feign no hypotheses), i.e., he worked to prove and predict things without making any ad hoc assumptions or guesswork speculations. He wasn’t a string theorist, basing his guesses on non-observed gravitons (which don’t exist) or extra-dimensions, or unobservable Planck-scale unification assumptions. The effort above in this blog post (which is being written totally afresh to replace obsolete scribbles at the current version of the page http://quantumfieldtheory.org/Proof.htm) similarly doesn’t frame any hypotheses.

It’s actually well-proved geometry, the well-proved Newtonian first and second laws, well-proved redshift (which can’t be explained by ‘tired light’ speculation, but is a known and provable effect which occurs from recession, since the Doppler effect - unlike ‘tired light’ - is experimentally confirmed to occur) and similar hard, factual evidence. As explained in the previous post, the U(1) symmetry in the standard model is wrong, but apart from that misinterpretation and associated issues with the Higgs mechanism of electroweak symmetry breaking, the standard model of particle physics is the best checked physical theory ever: forces are the result of gauge boson radiation being exchanged between charges.

http://cdsweb.cern.ch/record/706468, and it’s really annoying that I can’t update, expand and correct that paper because CERN closed that archive and now only accepts updates to papers that are on the American archive, arXiv (American spelling). I pay my taxes in Europe, where they help fund CERN. I can’t complain if arXiv don’t want to publish physics, or want to eradicate physics and replace it with extra-dimensional ‘not even wrong’ spin-2 gravitons. But it is disappointing that there is no competitor to arXiv run by CERN any more. By closing down external submissions and updates to papers hosted exclusively by CERN’s document server, they have handed total control of world physics to a bunch of yanks obsessed by the string religion, trying to dictate it to everyone and to stop the freedom of physicists to do checkable, empirically defensible research on fundamental problems. Well done, CERN.

Professor Smolin has written some funny things about Einstein. His description in The Trouble with Physics of how he went to the Institute for Advanced Study to meet Freeman Dyson and find out what Einstein was like, was hilarious. (Dyson himself went there to meet Einstein in the late 40s but never did meet him, because the evening before his meeting he read a lot of Einstein’s recent research papers and decided they were rubbish, and skipped the meeting to avoid an embarrassing confrontation.) In an earlier article on Einstein, Smolin writes:

‘Special relativity was the result of 10 years of intellectual struggle, yet Einstein had convinced himself it was wrong within two years of publishing it. He rejected his theory, even before most physicists had come to accept it, for reasons that only he cared about. For another 10 years, as the world of physics slowly absorbed special relativity, Einstein pursued a lonely path away from it.’

This definitely isn’t what’s required by school physics teachers and string theorists, who both emphasise that special relativity is 100% correct because it’s self-consistent and has masses of experimental evidence. Their argument is that general relativity is built on special relativity, and they ignore Einstein’s own contrary statements like

‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. … The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant).’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916 (italics are Einstein’s own).

Einstein does actually admit, therefore, that special relativity is wrong as stated in his earlier paper in Ann. d. Phys., vol. 17 (1905), p. 891, where he falsely claims:

‘Thence [i.e., from the SR theory which takes no account of accelerations or gravitation] we conclude that a balance-clock at the equator must go more slowly, by a very small amount, than a precisely similar clock situated at one of the poles under otherwise identical conditions.’

This is by consensus held to be the one error of special relativity, see for example

When clocks were flown around the world to validate ‘relativity’, they actually validated the absolute coordinate system of general relativity (the gravitational field is the reference frame). G. Builder, in an article called ‘Ether and Relativity’ in the Australian Journal of Physics, v11 (1958), p279, writes:

‘… we conclude that the relative retardation of clocks … does indeed compel us to recognise the causal significance of absolute velocities.’

The famous paper on the atomic clocks being flown around the world to validate ‘relativity’ is J.C. Hafele in Science, vol. 177 (1972), pp. 166-8, which cites ‘G. Builder (1958)’ for analysis of the atomic clock results. Hence the time-dilation validates the absolute velocities in Builder’s ether paper!
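As a rough illustration of the size of the clock effects at stake in the flown-clock experiments, the two standard fractional rate shifts can be computed. The speed and altitude below are assumed round numbers for an airliner, not Hafele's actual flight data:

```python
# Illustrative sketch (assumed airliner numbers, not Hafele's data):
# fractional clock-rate shifts from the standard kinematic term
# -v^2/(2c^2) and the gravitational term g*h/c^2.
c = 2.99792458e8   # speed of light, m/s
g = 9.80665        # standard gravity, m/s^2

v = 250.0          # assumed cruise speed, m/s
h = 10000.0        # assumed cruise altitude, m

kinematic = -v**2 / (2 * c**2)   # moving clock runs slow
gravitational = g * h / c**2     # higher clock runs fast

print(kinematic)      # ~ -3.5e-13
print(gravitational)  # ~ +1.1e-12
```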

In 1995, physicist Professor Paul Davies - who won the Templeton Prize for religion (I think it was $1,000,000) - wrote on pp. 54-57 of his book About Time:

‘Whenever I read dissenting views of time, I cannot help thinking of Herbert Dingle… who wrote … Relativity for All, published in 1922. He became Professor … at University College London… In his later years, Dingle began seriously to doubt Einstein’s concept … Dingle … wrote papers for journals pointing out Einstein’s errors and had them rejected … In October 1971, J.C. Hafele [used atomic clocks to defend Einstein] … You can’t get much closer to Dingle’s ‘everyday’ language than that.’

Dingle wrote in the Introduction to his book Science at the Crossroads, Martin Brian & O’Keefe, London, 1972, c2:

‘… you have two exactly similar clocks … one is moving … they must work at different rates … But the [SR] theory also requires that you cannot distinguish which clock … moves. The question therefore arises … which clock works the more slowly?’

This question really kills special relativity and makes you accept that general relativity is essential, even for clocks in uniform motion. I don’t think Dingle wrote the question very well. He should have asked clearly how anyone is supposed to determine which clock is moving, in order to calculate the time-dilation.

If there is no absolute motion, you can’t determine which clock runs the more slowly. In chapter 2 of Science at the Crossroads, Dingle discusses Einstein’s error in calculating time-dilation with special relativity in 1905 and comments:

‘Applied to this example, the question is: what entitled Einstein to conclude from his theory that the equatorial, and not the polar, clock worked more slowly?’

Einstein admitted even in popular books that wherever you have a gravitational field, velocities depend upon absolute coordinate systems:

‘But … the general theory of relativity cannot retain this [SR] law. On the contrary, we arrived at the result according to this latter theory, the velocity of light must always depend on the coordinates when a gravitational field is present.’ - Albert Einstein, Relativity, The Special and General Theory, Henry Holt and Co., 1920, p111.

The real brilliance of Einstein is that he corrected his own ideas when they were too speculative (e.g. his ‘biggest blunder’, the large positive cosmological constant intended to cancel out gravity at the mean intergalactic distance, keeping the universe from expanding). What a contrast to string theory.

A little more about the ‘absolute’ reference frame provided by the real existence of the gravitational field:

The flown-clock time-dilation experiment is experimental proof of absolute, not relative, motion, because you have to use an absolute reference frame to determine which clock is moving and which is not moving.

In Yang-Mills (exchange radiation) quantum gravity, all masses are exchanging some sort of gravitons (spin 1 as far as physicists are concerned as proved in this post; spin 2 as far as the 10/11 dimensional 10^500 universes of string ‘theorists’ are concerned).

This means that the average locations of all the masses in the universe gives us the absolute reference frame.

Here we get into the obvious issue as to whether space is boundless or not. Until 1998, it was believed without observational evidence that space was boundless, i.e., that the gravitational field (the curvature causing gravitation) extends across all distance scales. Because of this, geodesics (the lines in space along which photons or small pieces of matter travel in spacetime) would be curved even on the largest scales, so all lines would curve back to their point of origin. This would allegedly mean that space is boundless, so that every person - no matter where they are in the universe - would see the same isotropic universe surrounding them.

However there are problems with this idea. Firstly, the universe isn’t isotropic because the thing we see coming from the greatest distance, the cosmic background radiation emitted 400,000 years after the big bang (thus coming from about 13,300 million light years away) is certainly not isotropic:

‘U-2 observations have revealed anisotropy in the 3 K blackbody radiation which bathes the universe. The radiation is a few millidegrees hotter in the direction of Leo, and cooler in the direction of Aquarius. The spread around the mean describes a cosine curve. Such observations have far reaching implications for both the history of the early universe and in predictions of its future development. Based on the measurements of anisotropy, the entire Milky Way is calculated to move through the intergalactic medium at approximately 600 km/s.’
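The quoted figures are mutually consistent: a motion of ~600 km/s through blackbody radiation of temperature T produces a dipole of amplitude roughly ΔT = T·v/c, which a short calculation confirms is "a few millidegrees":

```python
# Consistency check of the quoted CMB dipole: motion at ~600 km/s
# through the ~2.7 K background gives a dipole amplitude dT ~ T*v/c.
T_cmb = 2.725      # CMB temperature, K
v = 600e3          # Milky Way speed quoted above, m/s
c = 2.99792458e8   # speed of light, m/s

dT = T_cmb * v / c
print(dT * 1e3)    # ~5.5 millikelvin: "a few millidegrees", as quoted
```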

However, the main problem with the old idea of boundless space, and the implied idea that the universe should be isotropic around every observer wherever they are in the universe (a pretty unscientific claim even if it doesn’t disagree with observations made from here on Earth, since nobody has actually been everywhere in the universe to observe whether it looks isotropic from other places or not, and the difficulties of travelling to distant galaxies make it a ‘not even wrong’ piece of speculative guesswork which is not checkable), is that there is no actual gravity on the greatest distance scales.

This arises because of the redshift of gravitons exchanged between masses. On the greatest distance scales, the redshift is greatest, so the gravitons have little energy E = hf and thus little momentum p = E/c = hf/c, so they can’t cause gravitational effects over such long distances! Hence, there can’t be any distant curvature of spacetime. Go far enough, and quantum gravity tells you that instead of travelling along a closed geodesic circle which will sooner or later return you to the place you started from (which is what general relativity falsely predicts, since it ignores graviton redshift over long distances in an expanding universe), gravitational effects from curvature will diminish, because the exchange of gravitons with the matter of the universe will become trivial.

This was observationally confirmed by Perlmutter’s supernova redshift observations in 1998, which showed a lack of gravitational slowing of the most distant masses that can be observed!

Sadly, instead of acknowledging that this is evidence for quantum gravity, the mainstream in astronomy tried to resurrect a disproved old idea called the cosmological constant to provide a repulsive force at long distances whose strength they adjust (by varying the assumed amount of unobservable ‘dark energy’ powering the assumed repulsion force) to exactly cancel out the attractive gravity force over those very long distances.

This cosmological constant is a false idea which goes back to Einstein in 1917, who thought the universe was static and used a massive positive cosmological constant to cancel out gravity over a distance equal to the average distances between galaxies. He believed that this would make the universe stable by preventing galaxies from being attracted to one another. However, he was wrong because the role of the cosmological constant for that purpose would make the universe unstable like a pin balanced upright, standing on its point (any slight variation of an inter-galactic distance from the average value would set off the collapse of the universe!).

The resurrection of the cosmological constant (lambda) is similar to the original Einstein cosmological constant idea. The new cosmological constant is also positive in sign like Einstein’s 1917 ‘biggest blunder’, but it differs in quantity: it is very small compared to Einstein’s 1917 value. Because it is so much smaller, the repulsive force it predicts (which increases with distance) only becomes significant in comparison to gravitational attraction at much greater distances, where gravity is weaker.

There are several reasons why the new small positive cosmological constant is a fiddle. First, Lunsford using a unification of electromagnetism and gravitation in which there are 6 effective dimensions (3 expanding time dimensions which describe the expanding spacetime universe, and 3 contractable spatial dimensions which describe the contractable matter in the universe which gets squeezed by radiation pressure due to the gravitational field and motion in the gravitational field) proves that the cosmological constant is zero:

Although naively, you might expect that a small positive cosmological constant, by cancelling out gravity at great distances, does the same thing as graviton redshift, it does not have the same quantitative features.

Graviton redshift cancels out gravity (i.e., gravitational retardation of distant receding matter) at great distances, but doesn’t cause repulsion at still greater distances. A small positive cosmological constant will at a particular distance do the same as graviton redshift (cancelling gravity), but at greater distances than that it differs from quantum gravity since it has a net repulsive force. Graviton redshift cancels gravity at all great distances without ever causing repulsion, unlike a positive cosmological constant (regardless of the size of the cosmological constant, which just determines the quantitative distance beyond which the net force is repulsive).
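The quantitative difference between the two models above can be sketched numerically. With a positive cosmological constant, the net radial acceleration a(r) = -GM/r² + (Λc²/3)r changes sign at large r (net repulsion), whereas a mechanism that merely cancels gravity approaches zero without ever turning repulsive. The mass M below is an assumed illustrative value; Λ is of the order of the fitted cosmological constant:

```python
# Sketch contrasting the two models: with a positive cosmological
# constant, a(r) = -G*M/r^2 + (Lambda*c^2/3)*r flips sign at large r,
# giving net repulsion; pure cancellation of gravity never does.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8       # speed of light, m/s
Lambda = 1.1e-52       # 1/m^2, order of the fitted cosmological constant
M = 1e52               # kg, assumed large-scale mass for illustration only

def accel_with_lambda(r):
    """Net radial acceleration with a positive cosmological constant."""
    return -G * M / r**2 + (Lambda * c**2 / 3.0) * r

print(accel_with_lambda(1e24) < 0)  # gravity dominates: net attraction
print(accel_with_lambda(1e27) > 0)  # Lambda dominates: net repulsion
```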

Professor Phil Anderson points out that the data don’t require anything more than a cancelling of gravity at great distances:

‘… the flat universe is just not decelerating, it isn’t really accelerating …’ –

I want to add some comments about the exact role of Loop Quantum Gravity (LQG) and also about the Zen-like interpretation of Feynman’s path integrals:

‘I like Feynman’s argument very much (although I have not thought about the virtual charges in the loops bit). The general idea that you start with a double slit in a mask, giving the usual interference by summing over the two paths… then drill more slits and so more paths… then just drill everything away… leaving only the slits… no mask. Great way of arriving at the path integral of QFT.’

- Prof. Clifford V. Johnson’s comment

‘Light … "smells" the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’

- Feynman, QED, Penguin, 1990, page 54.

‘The world is not magic. The world follows patterns, obeys unbreakable rules. We never reach a point, in exploring our universe, where we reach an ineffable mystery and must give up on rational explanation; our world is comprehensible, it makes sense. I can’t imagine saying it better. There is no way of proving once and for all that the world is not magic; all we can do is point to an extraordinarily long and impressive list of formerly-mysterious things that we were ultimately able to make sense of. There’s every reason to believe that this streak of successes will continue, and no reason to believe it will end. If everyone understood this, the world would be a better place.’

– Prof. Sean Carroll’s blog post on laws

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’

- R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

I think there is a big similarity in the underlying assumption, in this explanation of path integrals, to the underlying assumption of LQG.

It’s clear that path integrals apply to different sorts of particles - photons, electrons, etc. - which behave to give the well known interference effects when the double-slit experiment is done. However, physically the mechanism of what is behind the success of path integrals may vary.

One big question with bosonic radiation like photons is how it can scatter in the vacuum below the Schwinger threshold electric field strength for pair-production of fermions: photons interact with fermions, not with other photons (photons are uncharged, so they don’t interact with one another directly).

In other words, the diffraction of light in the double slit experiment is due to the presence of fermions (electrons) in the edges of the slit material. Take away the physical material of the mask with its slits, and the photons will be unable to diffract at all. So the path integral or sum over histories cannot be interpreted correctly by making endless holes in the mask until the mask completely disappears.

However, this argument ignores the presence of charged radiation in the vacuum which mediates all electromagnetic interactions, as described in this blog post. So we get the question arising: does the path integral (sum over histories) arise because photons and other particles interact with charged gauge boson exchange radiation present throughout the zero-point field of the vacuum?

The answer seems to be yes. This is exactly where the formulation of LQG comes into the argument. Because the spin 1 gravitons (not the widely assumed spin-2 ones, see http://nige.wordpress.com/2007/05/19/sheldon-glashow-on-su2-as-a-gauge-group-for-unifying-electromagnetism-and-weak-interactions/ ) used to make the checkable predictions in this post don’t interact with one another (just as photons don’t interact with one another), LQG, which is based on spin-2 gravitons, does not directly apply to gravity. But it will be useful for electromagnetic forces where the gauge bosons are charged. What I like about the LQG framework is that it is applying well-validated (the double slit experiment is well checked) path integral concepts to model exchange radiation in the vacuum: as Smolin’s Perimeter Institute lectures explain clearly, the path integral to determine fundamental forces is the sum over the interaction graphs of what the gauge bosons are doing in the vacuum. This path integral naturally gives a relationship between the cause of the field and the acceleration produced (curvature of spacetime, or force effect) which is similar to Einstein’s general relativity.

The key background experimental fact to the nature of electromagnetic force gauge bosons being charged (rather than photons) is the nature of the logic step and the theoretical modifications it necessitates to the traditional role of Maxwell’s displacement current:

"… What actually happens in the sloping part of the real logic step is that electrons are accelerated in non-zero time, and in so doing radiate energy like a radio transmitter antenna. Because the current variation in each conductor [you need 2 conductors to propagate a logic step, each carrying an equal and opposite current] is an exact inversion of that in the other, the fields from the radio waves each transmits is capable of exactly cancelling the fields from the signal from the opposite conductor. Seen from a large distance, therefore, there is no radio transmission of energy whatsoever. But at short distances, between the conductors there is exchange of radio wave energy between conductors in the rising portion of the step. This exchange powers the logic step … That’s why the speed of a logic pulse is the speed of light for the insulator between and around the conductors. …"

It’s an interesting post about the Planck units. Actually, the Planck units are very useful to befuddled lecturers who confuse fact with orthodoxy.

The Planck scale is purely the result of dimensional analysis, and Planck’s claim that the Planck length was the smallest length of physical significance is vacuous: the black hole event horizon radius for the electron mass, R = 2GM/c^2 = 1.35*10^{-57} m, is over 22 orders of magnitude smaller than the Planck length, R = square root (h bar * G/c^3) = 1.6*10^{-35} m.

Why, physically, should this Planck scale formula hold, other than the fact that it has the correct units (length)? Far more natural to use R = 2GM/c^2 for the ultimate small distance unit, where M is electron mass. If there is a natural ultimate ‘grain size’ to the vacuum to explain, as Wilson did in his studies of renormalization, in a simple way why there are no infinite momenta problems with pair-production/annihilation loops beyond the UV cutoff (i.e. smaller distances than the grain size of the ‘Dirac sea’), it might make more physical sense to use the event horizon radius of a black hole of fundamental particle mass, than to use the Planck length.
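The comparison of the two length scales above is easy to check numerically (the constants and the arithmetic are standard; the interpretation of which scale is more natural is, of course, the post’s own argument):

```python
from math import sqrt

# Physical constants (SI units, CODATA approximate values)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J s
m_e = 9.109e-31      # electron mass, kg

# Black hole event horizon radius for the electron mass
r_event = 2 * G * m_e / c**2

# Planck length, from dimensional analysis
l_planck = sqrt(hbar * G / c**3)

print(f"Event horizon radius for electron mass: {r_event:.2e} m")   # ~1.35e-57 m
print(f"Planck length:                          {l_planck:.2e} m")  # ~1.6e-35 m
print(f"Ratio: {l_planck / r_event:.1e}")                           # ~1e22
```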

All the Planck scale has to defend it is a century of obfuscating orthodoxy.

Comment by nc — May 21, 2007 @ 2:19 pm

Non-submitted comment (saved here as it is a useful brief summary of some important points of evidence; I didn’t submit the comment to the following site in the end because it covers a lot of ground and doesn’t include vital mathematical and other backup evidence):

This is interesting. A connection I know between a toroidal shape and a black hole is that, if the core of an electron is gravitationally trapped Heaviside-Poynting electromagnetic energy current, it is a black hole and it has a magnetic field which is a torus.

Experimental evidence for why an electromagnetic field can produce gravity effects involves the fact that electromagnetic energy is a source of gravity (think of the stress-energy tensor on the right hand side of Einstein’s field equation). There is also the capacitor charging experiment. When you charge a capacitor, practically the entire electrical energy entering it is electromagnetic field energy (Heaviside-Poynting energy current). The amount of energy carried by electron drift is negligible, since the electrons have a kinetic energy of half the product of their mass and the square of their velocity (typically 1 mm/s for a 1 A current).
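A rough back-of-envelope sketch makes the point about drift energy being negligible (the drift velocity of 1 mm/s comes from the text; the 1 V supply is an assumed illustrative value):

```python
# Compare the kinetic energy carried per second by drifting electrons
# in a 1 A current with the electromagnetic field energy delivered per
# second by the same circuit (power = I * V).
m_e = 9.109e-31      # electron mass, kg
e = 1.602e-19        # elementary charge, C
v_drift = 1e-3       # drift velocity, m/s (typical value, as in the text)
I = 1.0              # current, A
V = 1.0              # assumed supply voltage, V

# Number of electrons passing a cross-section per second at 1 A
n_per_s = I / e

# Kinetic energy delivered per second by the drifting electrons
ke_per_s = n_per_s * 0.5 * m_e * v_drift**2

# Field energy delivered per second
field_per_s = I * V

print(f"Drift kinetic energy per second: {ke_per_s:.2e} J")    # ~2.8e-18 J
print(f"Field energy per second:         {field_per_s:.2e} J") # 1 J
```

The drift contribution is some eighteen orders of magnitude below the field energy, which is the point being made above.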

So the energy current flows into the capacitor at light speed. Take the capacitor to be simple, just two parallel conductors separated by a dielectric composed of just a vacuum (free space has a permittivity, so this works). Once the energy goes along the conductors to the far end, it reflects back. The electric field adds to that from further inflowing energy, but most of the magnetic field is cancelled out since the reflected energy has a magnetic field vector curling the opposite way to the inflowing energy. (If you have a fully charged, ’static’ conductor, it contains an equilibrium with similar energy currents flowing in all possible directions, so the magnetic field curls all cancel out, leaving only an electric field as observed.)

The important thing is that the energy keeps going at light velocity in a charged conductor: it can’t ever slow down. This is important because it proves experimentally that static electric charge is identical to trapped electromagnetic field energy. If this can be taken to the case of an electron, it tells you what the core of an electron is (obviously, there will be additional complexity from the polarization of loops of virtual fermions created in the strong field surrounding the core, which will attenuate the radial electric field from the core as well as the transverse magnetic field lines, but not the polar radial magnetic field lines).

You can prove this if you discharge any conductor x metres long which is charged to v volts with respect to ground, through a sampling oscilloscope. You get a square wave pulse which has a height of v/2 volts and a duration of 2x/c seconds. The apparently ’static’ energy of v volts in the capacitor plate is not static at all; at any instant, half of it, at v/2 volts, is going eastward at velocity c and half is going westward at velocity c. When you discharge it from any point, the energy already by chance headed towards that point immediately begins to exit at v/2 volts, while the remainder is going the wrong way and must proceed and reflect from one end before it exits. Thus, you always get a pulse of v/2 volts which is 2x metres long or 2x/c seconds in duration, instead of a pulse at v volts and x metres long or x/c seconds in duration, which you would expect if the electromagnetic energy in the capacitor was static and drained out at light velocity by all flowing towards the exit.
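The discharge result described above can be expressed as a tiny helper function (assuming a lossless line with a vacuum dielectric, so the signal speed is c):

```python
c = 2.998e8   # signal speed, m/s (vacuum dielectric assumed)

def discharge_pulse(length_m, charge_volts):
    """Pulse seen when a conductor of the given length, charged to the
    given voltage, is discharged from one end into a matched load,
    per the description in the text."""
    height = charge_volts / 2.0    # half the energy exits immediately...
    duration = 2.0 * length_m / c  # ...the rest only after a round trip
    return height, duration

h, t = discharge_pulse(10.0, 5.0)
print(h)  # 2.5 (volts)
print(t)  # ~6.67e-8 seconds, i.e. 2x/c
```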

This was investigated by Catt, who used it to design the first crosstalk (glitch) free wafer scale integrated memory for computers, winning several prizes for it. Catt welcomed me when I wrote an article on him for the journal Electronics World, but then bizarrely refused to discuss physics with me, while he complained that he was a victim of censorship. However, Catt published his research in IEEE and IEE peer-reviewed journals. The problem was not censorship, but his refusal to get into mathematical physics far enough to sort out the electron.

Some calculations of quantum gravity based on a simple, empirically-based model (no ad hoc hypotheses), which yields evidence (which needs to be independently checked) that the proper size of the electron is the black hole event horizon radius.

There is also the issue of a chicken-and-egg situation in QED where electric forces are mediated by exchange radiation. Here you have the gauge bosons being exchanged between charges to cause forces. The electric field lines between the charges have to therefore arise from the electric field lines of the virtual photons being continually exchanged.

How do you get an electric field to arise from neutral gauge bosons? It’s simply not possible. The error in the conventional thinking is that people incorrectly rule out the possibility that electromagnetism is mediated by charged gauge bosons. You can’t transmit charged photons one way because the magnetic self-inductance of a moving charge is infinite. However, charged gauge bosons will propagate in an exchange radiation situation, because they are travelling through one another in opposite directions, so the magnetic fields are cancelled out. It’s like a transmission line, where the infinite magnetic self-inductance of each conductor cancels out that of the other conductor, because each conductor is carrying equal currents in opposite directions.

Hence you end up with the conclusion that the electroweak sector of the SM is in error: Maxwellian U(1) doesn’t describe electromagnetism properly. It seems that the correct gauge symmetry is SU(2) with three massless gauge bosons: positive and negatively charged massless bosons mediate electromagnetism and a neutral gauge boson (a photon) mediates gravitation. This predicts the right strength of gravity, because the charged gauge bosons will cause the effective potential of those fields in radiation exchanges between similar charges throughout the universe (drunkard’s walk statistics) to multiply up the average potential between two charges by a factor equal to the square root of the number of charges in the universe. Since there are around 10^80 charges, electromagnetism is 10^40 times gravity. On average, gauge bosons spend as much time moving away from us as towards us while being exchanged between the charges of the universe, so the average effect of divergence is exactly cancelled by the average convergence, simplifying the calculation. This model also explains why electromagnetism is attractive between dissimilar charges and repulsive between similar charges.
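The ‘drunkard’s walk’ statistics invoked here, namely that the sum of N randomly-signed contributions grows as the square root of N, can at least be checked numerically. This sketch tests only the statistics, not the physical model built on them:

```python
import random

random.seed(1)

def rms_walk(n_steps, n_trials=500):
    """RMS magnitude of the sum of n_steps randomly-signed unit
    contributions, averaged over n_trials independent walks."""
    total = 0.0
    for _ in range(n_trials):
        s = sum(random.choice((-1, 1)) for _ in range(n_steps))
        total += s * s
    return (total / n_trials) ** 0.5

for n in (100, 400, 1600):
    print(n, round(rms_walk(n), 1))  # RMS grows like sqrt(n): ~10, ~20, ~40
```

On this statistical argument, 10^80 charges give a multiplying factor of sqrt(10^80) = 10^40, the ratio quoted above.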

Peter claims: " the equations one has to solve in order to find the "vacuum state" corresponding to our world are at least as complicated if not more so than the ones that define the SM"

So what? Are these equations required to be simpler? Is such a simplification always the case when a large, deeper, and more comprehensive physics theory includes an earlier theory of much more limited scope? - gina

More complex mathematical models for the physical world are justified if there is a pay-back in terms of solid predictions which can validate the need for the additional complexity. The nemesis of physics is the endlessly complex theory which makes no falsifiable predictions and is proudly defended for being incomplete.

String theory multiplies entities without necessity. Where is the necessity for anything in string? If there is no falsifiable prediction, there is no necessity.

Don’t get me wrong: I’m all for complex theories when there is a pay-back for the additional complexity. In certain ways, Kepler’s elliptical orbits were more ‘complex’ than the circular orbits of both Ptolemy and Copernicus, oxidation is more complex than phlogiston, and caloric was replaced by two theories to explain heat: kinetic theory of gases, and radiation.

These increases in complexity didn’t violate Ockham’s razor because they were needed. Maxwell’s aether violated Ockham’s razor because it required moving matter to be contracted in the direction of motion (FitzGerald contraction) in order for the Michelson-Morley experiment to be explained. This was an ad hoc adjustment, and aether had to be abandoned because it did not make falsifiable predictions. Notice that aether was the leading mathematical physics theory of its heyday, circa 1865, when Maxwell’s equations based on the theory were published.

String theory has not even led to anything predictive like Maxwell’s equations. String theorists should please just try to understand that, until they get a piece of solid evidence that the world is stringy, they should stop publishing totally speculative papers which saturate the journals with one form of speculative, uncheckable orthodoxy which makes it impossible for others to get checkable ideas published and on to arxiv. (Example: Sent: 02/01/03 17:47 Subject: Your_manuscript LZ8276 Cook {gravity unification proof} Physical Review Letters does not, in general, publish papers on alternatives to currently accepted theories…. Yours sincerely, Stanley G. Brown, Editor, Physical Review Letters.)

The Greek symbols used in this and other posts and sites will not display properly on computers which don’t have Greek symbol fonts installed. On such a computer you will see r appear in place of the Greek symbol rho, p appear in place of the symbol pi, and so on.

The problem here is that I use r a lot for radius and p a lot for momentum, so the maths will not be readable on such computers. One solution is to produce PDF files of the pages, since PDF documents embed the symbol set used in the pages within the file itself. All people have to get is the free PDF reader from Adobe.

I’ve now finished my blogging activities (which scientifically have been fairly fruitful, although the results are messy long essays), and will devote future spare time not to blogging but to writing up the content to go on the site.

It is too hard writing long mathematical blog posts on either blogspot or wordpress without sophisticated mathematical software and lots of time. What is on this blog is enough to be going on with. I think it is a disaster to try writing anything on a computer, because you end up typing at the speed you are thinking, so the results are inclined to be a stream of ideas. It’s vital at this stage to write the papers on paper using a pen, which will permit better editing. You can’t edit efficiently on a computer because there is just too much material you don’t want to cut out. The only way to get something well written and concise is to write it by hand, correct it on paper, and then revise it again while typing it into a computer. Better still, use the computer as a typewriter and don’t save files - just print them out. That forces you to retype the whole thing from beginning to end, and while you do so, you naturally condense the material which is least concise, to save retyping it. It’s expensive in terms of time, but it’s probably the only way to get a lot of detailed editing done efficiently on a complex topic like this.

Just a final note about probability theory abuse in modern physics, and how this can lead to experimental tests.

As quoted in this post, Heisenberg’s uncertainty principle and the associated mechanism for Schroedinger’s equation in quantum mechanics are due to the scattering of multiple electrons by each other as they orbit.

Schroedinger’s equation can only be solved exactly for single electron atoms, like hydrogen isotopes (hydrogen, deuterium, tritium). For all heavier atoms, the solutions require approximations to be made, and are inexact.

The claim here is that if you picture the atom like the solar system, it would look like a Schroedinger atom, with chaotic orbits instead of classical elliptical orbits. The reason the solar system doesn’t have chaotic (Schroedinger) orbits is mainly due to the fact that the sun has 99.8% of the mass of the solar system, and the mass is the gravitational charge holding the solar system together. Hence the planets don’t affect the orbits of each other much, because they are relatively light and far apart. By far the main source of gravity they experience is due to the sun’s mass.

If you made an atom like the solar system, the electric charges of the electrons would need to be far smaller than they are while keeping the nuclear electric charge high. In such a case, the electrons would interfere with one another less, so the orbits would become more classical.

However, it is likely that the pair-production mechanism (spontaneous appearance and annihilation of pairs of virtual positrons and electrons out as far as 1 fm from an electron core) causes random, spontaneous deflections to the motion of an electron on a small scale, such as in an atom. This mechanism would explain why the hydrogen atom’s electron doesn’t have a classical orbit.

The problem with the Schroedinger equation is that it implies that there is some chance of finding the electron at any distance from the nucleus; the peak probability corresponds to the classical Bohr atom radius, but there is some chance of finding the electron at arbitrarily greater distances.
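For the hydrogen ground state the radial probability density is proportional to r^2 e^{-2r/a0}, so both claims, the peak at the Bohr radius and the non-zero tail at any finite distance, can be illustrated with a short numerical scan:

```python
from math import exp

a0 = 1.0  # Bohr radius, in natural units

def radial_probability(r):
    """Ground-state hydrogen radial probability density (unnormalised):
    P(r) proportional to r^2 * exp(-2r/a0)."""
    return r * r * exp(-2.0 * r / a0)

# Scan a grid of radii for the peak of P(r)
rs = [i * 0.001 for i in range(1, 10000)]
peak_r = max(rs, key=radial_probability)
print(round(peak_r, 2))  # 1.0 -- the peak sits at the Bohr radius

# But P(r) is still non-zero at arbitrarily large finite distances:
print(radial_probability(10 * a0) > 0)  # True
```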

This is probably unphysical, as there is probably a maximum range on the electron determined by physical considerations. The electron loses its radial kinetic energy as it moves further from the nucleus, due to the deceleration produced by the Coulomb attraction. At some distance, the electron’s outward radial velocity will fall to zero, and it will then stop receding from the nucleus and start falling back. This physical model doesn’t contradict Schroedinger’s model as a first approximation; it just supplements it with a physical limitation due to conservation of energy.

Similarly, sometimes you hear crackpot physics lecturers claiming that there is a definite small probability of all the air in a room happening to be in one corner. That actually isn’t possible unless the dimensions of the room are on the order of the mean free path of the air molecules. The reason is that pressure fluctuations physically result from collisions of air molecules, which disperse the kinetic energy isotropically, instead of concentrating it in a particular place. In order to have all the air in a room located in one corner, you have to explain the intermediate steps required to achieve that result: before all the air gets into one corner, there must be a period where the air pressure is increasing in the corner and decreasing elsewhere. As the air pressure in the corner begins to increase, the air in the corner will expand in causal consequence, and the pressure will fall. Hence, it is impossible to get all the air in a room in a corner by random fluctuations of pressure: the probability is exactly zero, rather than being low but finite. (Unless the room is very small and contains so few air molecules that there are negligible interactions between them to dissipate pressure isotropically.)

Similarly, a butterfly flapping its wings has zero probability of triggering a hurricane, because of the stability of the atmosphere to small scale fluctuations. Hurricanes are triggered by the large scale deflection, due to the Coriolis effect (Earth’s rotation), of rising warm moist air which has been evaporated from a large area of warm ocean (surface temperature above 27 C). They are not triggered by small scale irregularities.

Small scale irregularities and random chances can be multiplied up into massive effects, but only if there is instability to begin with. For example, on a level surface nothing can generate a landslide. But on a steeply sloping surface covered by loose rocks which have been loosened by weathering, the system may become increasingly unstable, until the rocks over a wide area of the slope need only a minor trigger to set off an avalanche.

The same effect occurs when too much snow lands on steep mountain slopes. Another example of an unstable situation is trying to balance a pyramid on its apex. If it is balanced like that temporarily, it is highly unstable and the slightest impulse (even from a butterfly landing on it) would trigger off a much bigger event.

You can see why the physics in this comment - which is obvious - is officially ignored by mainstream physicists. They’re not completely stupid, but they believe in selling hoaxes and falsehoods which sound romantic to them and the gullible, naive fools who buy those claims. At the end of the day, modern physics is in a dilemma.

It is immoral to knowingly sell modern physics packaged in the usual extra-dimensional, stringy magic multiverse, in which anything is possible to some degree of probability.

The sociology of group-think in physics:

‘(1). The idea is nonsense.

(2). Somebody thought of it before you did.

(3). We believed it all the time.’

- Professor R.A. Lyttleton’s summary of inexcusable censorship (quoted by Sir Fred Hoyle in his autobiography, Home is Where the Wind Blows, Oxford University Press, 1997, p154).

‘… the innovator has for enemies all those who have done well under the old conditions, and lukewarm defenders in those who may do well under the new. This coolness arises partly from fear of the opponents, who have the laws on their side, and partly from the incredulity of men, who do not readily believe in new things until they have had a long experience of them. Thus it happens that whenever those who are hostile have the opportunity to attack they do it like partisans, whilst the others defend lukewarmly…’

- Nicolo Machiavelli, The Prince, Chapter VI:

Concerning New Principalities Which Are Acquired By One’s Own Arms And Ability,

Just found an interesting brief independent comment about gravity mechanism (I’ve omitted the sections which are either 100% wrong or trivial, and have interpolated one amplification in square brackets). Notice, however, that critical comments which follow below it on the physicsmathforums page link by ‘Epsilon=One’ are largely rubbish (example of rubbish: ‘Gravity travels at many, many times the SOL; otherwise the Cosmos would have the motions of the balls on a pool table. Gravity’s universal "entanglement" requires speeds that are near infinite’):

… Le Sage … proposed that gravity was caused by shielding of universal flux surrounding bodies in close proximity. …

The first principle of the theory is really that no force can be made to pull objects, it must always be a push force, so gravity is actually caused by pushing of the surrounding space, rather than a planet emitting "pulling particles" that emanate outwards from the planet.

The space that pushes objects towards each other is the complete surrounding space around a locality (where the two objects are in proximity), so it is a very large sphere with its radius reaching to the light wall (where objects are traveling at the speed of light relative to the center of this sphere, and are therefore red-shifted out of view and out of influence on the center). …

The push concept also helps to explain why there is no [spin-2, attractive] graviton. It is actually the lack of light, or shadow that causes the effect. Moreover, it explains why gravity travels at the speed of light.

Another thing that the theory seems to explain is why there is a universal expansion. At some point, roughly on a galactic scale, there is not enough shielding to cause attraction; rather, the flux becomes just a force of expansion.

What’s also remarkable is that even though the theory is based on the geometry of shielding of background flux, the equations (though not based on squares) still yield a function that is almost exactly in the form of the inverse squared distance, as described by Newton.

… if you think about it, it is often hard to change something from the inside, such as with politics. It’s also an advantage not to have any preconceived ideas. …

… I don’t think anybody has expressed the idea quite as bluntly as I did, and I say this because again, other people are aware of this idea. I only hope my writing suits the purpose.

"… the strange spiral nebulae were in fact distant galaxies, not unlike our own. The same seminar concluded, on the observation that accelerated publication rates had not produced as many major breakthroughs in the last two decades, that technology had finally caught up with observations over the electromagnetic spectrum and that we may well have seen most of what there is to see. Naturally there was some dissent. To believe that in a mere 80 years humanity can go from a relatively trivial understanding of the cosmos to complete comprehension is hubris. …"

When I did a cosmology course in 1996, one thing that worried me was that the galaxies are back in time. If I were Hubble in 1929, I wouldn’t have just written recession velocity/distance = constant with units of [time]^-1. Because of spacetime, I’d have considered velocity/time past = constant with units of [acceleration].

This is the major issue. I think it is a fact that the matter is accelerating outward, simply by Hubble’s law:

v = dx/dt = Hx (Hubble’s law)

Hence, dt = dx/v

Now, acceleration

a = dv/dt = dv/(dx/v)

= v*dv/dx

= (Hx)*d(Hx)/dx

= (H^2)x

Because the universe isn’t decelerating due to gravity, H = 1/t where t is the age of the universe [if the universe were decelerating like a critical density universe, then the relationship would be H = (2/3)/t].

Hence, a = (H^2)x

= x/t^2

which, for the most distant observable matter at distance x = ct, gives

a = c/t

= c/(x/c)

= (c^2)/x

So there’s cosmic acceleration.
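Putting numbers into a = c/t (taking an assumed Hubble parameter of 70 km/s/Mpc for illustration, so t = 1/H), the acceleration comes out at roughly 7*10^{-10} m/s^2:

```python
c = 2.998e8         # speed of light, m/s
Mpc = 3.086e22      # metres per megaparsec

H0 = 70e3 / Mpc     # assumed Hubble parameter, 70 km/s/Mpc, in s^-1
t = 1.0 / H0        # age of universe under H = 1/t (no deceleration)
a = c / t           # the acceleration a = c/t derived above

print(f"H0 = {H0:.2e} /s")     # ~2.27e-18 /s
print(f"a  = {a:.2e} m/s^2")   # ~6.8e-10 m/s^2
```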

Lee Smolin on page 209 of his book The Trouble with Physics:

"The next simplest thing [by guesswork dimensional analysis] is (c^2)/R [where R is radius of universe]. This turns out to be an acceleration. It is in fact the acceleration by which the rate of expansion of the universe is increasing - that is, the acceleration produced by the cosmological constant."

You can see my problem. The empirically verified Hubble law acceleration happens to be similar to the metaphysical dark energy acceleration. Result: endless confusion about the acceleration of the universe.

It would be useless to try to discuss it with Smolin or anyone else who believes that the two things are the same thing. The real acceleration of the universe is just an expression of Hubble’s law in terms of velocity/time past.

The assumed dark energy is completely different, not a real acceleration but a fictional outward acceleration which is supposed to cancel out the (also fictional!) inward acceleration due to gravity over great distances.

In fact, the inward acceleration due to gravity, which the mainstream supposed to be slowing down distant supernovae until this effect was disproved in 1998 by Perlmutter’s group, isn’t real, because the enormous redshift of light from such great distances doesn’t just apply to visible light but also to the gauge bosons of gravity. For this reason, the gravity coupling constant is weakened, because the exchanged energy arrives in a very severely redshifted (very low energy) condition.

So the cosmological constant (outward acceleration fiddled to be just enough to cancel out inward gravity for Perlmutter’s supernovae, so that the observations fit the lambda-CDM model) is actually spurious.

There is really no long-range gravity over distances where redshift of light is extreme, because the masses are receding and gravitons are redshifted. Hence, there is no cosmological constant, because the role of the cosmological constant is to cancel out gravitational acceleration, not to accelerate the universe.

In virtually all the popular accounts of dark energy, popularisers lie and say that the role of dark energy is to accelerate the universe.

Actually, it’s not real, and even if the dark energy theory were correct, it is a fictional acceleration made up to cancel out gravity’s effect, just like the way the Coulomb force between your bum and your chair stops you from accelerating downward at 9.8 m s^{-2}.

You don’t hear people describe the normal reaction force of a chair as an upward acceleration.

So why do people refer to dark energy as the cause of cosmic acceleration? It’s really pathetic. There is an acceleration as big as the fictional dark energy acceleration, but it isn’t due to a cosmological constant or dark energy. It’s just the normal expansion of the universe, due to vector bosons. There’s too much disinformation and confusion out there for anybody to listen. My case is that you take the real acceleration, use Newton’s second law and calculate outward force F=ma, and then the 3rd law to give the inward reaction force carried by vector bosons, and you get gravity after simple calculations.

Mechanisms for the lack of gravitational deceleration at large redshifts (i.e., between gravitational charges - masses - which are relativistically receding from one another)

One thing I didn’t list in this post (which, apart from the nature of the electron as trapped negative electromagnetic field energy, covered in Electronics World, April 2003, is fairly comprehensive) is the first major confirmed prediction. This prediction was published in October 1996 and confirmed in 1998.

This is the lack of gravitational retardation on the big bang at large redshifts, i.e., great distances. There are several mechanisms depending on the theory of quantum gravity you decide to use, but all dispense with the need for a cosmological constant:

If gravity is due to the direct exchange of spin-2 gravitons between two receding masses, the redshift of the gravitons weakens the effective gravitational charge (coupling constant) at big redshifts, because the energy exchanged by redshifted gravitons will be small: E = hf, where h is Planck’s constant and f is the frequency of the received quanta.

If gravity is due to the blocking of exchange (shielding, as described in detail in this post), then the small amount of receding matter beyond the distant supernovae of interest will only produce a similarly small retarding force of gravity on the supernova.

This will be cancelled approximately by the redshifted exchange radiation from the much larger mass within a shell centred on us of radius equal to the distance to the supernova. Hence, gravity will be approximately cancelled. For this mechanism, which is the one with evidence behind it, see the post (particularly its illustration) on the old blog:
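The redshift-weakening argument used in the first mechanism is just E = hf with the received frequency reduced by the factor 1 + z; a minimal sketch:

```python
h = 6.626e-34   # Planck constant, J s

def received_energy(f_emitted, z):
    """Quantum energy E = h*f after redshift z, using
    f_received = f_emitted / (1 + z)."""
    return h * f_emitted / (1.0 + z)

f = 1e15  # an arbitrary emitted frequency, Hz, for illustration
for z in (0, 1, 5):
    print(z, received_energy(f, z))  # energy falls as 1/(1+z)
```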

Experimental evidence that quarks are confined leptons with some minor symmetry transformations

In this blog post it was shown that if, in principle, you could somehow have enough energy - working against the Pauli exclusion principle and Coulomb repulsion - to press three electrons together in a small space, their polarized vacuum veils would overlap, and all three electrons would share the same polarized dielectric field.

Instead of the core electric charge of each electron being shielded by a factor of 137.036, the core electric charge of each electron would be shielded by the factor 3 x 137.036, so the electric charge per electron in the closely confined triad, when seen from a long distance, would be -1/3, the downquark charge.
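The arithmetic of this shielding argument can be sketched in a few lines (note that the 137.036 shielding factor and the shared-polarization triad geometry are this post’s own model, not an established result):

```python
# Illustrative arithmetic only: one electron's core charge shielded by
# 137.036 is defined to give the observed -1 unit charge, so take the
# unshielded core charge as -137.036 units.
core_shielding = 137.036
core_charge = -1.0 * core_shielding

# Three cores sharing one polarized vacuum: shielding factor 3 * 137.036
observed_per_particle = core_charge / (3.0 * core_shielding)

print(round(observed_per_particle, 3))  # -0.333, i.e. the downquark's -1/3
```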

This argument doesn’t apply to the case of 3 closely confined positrons: the upquark charge is +2/3, not +1/3. However, Peter Woit explains very simply the complex chiral relationship between weak and electric charges in the book "Not Even Wrong": positive and negative charges are not the same because of the handedness effects which determine the weak force hypercharge.

According to electroweak unification theory, there are two symmetry properties of the electroweak unification: weak force isospin and weak force hypercharge. The massive weak force gauge bosons couple to weak isospin, while the electromagnetic force gauge boson assumed by SU(2)xU(1) couples to weak hypercharge.

There is a relationship between electric charge Q, weak isospin charge Tz and weak hypercharge Yw:

Q = Tz + (Yw)/2

So the factor of 2 discrepancy in the vacuum polarization logic can be addressed by considering the weak force charges.
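The relation Q = Tz + Yw/2 (the Gell-Mann-Nishijima formula of electroweak theory) can be checked numerically. The sketch below uses the conventional Tz and Yw assignments for the left-handed first-generation fermions, supplied here as assumptions for illustration:

```python
# Gell-Mann-Nishijima relation: Q = T_z + Y_w / 2
# T_z and Y_w below are the standard left-handed first-generation
# electroweak assignments (assumed values, not derived in the post).
from fractions import Fraction

def electric_charge(T_z, Y_w):
    """Electric charge from weak isospin T_z and weak hypercharge Y_w."""
    return T_z + Y_w / 2

fermions = {
    "electron neutrino": (Fraction(1, 2),  Fraction(-1)),
    "electron":          (Fraction(-1, 2), Fraction(-1)),
    "up quark":          (Fraction(1, 2),  Fraction(1, 3)),
    "down quark":        (Fraction(-1, 2), Fraction(1, 3)),
}
for name, (tz, yw) in fermions.items():
    print(name, electric_charge(tz, yw))
# neutrino 0, electron -1, up quark 2/3, down quark -1/3
```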

When a massive amount of energy is involved in particle collisions (interactions within a second of the big bang, for example), the leptons can be collided hard enough against Coulomb repulsion and Pauli exclusion principle force, that they approach closely.

The Pauli exclusion principle isn’t violated; what happens is that the tremendous energy just changes the quantum numbers of the weak and strong force charges (leptons well isolated have zero strong force colour charge!), so that the quantum numbers of 2 or 3 confined leptons are extended to permit them to not violate the exclusion principle.

Direct experimental evidence for the fact that leptons and quarks are different forms of the same underlying entity exists: universality. This was discovered when it was found that the weak force controlled beta decay event

Compelling further experimental evidence that quarks are just leptons with some minor symmetry transformations was analysed by Nicola Cabibbo:

"Cabibbo’s major work on the weak nuclear interaction originated from a need to explain two observed phenomena:

"the transitions between up and down quarks, between electrons and electron neutrinos, and between muons and muon neutrinos had similar amplitudes; and

"the transitions with change in strangeness had amplitudes equal to 1/4 of those with no change in strangeness.

"Cabibbo solved the first issue by postulating weak universality, which involves a similarity in the weak interaction coupling strength between different generations of particles. He solved the second issue with a mixing angle θc, now called the Cabibbo angle, between the down and strange quarks.

"After the discovery of the third generation of quarks, Cabibbo’s work was extended by Makoto Kobayashi and Toshihide Maskawa to the Cabibbo-Kobayashi-Maskawa matrix."

To clarify the previous comment further: the minor symmetry transformations which occur when you confine leptons in pairs or triads to form "quarks" are physically caused by the increased strength of the polarized vacuum, and by the ability of the pairs of short-ranged virtual particles in the field to move between the nearby individual leptons, mediating new short-ranged forces which would not occur if the leptons were isolated. The emergence of these new short ranged forces, which appear only when particles are in close proximity, is the cause of the new nuclear charges, and these charges add extra quantum numbers, explaining why the Pauli exclusion principle isn’t violated. (The Pauli exclusion simply says that in a confined system, each particle has a unique set of quantum numbers.)

On the causality of the Heisenberg uncertainty principle in its energy-time form: the energy-time form of the uncertainty principle is equivalent to the momentum-distance form, as shown in the post, and Popper showed that the latter is causal. The mechanism for pair production by Heisenberg’s uncertainty principle with Popper’s scattering mechanism is that, at high energy (above the Schwinger threshold for pair production, i.e., electric field strengths exceeding 10^18 v/m), the flux of electromagnetic gauge bosons is sufficient to knock pairs of Dirac sea fermions out of the ground state of the vacuum temporarily. This is a bit like the photoelectric effect: photons hitting bound (Dirac sea of vacuum) particles hard enough can temporarily free them, when they become visible as "virtual fermions". Obviously the ground state of the Dirac sea is invisible to detection, since it doesn’t polarize. Maxwell’s error in assuming that it does polarize, without knowing Schwinger’s threshold for pair production, totally contradicts QED results on renormalization (Maxwell’s electrons would have zero electric charge seen from a long distance, because the polarization of the vacuum would be able to extend far enough - without Schwinger’s limit, which corresponds to the IR cutoff - that the real electric charge of the electron would be completely cancelled out!), and so Maxwell’s displacement current is false in fields below 10^18 v/m. What happens instead of displacement current in weak electric fields is radiation exchange, which produces results that have been misinterpreted as displacement current, as proved at:

Recession-caused redshift does work; it is empirically confirmed by redshifts from stars in different parts of rotating galaxies, etc.

There is no evidence for tired light whatsoever. The full spectrum of redshifted light is uniformly displaced to lower frequencies, which rules out most fanciful ideas (scattering of light is frequency dependent, so redshift as observed isn’t due to intergalactic dust).

c = 186,000 miles/second or 300 megametres/second in a vacuum; it is less in a medium filled with strong electromagnetic fields which slow the photon down (e.g. light travelling inside a block of glass).

The vacuum has some electric permittivity and magnetic permeability because it has a Dirac sea in it. The Dirac sea only produces observable pairs of charges above Schwinger’s threshold field strength of 10^18 v/m, which occurs at about 1 fm from the middle of an electron (you can estimate that distance by setting the electron’s Coulomb field strength E, in v/m, equal to the threshold and solving for the radius).

This creates a problem for Maxwell’s theory of light, because his displacement current of vacuum gear cogs and idler wheel dynamics isn’t approximated by the real vacuum unless the electric field is above 10^18 v/m. So Maxwell doesn’t explain how radio waves propagate where the field strength is just a few v/m. The actual mechanism in weak electric fields mimics the Maxwell displacement current mathematically, but is an entirely different physical process:

If the spacetime fabric or Dirac sea was really expanding, you might expect the permittivity and permeability of the vacuum to alter over time, like the velocity of light in a block of glass increasing as the density of the glass decreases due to the glass expanding.

However, what is expanding is the matter of the universe, which is receding. There is no evidence that the spacetime fabric is expanding:

‘Popular accounts, and even astronomers, talk about expanding space. But how is it possible for space … to expand? … ‘Good question,’ says [Steven] Weinberg. ‘The answer is: space does not expand. Cosmologists sometimes talk about expanding space – but they should know better.’ [Martin] Rees agrees wholeheartedly. ‘Expanding space is a very unhelpful concept’.’ – New Scientist, 17 April 1993, pp32-3.

(The volume of spacetime expands, but the fabric of spacetime, the gravitational field, flows around moving particles as the universe expands.)

Think of a load of people running along a corridor in one direction. The air around them doesn’t end up following the people, leaving a vacuum at the end the people come from. Instead, a volume of air equal to the volume of the people moves in the opposite direction, filling in the displaced volume. In short, the people end up at the other end of the corridor while the air moves the opposite way and fills in the volume of space the people have vacated.

There is no reason why the gravitational field should not do the same thing. Indeed, to make it possible to use the stress-energy tensor T_{ab} we have to treat the source of the curvature mechanism as being an ideal fluid spacetime fabric:

‘… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that "flows" … A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighboring fluid elements is pressure.’ - Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp89-90.

This works and is a useful concept. Think about a ship in the sea (like a Dirac sea). The ship goes in one direction, and the water travels around it and fills in the void behind it. So water of volume equal to the ship goes in the opposite direction with a speed equal to the ship’s, and if the ship is accelerating, then the same volume of water has an acceleration equal to the ship’s acceleration but in the opposite direction (this is merely a statement of Newton’s 2nd and 3rd laws of motion: for the ship to accelerate it needs a forward force, and the water carries the recoil force, which is equal and opposite to the forward force, like a rocket’s exhaust gas).

Evidence that the spacetime fabric does push inward as matter recedes outward is easy to obtain. The Hubble recession

v = HR

implies outward acceleration

a = dv/dt

where dt = dR/v (because v = dR/dt)

a = dv/(dR/v) = v*dv/dR

Putting in v = HR gives

a = (H^2)R

That’s the outward acceleration of the mass of the receding universe. By Newton’s 2nd law

F = ma

where m is the mass of the receding universe. That gives the outward force.

By Newton’s 3rd law, there is an equal inward force; this is the graviton-mediated force, and it predicts the strength of gravity.
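A rough numerical sketch of this argument, with illustrative round-number values of H, R and the mass of the universe assumed purely for the order-of-magnitude estimate (they are not figures taken from the post):

```python
# Rough numerical sketch of the outward-force argument:
# a = H^2 * R, then F = m * a by Newton's 2nd law.
# All three input values are illustrative assumptions.
H = 2.3e-18   # Hubble parameter, 1/s (~70 km/s/Mpc)
R = 1.3e26    # an illustrative large radial distance, metres
m = 3e52      # rough order-of-magnitude mass of the observable universe, kg

a = H**2 * R          # outward acceleration
F = m * a             # outward force
print(a)  # ~7e-10 m/s^2
print(F)  # ~2e43 N
```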

"There is one major breakthrough in 20th century physics that I have yet to touch upon, but which is nevertheless among the most important of all! This is the introduction of arXiv.org, an online repository where physicists … can publish preprints (or ‘e-prints’) of their work before (or even instead of!) submitting it to journals. …as a consequence the pace of research activity has accelerated to unheard of heights. … In fact, Paul Ginsparg, who developed arXiv.org, recently won a MacArthur ‘genius’ fellowship for his innovation. …"

But the USA edition, on its corresponding page (also page 1050), says in part:

"… Bibliography … modern technology and innovation have vastly improved the capabilities for disseminating and retrieving information on a global scale. Specifically, there is the introduction of arXiv.org, an online repository where physicists … can publish preprints (or ‘e-prints’) of their work before (or even instead of!) submitting it to journals. …as a consequence the pace of research activity has accelerated to an unprecedented (or, as some might consider, an alarming) degree. …".

However, the USA edition omits the laudatory reference to Paul Ginsparg that is found in the UK edition.

It seems to me that it is likely that the omission of praise of arXiv’s Paul Ginsparg and the inclusion of a reference to the work of now-blacklisted physicist Matti Pitkanen are deliberate editorial decisions.

Also, since the same phrase "… physicists … can publish preprints (or ‘e-prints’) of their work before (or even instead of!) submitting it to journals. …" appears in both editions, it seems to me that Roger Penrose favors the option of posting on arXiv without the delay (and sometimes page-charge expense) of journal publication with its refereeing system.

Therefore, a question presented by these facts seems to me to be:

What events between UK publication on July 29, 2004 and USA publication on February 22, 2005 might have influenced Roger Penrose to make the above-described changes in the USA edition?

There are two possibly relevant events in that time frame of which I am aware:

1 - The appearance around November 2004 of the ArchiveFreedom web site.

2 - The closure of CERN’s EXT-series: http://documents.cern.ch/EDS/current/access/action.php?doctypes=NCP "… CERN’s Scientific Information Policy Board decided, at its meeting on the 8th October 2004, to close the EXT-series. …". Note that the CERN EXT-series had been used as a public repository for their work by some people (including me) who had been blacklisted by arXiv.

Maybe either or both of those two events influenced Roger Penrose in making the above-described changes in the USA edition.

If anyone has any other ideas as to why those changes were made, I would welcome being informed about them.

i.e., a photon itself is - in a sense - a source of photons (which travel in the transverse direction, at right angles to the direction of propagation, and produce the effects normally attributed to Maxwell’s displacement current; the only limitation being that the photon absorbs all the photons it emits, as proved for the example of the two-conductor transmission line with a vacuum dielectric).

However, at high energy - above the Schwinger threshold field strength for pair production in the vacuum by an electromagnetic field - Maxwell’s model is physically justified because the pairs of free charges above that ~10^18 volts/metre field strength threshold (closer than ~1 fm to the middle of an electron) really do polarize and in order to polarize they drift along the electric field lines, causing a "displacement current" in the vacuum. Maxwell’s theory is only false (or incomplete) for the mechanism of a light wave (or other phenomena requiring "displacement current" to propagate) in which the electric field strength is below ~10^18 volts/metre.

At weaker field strengths, bosonic radiation has the same effect as that due to fermion displacement currents above the IR cutoff.

Another thing to watch out for is the spin. Spin 1 means regular spin, like a spinning loop made from a strip of paper with the ends stuck together, which has no twist: a 360 degree turn brings it back to the starting point.

Now imagine making a mark on one side of a Mobius strip and rotating it while looking only at one side of the strip while rotating. Because the Mobius strip (a looped piece of paper with a half twist in it:

http://en.wikipedia.org/wiki/M%C3%B6bius_strip ) has only one surface (not two surfaces - you can prove this by drawing a pen line along the surface of the strip, which will go right the way around ALL surfaces, with a length exactly twice the circumference of the loop!), it follows that you need to rotate it not just 360 degrees but 720 degrees to get back to any given mark on the surface.

Hence, for a 360 degree rotation, the Mobius strip will only complete 360/720 = half a rotation. This is similar to the electron, which has a half-integer spin.
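This 720-degree property can be checked numerically. The sketch below parametrizes a point on the edge of a standard Mobius band (unit centre circle and half-width 0.5 are assumed values) and confirms that the point returns to its start only after a 4*Pi sweep of the parameter, not 2*Pi:

```python
# Numeric check of the Mobius-strip picture: a point on the edge
# of a Mobius band returns to its start only after a 720-degree
# (4*pi) sweep, not after 360 degrees.
import math

def edge_point(theta, w=0.5):
    """Point on the edge of a Mobius band with unit centre circle."""
    r = 1 + w * math.cos(theta / 2)
    return (r * math.cos(theta), r * math.sin(theta), w * math.sin(theta / 2))

close = lambda p, q: all(abs(a - b) < 1e-9 for a, b in zip(p, q))

print(close(edge_point(0), edge_point(2 * math.pi)))  # False: one turn lands elsewhere
print(close(edge_point(0), edge_point(4 * math.pi)))  # True: two turns close the loop
```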

The stringy graviton, assumed by the mainstream to be a spin-2 particle, is the opposite of the Mobius strip: it has twice the normal rotational symmetry instead of just half of it.

This is supposed to make the exchange of such particles result in an attractive force. One well-known (unlike the unobserved spin-2 graviton) example of such a known attractive force is the strong force, mediated between protons and neutrons in the nucleus by vector bosons or gauge bosons called pions:

The story of the strong force is this. The Japanese physicist Nagaoka first came up with the nuclear atom in 1903, but it was dismissed because the protons would all be confined together in the nucleus with nothing to stop the nucleus blowing itself to pieces under the tremendous Coulomb repulsion of electromagnetism.

Then Geiger and Marsden discovered, from the quantitatively large amount of "backscatter" (180 degree scattering angle) of positively charged alpha particles hitting gold, that gold atoms must contain clusters of very intense positive electric charge which can reflect (backscatter) the amount of alpha radiation which was measured to be reflected back towards the source.

Rutherford made some calculations, showing that the results support a nuclear atom with a nucleus containing all the positive charges (the positive charge being equal to all the negative charge, in a neutral atom).

People then had the problem of explaining why the nucleus didn’t explode due to the repulsion between positively charged protons all packed together. It was obvious that there was some unknown factor at work. Either the Coulomb law didn’t hold on very small scales like the nuclear size, or else it did work but there was another force, an attractive force, capable of cancelling out the Coulomb repulsion and preventing the nucleus from decaying in a blast of radiation. The latter theory was shown to be true (obviously, some atoms - radioactive ones and particularly fissile ones like uranium-235 and plutonium-239 - are not entirely stabilized by the attractive force, and randomly decay, or can be caused to break up by being hit).

In 1932, Chadwick discovered the neutron, but because it is neutral, it didn’t help much with explaining why the nucleus didn’t explode due to Coulomb repulsion between protons.

Finally in 1935, the Japanese physicist Yukawa came up with the strong nuclear force theory in his paper "On the Interaction of Elementary Particles, I," (Proc. Phys.-Math. Soc. Japan, v17, p48), in which force is produced by the exchange of "field particles". From the uncertainty principle and the known experimentally determined nuclear size (i.e., the radius of the cross-sectional area for the nucleus to shield nuclear radiations which penetrate the electron clouds of the atom without difficulty), Yukawa calculated that the field particle would have a mass 200 times the electron’s mass.

In 1936 the muon (a lepton) was discovered with the right mass and was hyped as being the Yukawa field particle, but of course it was not the right particle and this was eventually revealed by an analysis of muon reaction rates in 1947 by Fermi, Teller, and Weisskopf: the muon mediated reactions are too slow to mediate strong nuclear forces.

In 1947 the correct Yukawa field particle, the pion, was finally discovered, with a mass of about 273 electron masses.

If we take the Heisenberg uncertainty principle in its energy-time form, E*t = h-bar.

Putting in the mass-energy of the pion (140 MeV), we get a time of t = (h-bar)/E = 5*10^{-24} second.

Multiplying this time by c gives a range of roughly 1.4*10^{-15} m, which is the size of the nucleus (for constant density the nuclear radius only increases slowly with mass number - in proportion to the cube-root of mass number, to be exact - so a plutonium nucleus is only about 6 times the radius of a hydrogen nucleus; obviously the fact that the range of the strong nuclear attractive force does not scale up in proportion to the cube-root of mass number tends to make it less effective at holding the bigger nuclei together, so 100% of the very biggest nuclei are of course unstable and decay), and this distance is also the range of the IR cutoff, and of the related Schwinger limit for pair-production in the vacuum (a field strength of ~10^18 v/m occurs at about this distance from a unit charge).
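The estimate above takes a few lines to reproduce; the constants are standard values, and the 140 MeV pion rest energy is the figure quoted in the text:

```python
# Yukawa range estimate from the energy-time uncertainty relation,
# t ~ hbar / E, using the pion rest energy quoted in the text.
hbar = 1.0546e-34            # J*s
c = 2.998e8                  # m/s
E_pion = 140e6 * 1.602e-19   # 140 MeV converted to joules

t = hbar / E_pion            # lifetime of the virtual pion
r = c * t                    # maximum range it can mediate the force over
print(t)  # ~4.7e-24 s, consistent with the ~5e-24 s quoted above
print(r)  # ~1.4e-15 m, i.e. about 1 fm, the nuclear size scale
```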

He makes the point that even people like Feynman had terrible problems getting anyone to listen to a too-new idea, and he quotes Feynman’s reactions to dismissive ridicule from Teller, Dirac and Bohr at the 1948 Pocono conference:

"… My way of looking at things was completely new, and I could not deduce it from other known mathematical schemes, but I knew what I had done was right.

… For instance,

take the exclusion principle … it turns out that you don’t have to pay much attention to that in the intermediate states in the perturbation theory. I had discovered from empirical rules that if you don’t pay attention to it, you get the right answers anyway …. Teller said: "… It is fundamentally wrong that you don’t have to take the exclusion principle into account." …

… Dirac asked "Is it unitary?" … Dirac had proved … that in quantum mechanics, since you progress only forward in time, you have to have a unitary operator. But there is no unitary way of dealing with a single electron. Dirac could not think of going forwards and backwards … in time …

… Bohr … said: "… one could not talk about the trajectory of an electron in the atom, because it was something not observable." … Bohr thought that I didn’t know the uncertainty principle …

… it didn’t make me angry, it just made me realize that … [ they ] … didn’t know what I was talking about, and it was hopeless to try to explain it further.

I gave up, I simply gave up …".

What happened was that Dyson wrote a paper showing that Feynman’s approach is equivalent to Schwinger’s and Tomonaga’s. [Unfortunately, Dyson ignored someone whose work had preceded all of them, namely E.C.G. Stueckelberg, Annalen der Physik, vol. 21 (1934), whose paper was even harder for contemporaries to grasp than Feynman’s; people like Pauli spent great efforts trying to grasp Stueckelberg in the 1930s and gave up, which to me shows a danger in being too abstract, mathematically speaking. Some, like Chris Oakley - http://www.cgoakley.demon.co.uk/qft/ - and to some degree also Peter Woit, Danny Lunsford, and others, may have too much respect for mathematical rigor over the physical processes being modelled mathematically: they view pictorial mechanisms, and possibly solid predictions as well, as being kid’s stuff, and seem to think that the more mathematically abstract, and less readily understandable, your work is to your contemporaries, the better.]

"… the first seminar was a complete disaster because I tried to talk about what Feynman had been doing, and Oppenheimer interrupted every sentence and told me how it ought to have been said, and how if I understood the thing right it wouldn’t have sounded like that. He always knew everything better, and was a terribly bad organiser of seminars.

"I mean he would - he had to have the centre stage for himself and couldn’t shut up [like string theorists today!], and we couldn’t tell him to shut up. So in fact, there was very little communication at all.

"Well, I felt terrible and I remember going out after this seminar and going to Cecile for consolation, and Cecile was wonderful …

"I always felt Oppenheimer was a bigoted old fool. … And then a week later I had the second seminar and it went a little bit better, but it still was pretty bad, and so I still didn’t get much of a hearing. And at that point Hans Bethe somehow heard about this and he talked with Oppenheimer on the telephone, I think. …

"I think that he had telephoned Oppy and said ‘You really ought to listen to Dyson, you know, he has something to say and you should listen. And so then Bethe himself came down to the next seminar which I was giving and Oppenheimer continued to interrupt but Bethe then came to my help and, actually, he was able to tell Oppenheimer to shut up, I mean, which only he could do. …

"So the third seminar he started to listen and then, I actually gave five altogether, and so the fourth and fifth were fine, and by that time he really got interested. He began to understand that there was something worth listening to. And then, at some point - I don’t remember exactly at which point - he put a little note in my mail box saying, ‘nolo contendere’."

According to Freeman Dyson, in his 1981 essay Unfashionable Pursuits (reprinted in From Eros to Gaia (Penguin 1992, at page 171)), "… At any particular moment in the history of science, the most important and fruitful ideas are often lying dormant merely because they are unfashionable. Especially in mathematical physics, there is commonly a lag of fifty or a hundred years between the conception of a new idea and its emergence into the mainstream of scientific thought. If this is the time scale of fundamental advance, it follows that anybody doing fundamental work in mathematical physics is almost certain to be unfashionable. …"

According to the Bohm biography Infinite Potential, by F. David Peat (Addison-Wesley 1997) at pages 101, 104, and 133:

"… when his [Bohm’s] … Princeton University … teaching … contract came up for renewal, in June [1951], it was terminated. … Renewal of his contract should have been a foregone conclusion … Clearly the university’s decison was made on political and not on academic grounds … Einstein was … interested in having Bohm work as his assistant at the Institute for Advanced Study … Oppenheimer, however, overruled Einstein on the grounds that Bohm’s appointment would embarrass him [Oppenheimer] as director of the institute. … Max Dresden … read Bohm’s papers. He had assumed that there was an error in its arguments, but errors proved difficult to detect. … Dresden visited Oppenheimer … Oppenheimer replied … "We consider it juvenile deviationism …" … no one had actually read the paper … "We don’t waste our time." … Oppenheimer proposed that Dresden present Bohm’s work in a seminar to the Princeton Institute, which Dresden did. … Reactions … were based less on scientific grounds than on accusations that Bohm was a fellow traveler, a Trotskyite, and a traitor. … the overall reaction was that the scientific community should "pay no attention to Bohm’s work." … Oppenheimer went so far as to suggest that "if we cannot disprove Bohm, then we must agree to ignore him." …".

The Schwinger correction for the magnetic moment of leptons (i.e., the first Feynman diagram coupling correction term)

In the text of this blog post I wrote:

‘There is a similarity in the physics between these vacuum corrections and the Schwinger correction to Dirac’s 1 Bohr magneton magnetic moment for the electron: corrected magnetic moment of electron = 1 + {alpha}/(2*{Pi}) = 1.00116 Bohr magnetons. Notice that this correction is due to the electron interacting with the vacuum field, similar to what we are dealing with here.’

In comment 19 above, I wrote:

‘Now imagine making a mark on one side of a Mobius strip and rotating it while looking only at one side of the strip while rotating. Because the Mobius strip (a looped piece of paper with a half twist in it:

http://en.wikipedia.org/wiki/M%C3%B6bius_strip ) has only one surface (not two surfaces - you can prove this by drawing a pen line along the surface of the strip, which will go right the way around ALL surfaces, with a length exactly twice the circumference of the loop!), it follows that you need to rotate it not just 360 degrees but 720 degrees to get back to any given mark on the surface.

‘Hence, for a 360 degree rotation, the Mobius strip will only complete 360/720 = half a rotation. This is similar to the electron, which has a half-integer spin.’

What I should have mentioned in the post is the mechanism for why the first (i.e., the Schwinger) vacuum virtual charge coupling correction term is alpha/(2*{Pi}) added to the electron’s core magnetic moment of 1 Bohr magneton (Dirac’s result).

{magnetic moment of bare electron core, neglecting the interaction picture with the surrounding virtual charges in the vacuum}*(1 + {alpha}/(2*{Pi}))

because

the virtual particle which is adding to the bare core magnetic moment is shielded from the bare core by the intervening polarized vacuum, which causes the alpha (i.e., 1/137.036) dimensionless shielding factor to appear in the correction term.

In addition, there is a spin correction factor which reduces the contribution of the magnetism from the virtual charge by a factor of 2*{Pi}.

Notice that we may need to think of a Mobius strip in a particular way (comment 19, as quote above) to explain causally why the electron is spin-1/2.

The 2*Pi reduction factor is equal to the ratio of the exposed electron perimeter (if the electron is a loop) when seen along the longitudinal axis of symmetry passing through the middle of the loop to that seen side on.

Hence, if you look at a loop side on, you see a length (and an associated area) which is smaller than the length and area visible when looking at the loop from above (or from below): the circumference is {Pi} times the diameter, i.e. 2*{Pi} times the radius.

There was a trick suggested during President Reagan’s 1980s ‘Star Wars’ (SDI project) whereby you can protect a missile partly from laser bursts by giving it a rapid spin. This means that the flux per unit area which will be received on the side of the missile from a laser (or whatever) is reduced by a factor of 2*{Pi} if the missile is spinning about its long axis, compared to the non-spinning scenario.

So there you have candidate explanations for why the first virtual particle correction to the magnetic moment of a lepton doesn’t give a total of 1 + 1 = 2 Bohr magnetons, but merely 1 + 1/(137.036*2*Pi) = 1.00116 Bohr magnetons.
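The quoted figure is easy to verify:

```python
# Schwinger's first-order correction to the electron magnetic
# moment: 1 + alpha/(2*pi), with alpha = 1/137.036 as in the text.
import math

alpha = 1 / 137.036
moment = 1 + alpha / (2 * math.pi)
print(round(moment, 5))  # 1.00116 Bohr magnetons
```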

Obviously, it’s a sketchy explanation. However, it becomes clearer when you look at what a black hole/trapped TEM (transverse electromagnetic) wave looks like (Electronics World, April 2003 issue): draw a circle to represent the path of propagation of the trapped TEM wave, and draw radial (outward) lines of electric field coming from that line, which are orthogonal to the direction of propagation of the TEM wave at the point they radiate from. Next, draw the magnetic field lines looping around the direction of propagation: the magnetic field lines are orthogonal to both the direction of propagation and to the electric field lines (see the Poynting-Heaviside vector). What you discover is that at big distances (distances many times the diameter of the electron loop), the magnetic field lines (which have a toroidal shape close to the electron) do form a magnetic dipole with the poles (radial magnetic field lines) running along the axis of rotation of the electron’s loop.

Those polar magnetic field lines which are parallel to the radial electric field lines (at big distances from the electron) are - unlike the electric field lines - not shielded by the electric polarization of the vacuum. The electric field lines are shielded simply because the electric polarization (displacement of virtual charges) opposes the electric field of the electron core by creating a radial electric field pointing in the opposite direction to that from the core of the electron (conventionally, the electric field vector or arrow points inwards, towards a negative charge, so the radial electric field created by the polarized vacuum points outward, partially cancelling the electron’s charge as seen from great distances). There is no interaction between parallel electric and magnetic field lines (if there were, they wouldn’t be separate fields; the electric and magnetic fields interact according to Maxwell’s two curl equations, which are composed of Faraday’s law and Ampere’s law plus Maxwell’s displacement current term for vacuum effects - both of which have the magnetic field and electric field at right angles for biggest interaction and show that there is never an interaction for parallel magnetic and electric force field vectors).

Hence, the polar magnetic dipole field from the electron is not shielded by the pairs of virtual charges in the vacuum (the other magnetic field lines, which are not parallel but more nearly at right angles to the radial lines from the middle of the electron, will generally be shielded like electric field lines, of course).

This justifies why there are two important terms (neglecting the higher order corrections for other vacuum interactions, which are trivial in size), 1 + alpha/(2*{Pi}) where 1 is the bare core charge and the second term is the contribution from a vacuum charge aligning with the core.

There is obviously more to be done to illustrate this mechanism clearly with diagrams and to use the resulting simple physical principles to make predictive calculations of other vacuum interactions in far more simplified and quick way than existing mainstream methods.

In the meantime, here are two other aspects of the loop electron. Hermann Weyl points out in The Theory of Groups and Quantum Mechanics, Dover, 2nd ed., 1931, page 217:

"[In the Schroedinger equation] The charge contained in V [volume] is … capable of assuming only the values -e and 0, i.e. according to whether the electron is found in V or not. In order to guarantee the atomicity of electricity, the electric charge must equal -e times the probability density. But if we base our theory on the de Broglie wave equation, modified by introducing the electromagnetic potentials in accordance with the rule [replacing (1/i)d/dx_a by {(1/i)d/dx_Alpha} + {(e/{hc}){Phi_Alpha}], we find as the expression for the charge density one involving the temporal derivative d{Psi}/dt in addition to Psi; this expression has nothing to do with the probability density and is not even an idempotent form. According to Dirac, this is the most conclusive argument for the stand that the differential equations for the motion of an electron in an electromagnetic field must contain only first order derivatives with respect to the time. Since it is not possible to obtain such an equation with a scalar wave function which satisfies at the same time the requirement of relativistic invariance, the spin appears as a phenomenon necessitated by the theory of relativity." (Emphasis as in original.)

The simple way to link relativity to spin was published in the Electronics World Apr. 2003 article: the spin is the flow of the TEM wave energy around the loop (for a half-integer spin fermion, the polarization vector changes direction as it goes around the loop, hence making the field twisted, like a Mobius strip):

Let the whole electron (TEM wave loop) propagate at velocity v along its axis of symmetry, i.e., it propagates along the axis of "spin". Let the spin speed of the TEM wave energy going around the loop be x. Since the vectors v and x are perpendicular, their resultant is given by Pythagoras’ theorem:

(v^2) + (x^2) = c^2

We let the resultant be c^2 because empirically confirmed electromagnetism and relativity require the electromagnetic field energy to propagate at speed c.

A measure of time for an electron is the spin speed (just as the measure of time for a thrown watch is how fast the hands spin around the watch face). The relative rate of time, hence relative spin speed, normalized to unity for electron propagation speed v = 0, is

t’/t = x/c = [[(c^2) - (v^2)]^{1/2}]/c

= [1 - (v/c)^2]^{1/2}

which is the usual time-dilation factor in the FitzGerald-Lorentz contraction.
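As a check, a few lines of Python (a sketch, working in units where c = 1) confirm that the perpendicular velocity sum v^2 + x^2 = c^2 reproduces the usual time-dilation factor:

```python
import math

C = 1.0  # work in units where c = 1

def spin_speed(v):
    # transverse spin speed x from v^2 + x^2 = c^2
    return math.sqrt(C**2 - v**2)

def time_dilation(v):
    # standard FitzGerald-Lorentz factor sqrt(1 - (v/c)^2)
    return math.sqrt(1.0 - (v / C)**2)

for v in (0.0, 0.5, 0.9, 0.99):
    # t'/t = x/c should equal the Lorentz factor
    assert abs(spin_speed(v) / C - time_dilation(v)) < 1e-12
```

The equality is just Pythagoras rearranged, but the loop makes the claimed identity explicit for several speeds.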

To get the length contraction in the direction of motion, you can argue that the observable (distance)/(time) = c, hence to preserve this ratio any time-dilation factor must be accompanied by an identical length contraction factor. A more physical explanation, however, is that the vacuum "drag" (actually not continuous drag, but just resistance to changes in speed, i.e., resistance to acceleration) causes contraction:

In 1949, the Dirac sea was shown to explain the length contraction and mass-energy effects for a crystal-like ground state of the vacuum (i.e., at electric field strengths lower than Schwinger’s threshold for pair production of 10^18 v/m, and below the IR cutoff energy, so that there are no pairs of virtual particles forming a dissipative gas, like steam coming off crystalline water - ice - by sublimation). The reference states:

‘It is shown that when a Burgers screw dislocation [in a crystal] moves with velocity v it suffers a longitudinal contraction by the factor [1 - (v/c)^2]^{1/2}, where c is the velocity of transverse sound. The total energy of the moving dislocation is given by the formula E = E(o)/[1 - (v/c)^2]^{1/2}, where E(o) is the potential energy of the dislocation at rest.’

‘If I accept that these theories (or "schemes") make zero predictions, do they still give me a unified description of the fundamental forces?’

Supersymmetry is the theory required to ‘unify’ forces, and it does that by getting all the SM forces to have equal charges (coupling constants) near the Planck scale, presumably because that looks prettier on a graph than the SM prediction (in which the 3 interaction coupling constants - for EM, weak, and strong forces - do not meet at a single point near the Planck scale).

The problems here are massive. The experimental evidence available confirms the SM, and the supersymmetric model extrapolated to low energy seems to contradict experimental results. See Woit’s book Not Even Wrong, UK ed., page 177 [using the measured weak SU(2) and electromagnetic U(1) forces, supersymmetry predicts the SU(3) force incorrectly high by 10-15%, when the experimental data is accurate to a standard deviation of about 3%].

In addition, why should the three SM forces unify by having the same strength at the Planck scale? It’s group-think galore with no solid facts behind any of it. Planck never determined the Planck scale from a solid theory, let alone observed it. It just seems to be a way to glorify his constant, h (e.g. the length 2mG/c^2, where m is the electron mass, is not only much smaller than the Planck length, but is also more meaningful, since it is a black hole radius). Why should they meet at a point anyway?

String theory hasn’t succeeded in usefully putting gravity into the standard model. Furthermore, unlike supersymmetry which at least has been found to disagree with experiment (as mentioned), string theory says nothing remotely checkable about gravity. All it does is to allow vacuous claims like:
‘String theory has the remarkable property of predicting gravity.’ - Dr Edward Witten, M-theory originator, Physics Today, April 1996.

What he means is presumably that it predicts speculative things called gravitons, or that it might predict something about gravity, some day. Congratulations, in advance! But why can’t these string theorists admit that their ‘theory’ doesn’t even exist, and the claims made for it are just a lot of hype that help censor alternatives?

A bit more about the Weyl-suggested link between particle spin and relativity outlined in comment 22 above: the reason why a particle with spin will generally move along the axis of spin, so that the plane of spin is orthogonal to the direction of propagation of the whole particle (relative to the surrounding "gravitational field", i.e., the absolute spacetime of general relativity, the evidence for which was discussed in an earlier comment), is that this makes the spin speed consistent at each point around the loop of the electron. If the angle between the electron’s propagation and the plane in which the electron spins is anything other than 90 degrees, the speed will vary around the circumference of the electron instead of remaining constant, and this will result in a net transmission of energy as oscillating waves (in addition to the usual equilibrium of Yang-Mills force-causing exchange radiation which constitutes the electromagnetic field).

When electrons are deflected in direction, they do indeed emit "synchrotron radiation". This is a well known effect and is well confirmed experimentally! It’s not speculation in any way. It’s a plain fact.

What I need to do now is to investigate Hawking’s radiation as a source for the electromagnetic charged gauged bosons:

Furthermore, calculations show that Hawking radiation from electron-mass black holes has the right force as exchange radiation of electromagnetism:

Conventionally, Hawking radiation is supposed to occur when a pair of fermion-antifermion particles appears near a black hole event horizon. One of them (fermion or antifermion) falls into the black hole, while the other escapes and later annihilates with another such particle (this is the key point: there is an implicit assumption in Hawking’s theory which states that, on average, as many virtual positrons as virtual electrons escape, i.e., you get gamma rays given off because you end up with an equal number of escaped particles and escaped anti-particles, so they can all annihilate into uncharged gamma rays). So in Hawking’s theory, all the radiation gets converted into gamma radiation. Simple.

However, the black holes we’re dealing with are not the same as those Hawking’s calculations apply to. We’re dealing with fermions as black holes, which means they carry a net electric charge. This electric charge dramatically alters the selection of which one of the pair (fermion and antifermion, for example: electron and positron), falls into the black hole, and which one escapes.

We’re back to vacuum polarization and displacement current again: the particles which will be swallowed up by the black hole will tend to have an opposite charge to the black hole’s charge.

This means that a fermion black hole does not tend to produce uncharged gamma rays: the particles it allows to escape all have like sign and can’t annihilate into gamma rays.

So it is fully consistent with the mechanism in this blog post: Hawking radiation doesn’t produce gravity, it produces the charged exchange radiation we need for electromagnetism.

A black body at the Hawking temperature of an electron-mass black hole radiates 1.3*10^205 watts/m^2 (via the Stefan-Boltzmann radiation law). Taking the spherical radiating surface area 4*Pi*r^2 for the black hole event horizon radius r = 2Gm/c^2, where m is the electron mass, an electron has a total Hawking radiation power of

3*10^92 watts

But that’s Yang-Mills electromagnetic force exchange (vector boson) radiation. Electrons don’t evaporate; they are in equilibrium with the reception of radiation from other radiating charges.

So the electron core both receives and emits 3*10^92 watts of electromagnetic gauge bosons, simultaneously.

The momentum of absorbed radiation is p = E/c, but in this case the exchange means that we are dealing with reflected radiation (the equilibrium of emission and reception of gauge bosons is best modelled as a reflection), where p = 2E/c.

The force of this radiation is the rate of change of the momentum, F = dp/dt ~ (2E/c)/t = 2P/c, where P is power.

Using P = 3*10^92 watts as just calculated,

F = 2P/c = 2(3*10^92 watts)/c = 2*10^84 N.

For gravity, the model in this blog post gives an inward and an outward gauge boson calculation F = 7*10^43 N.

So the force of Hawking radiation for the black hole is higher than my estimate of gravity by a factor of [2*10^84] / [7*10^43] = 3*10^40.

This figure of approximately 10^40 is indeed the ratio between the force coupling constant for electromagnetism and the force coupling constant for gravity.

So the Hawking radiation force seems to indeed be the electromagnetic force!
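The figures above can be reproduced to order of magnitude in a few lines of Python. This sketch assumes the standard Hawking temperature formula T = hbar*c^3/(8*Pi*G*m*k_B), which the text does not quote explicitly, together with the Stefan-Boltzmann law and the horizon radius r = 2Gm/c^2:

```python
import math

# physical constants (SI)
G     = 6.674e-11   # gravitational constant
c     = 2.998e8     # speed of light
hbar  = 1.055e-34   # reduced Planck constant
k_B   = 1.381e-23   # Boltzmann constant
sigma = 5.670e-8    # Stefan-Boltzmann constant
m_e   = 9.109e-31   # electron mass

# Hawking temperature of a black hole of electron mass (assumed formula)
T = hbar * c**3 / (8 * math.pi * G * m_e * k_B)

# black-body intensity and event-horizon surface area
intensity = sigma * T**4            # of order 10^205 W/m^2
r = 2 * G * m_e / c**2              # event horizon radius
area = 4 * math.pi * r**2

P = intensity * area                # total power, of order 10^92 W
F = 2 * P / c                       # force of 'reflected' exchange radiation

print(f"T = {T:.1e} K, P = {P:.1e} W, F = {F:.1e} N")
```

Running this gives T of order 10^53 K, P of order 10^92 W and F of order 10^84 N, consistent with the rounded figures quoted in the text.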

Electromagnetism between fundamental particles is about 10^40 times stronger than gravity.

The exact figure of the ratio depends on whether the comparison is for two electrons, an electron and a proton, or two protons (the Coulomb force is identical in each case, but the ratio varies because of the different masses affecting the gravity force).
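That mass dependence is easy to make concrete with standard constants. The sketch below computes the Coulomb-to-gravity force ratio ke^2/(G*m1*m2) for the three pairings mentioned; the often-quoted "10^40" lies between the electron-electron and electron-proton values:

```python
# Coulomb-to-gravity force ratio k*e^2/(G*m1*m2) for different particle
# pairs; the Coulomb force is the same in each case, only the masses change.
G   = 6.674e-11      # gravitational constant
k_e = 8.988e9        # Coulomb constant, 1/(4*Pi*eps0)
e   = 1.602e-19      # elementary charge
m_e = 9.109e-31      # electron mass
m_p = 1.673e-27      # proton mass

def em_over_gravity(m1, m2):
    return k_e * e**2 / (G * m1 * m2)

print(f"electron-electron: {em_over_gravity(m_e, m_e):.1e}")  # ~4e42
print(f"electron-proton:   {em_over_gravity(m_e, m_p):.1e}")  # ~2e39
print(f"proton-proton:     {em_over_gravity(m_p, m_p):.1e}")  # ~1e36
```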

To lucidly clarify the distinction between a "virtual fermion" and a real (long lived) fermion, the best thing to do is to quote the example Glasstone gives in a footnote in his 1967 Sourcebook on Atomic Energy (3rd ed.):

"In the interaction of a nucleon of sufficiently high (over 300 MeV) energy with another nucleon, a virtual pion can become a real (or free) pion, i.e., at a distance greater than about 1.5*10^{-13} cm from the nucleon. Such a free pion can then be detected before it is either captured by a nucleon or decays into a muon. This is the manner by which pions are produced …"

See Figure 5 in the post to explain how two fermion-like exchange radiation components, while passing through one another due to the exchange process between charges, acquire boson-like behaviour. This explains how the Hawking-type fermion radiation constitutes vector bosons. It’s experimentally validated by transmission line logic signals which propagate like bosons if there are two conductors with opposite currents in each, see

(There are other examples of pairing of fermions to create bosons, such as the Bose-Einstein condensate. Simply get the outer electrons in two atoms to pair up coherently, and the result is like a boson. This is how you get superconductivity, frictionless fluids, and such like in low temperature physics. Vibrations, due to thermal energy, at higher temperatures destroy the coherence.)

(To be precise, superconductivity is pairing of conduction electrons into Cooper pairs. Presumably, as explained in comment 22 above, the reason why the vacuum virtual particles increase the magnetic moments of leptons by the amount they do is that on average there is always one virtual lepton pairing with the real lepton core, allowing for a weakening in coherence due to the shielding effect of the polarized vacuum, and the geometry.)

Work out the correct way that SU(2) symmetry works under the lepton => quark transformation mechanism (see, for instance, comment 13 above).

The reason why a downquark (which is the key clue for the mechanism) has -1/3 unit charge is that precisely 2/3rds of the electron charge energy is being transformed into the weak and strong forces.

It’s easy to calculate the energy of a field: the electric field strength is given by Gauss’ law, i.e., you take Coulomb’s force law and put force F = qE (this is a vector equation, but since the E field lines are radial and the Coulomb force acts in the radial direction, along E field lines, for our purposes it is fine to take it as a scalar, so long as we are dealing with small unit charges with radial E field lines), where q is the charge being acted upon and E is the electric field strength in volts/metre, so E = F/q. (This means that E is identical to Coulomb’s law except that there is just one term for charge in the numerator, instead of two.)

The amount of electric field energy per unit volume at a given E strength is well known: charge up a capacitor with a known volume between the capacitor plates and measure the energy you put into it as well as the uniform E field strength in it, and you have the relationship:

Electric field energy density =

(1/2)*{electric permittivity of free space}*E^2

J/m^3.

Similarly for magnetic fields:

Magnetic field energy density =

(1/2)*{1/magnetic permeability of free space}*B^2

J/m^3.
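A minimal sketch of these two energy-density formulas, evaluated for illustrative (not source-given) field strengths of 1 MV/m and 1 tesla:

```python
# Energy densities u_E = (1/2)*eps0*E^2 and u_B = B^2/(2*mu0)
eps0 = 8.854e-12      # F/m, electric permittivity of free space
mu0  = 1.257e-6       # H/m (4*Pi*10^-7), magnetic permeability of free space

def u_electric(E):    # E in V/m, result in J/m^3
    return 0.5 * eps0 * E**2

def u_magnetic(B):    # B in tesla, result in J/m^3
    return B**2 / (2 * mu0)

print(u_electric(1e6))   # 1 MV/m  -> about 4.4 J/m^3
print(u_magnetic(1.0))   # 1 tesla -> about 4.0e5 J/m^3
```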

However, there is a minor problem in that the trapped energy in a capacitor isn’t "static":

"a) Energy current can only enter a capacitor at the speed of light.

"b) Once inside, there is no mechanism for the energy current to slow down below the speed of light."

http://www.ivorcatt.com/1_3.htm (To avoid confusion in linking to Catt, I have to add the note that although there are important findings by Catt, I disagree with many of Catt’s own slipshod interpretations of the results of his own research, and I note that he refuses to engage in discussions with a view to improving the material.)

If you look at the top diagram on the page

http://www.ivorcatt.com/1_3.htm , you see that electromagnetic energy in a charged object is in equilibrium, containing trapped Poynting-Heaviside "energy current" (TEM wave) that oscillates in both directions, travelling through itself, so that the B field curls oppose each other and appear to cancel, while the E fields add together.

So we get into the problem of how energy conservation applies to the situation where fields cancel out: half way between a positron and an electron, is there no field energy density, or is there equal positive and negative electromagnetic field energy? Put it like this: if you have a tug of war, and the teams are evenly matched so that the rope doesn’t move, is that the same thing as having no strain on the rope? Force fields are just like the tug of war: you only see them when they make charges accelerate. The late Dr Arnold Lynch, a leading expert in microwave beaming interference problems, pointed out to me by letter that because radio waves are boson-like radiation, they can pass through each other in opposite directions and, if they are out of phase during the overlap, the fields "cancel out" totally, but the energy continues propagating and re-emerges after the overlap just as before. Hence, spacetime can contain "hidden" electromagnetic field energy.

[This has nothing to do with Young’s "explanation" of the famous double slit experiment, where he claimed that you should get light waves arriving in the dark fringes but their fields are out of sync and interfere, cancelling out. Clearly, firing one photon at a time, when you consider the need for energy conservation in the double slit experiment, you have to admit that Young’s "explanation" is just plain wrong. You can do the double slit experiment with fairly efficient photomultipliers, and nobody has found that half the photons (those that should arrive in the dark fringes in Young’s explanation) are unaccounted for.]

My reason for this diversion is that, when you integrate the energy density for the electron over radial distance using the E field of an electron, you get a result that is generally far too big.

So what traditionally is done is a normalization in which the known (or assumed) rest mass-energy of the electron is set equal to the integration result, and the latter is adjusted to give an electron radius which yields the correct answer!

This calculation yields what is known as the "classical electron radius", about 2.8 fm.

Notice that this result is the same as the radius of the 0.511 MeV IR cutoff commonly used in QFT (assuming Coulomb scattering you calculate the closest approach of two electrons each with a kinetic energy of 0.511 MeV - equal to the rest-mass energy of an electron - and this distance is the classical electron radius). It is also close to the radius for the pair-production threshold electric field strength as calculated from Schwinger’s formula, so it marks the maximum range from the electron core, out to which electron-positron pairs can briefly pop into existence, introducing chaos and electric polarization (charge shielding) effects into the otherwise classical-type electric field, which is the simple field at greater distances.
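A quick check of this radius from standard constants: setting the Coulomb potential energy e^2/(4*Pi*eps0*r) equal to the electron rest-mass energy mc^2 and solving for r gives the classical electron radius:

```python
import math

# classical electron radius: set the Coulomb potential energy
# e^2/(4*Pi*eps0*r) equal to the electron rest-mass energy m*c^2
eps0 = 8.854e-12     # F/m, permittivity of free space
e    = 1.602e-19     # C, elementary charge
m_e  = 9.109e-31     # kg, electron mass
c    = 2.998e8       # m/s, speed of light

r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)
print(f"{r_e:.3e} m")   # about 2.82e-15 m, i.e. roughly 2.8 fm
```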

The classical electron radius needs to be explained in terms of QFT: the chaotic field of virtual particles within 2.8 fm of an electron core means that it doesn’t contribute to the rest-mass energy at all. So when an electron and a positron annihilate, only part of the total energy is converted into a pair of gamma rays, so E=mc^2 is not the total energy, merely that portion of the total which is released by annihilation. "Energy" is always a problem in physics because we have to distinguish directed, useful energy from random, useless energy (the second law of thermodynamics tells us that not all energy is equal; energy can only do work if a heat sink is available, which is not the case for degraded energy).

Presumably the classical electron radius just marks the boundary of chaotic, useless energy which can’t ever be released to do useful work.

To return to the downquark charge question, the -1/3 charge of the downquark correlated to the -1 charge of the electron as discussed earlier: 2/3rds of the electron energy is transformed into hadron binding energy when leptons are transformed into quarks.

This electric charge energy is not all transformed into the strong force energy, because there is the weak force to consider as well. Chiral symmetry needs to be taken into account. Working out the full details of the correct replacement for the Higgs mass-giving mechanism, when replacing SU(2)xU(1) by some modified SU(2) or SU(2)xSU(2) scheme, is the priority. Hopefully an SU(2) with a mass-giving field, giving masses to the bosons in a particular high-energy zone, will explain chiral symmetry and the excess of matter over antimatter in the universe.

Feynman writes usefully on the evidence that the weak gauge bosons (charged W’s, and neutral Z) are just variants on the photon (see Figure 5 in this blog post for the details):

"The observed coupling constant for W’s is much the same as that for the photon - in the neighborhood of j [Feynman’s symbol j is related to alpha or 1/137.036… by: alpha = j^2 = 1/137.036…]. Therefore the possibility exists that the three W’s and the photon are all different aspects of the same thing. [This seems to be the case, given how the handedness of the particles allows them to couple to massive particles, explaining masses, chiral symmetry, and what is now referred to in the SU(2)xU(1) scheme as ‘electroweak symmetry breaking’.] Stephen Weinberg and Abdus Salam tried to combine quantum electrodynamics with what’s called the ‘weak interactions’ (interactions with W’s) into one quantum theory, and they did it. But if you just look at the results they get you can see the glue [Higgs mechanism problems], so to speak. It’s very clear that the photon and the three W’s [W+, W- and Wo or Zo] are interconnected somehow, but at the present level of understanding, the connection is difficult to see clearly - you can still see the ’seams’ [Higgs mechanism problems] in the theories; they have not yet been smoothed out so that the connection becomes … more correct." [Emphasis added.]

Referring to Fig. 5 in this post, we see the distinction between real and virtual (gauge boson) "photons". The gauge boson type "photon" has extra polarizations. This is Feynman’s explanation in his book QED, Penguin, 1990, p120:

"Photons, it turns out, come in four different varieties, called polarizations, that are related geometrically to the directions of space and time. Thus there are photons polarized in the [spatial] X, Y, Z, and [time] T directions. (Perhaps you have heard somewhere that light comes in only two states of polarization - for example, a photon going in the Z direction can be polarized at right angles, either in the X or Y direction. Well, you guessed it: in situations where the photon goes a long distance and appears to go at the speed of light, the amplitudes for the Z and T terms exactly cancel out. But for virtual photons going between a proton and an electron in an atom, it is the T component that is the most important.)"

This accords with Figure 5 in this blog post where an electromagnetic force-carrying gauge/vector boson has important time polarization because of the exchange mechanism.

If you think it permissible to bring up the McCarthy era as an analogy to criticisms of the cosmic landscape, you may escalate the hostilities because others will draw analogies between string propaganda and the propaganda of certain historical dictatorships, etc.

The following question in my opinion can more appropriately be directed to those who popularize pseudoscience, than to those who combat it:

More about comment 22. The first coupling correction or Feynman diagram (which gives Schwinger’s alpha/(2*{Pi}) addition to Dirac’s 1 Bohr magneton for the magnetic moment of leptons) is for the electron to emit and then absorb a photon.

In order for this physical process to occur, a photon is emitted by the real electron and then reflected back (absorbed and re-emitted) by one of the virtual electrons in the surrounding vacuum.

This is interesting, but remember that super-tiny black holes are just as exciting!

Uncharged black holes emit gamma radiation because virtual fermion pair production near the event horizon leads one of the pair to fall in and the other to escape and become a real particle.

So if the black hole is uncharged, on average you will get as many fermions as antifermions (e.g., as many electrons as positrons) leaking from the event horizon, and these will annihilate each other to form gamma rays.

The way to really confirm black holes is to detect this radiation.

However, I’ve got two developments on this.

First, pair production doesn’t occur everywhere in space. It only occurs above a minimum electric field strength calculated by Schwinger in 1951, the threshold being about 1.3*10^18 v/m (equation 8.20 in

So in order for there to be Hawking radiation, the black hole needs to be accompanied by a strong electric field of more than 1.3*10^18 volts/metre at the event horizon. Hence, black holes must be charged to emit Hawking radiation.

"One has to note however that for non-abelian groups the curvature of the additional dimensions will not vanish, thus flat space is no longer a solution to the field equations. However, it turns out that the number of additional dimensions one needs for the gauge symmetries of the Standard Model U(1)xSU(2)xSU(3) is 1+2+4=7 [10]. Thus, together with our usual four dimensions, the total number of dimensions is D=11. Now exponentiate this finding by the fact that 11 is the favourite number for those working on supergravity, and you’ll understand why KK was dealt as a hot canditate for unification."

Thanks for a nice brief summary of the mainstream KK and supergravity idea, but that vital reference [10] in the text doesn’t occur in your list of references, which only goes up to reference [9].

The usual assumption of only one time-like dimension is a bit crazy! In spacetime, distances can be described by times (t = x/c).

Hence, the 3 orthogonal spatial dimensions can be represented by 3 orthogonal time dimensions.

Back in 1916, when general relativity was formulated, this was no use, because the universe was supposed to be a static universe; but with 3 continuously expanding dimensions in the big bang (there’s no gravitational deceleration observable), it makes sense to relate those expanding dimensions to time, and to distinguish them from the 3 spatial dimensions of matter, which don’t expand because the forces holding them together are very strong. Indeed, matter or energy generally is contracted spatially in gravitation. Feynman calculated that the Earth suffers a radial contraction of GM/(3c^2) = 1.5 millimetres. That really shows that spatial dimensions describing matter should be distinguished from those describing the continuous expansion of the universe. There’s no overlap; you just have 3 overall dimensions split into two parts: those that describe contractable matter and those which describe expanding space.
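Feynman’s contraction estimate is easy to verify numerically (the Earth mass value below is an assumed standard figure, not given in the text):

```python
# Feynman's estimate of the gravitational radial contraction of the
# Earth, GM/(3*c^2), using standard values for G, c and Earth's mass
G = 6.674e-11        # gravitational constant
c = 2.998e8          # speed of light
M_earth = 5.972e24   # kg, Earth's mass (assumed value)

contraction = G * M_earth / (3 * c**2)
print(f"{contraction * 1000:.2f} mm")   # about 1.5 mm
```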

The one successful, peer-reviewed and published alternative to KK predicts that the cosmological constant is zero, in agreement with observations that the universe isn’t decelerating: at big redshifts (over large distances) the gravitational coupling constant falls, because vector boson radiation exchanged between gravitational charges (masses) is redshifted (depleted in energy when received), so there’s no effective gravity at high redshifts. That’s why there’s no deceleration of the universe:

Nobel Laureate Phil Anderson points out:

‘… the flat universe is just not decelerating, it isn’t really accelerating …’

Take the expanding dimensions to be time dimensions. Then the Hubble constant is not velocity/distance but velocity/time.

This gives outward acceleration for the matter in the universe, resulting in outward force:

F = ma

= m*dv/dt,

and since dR/dt = v = HR, it follows that dt = dR/(HR), so

F = m*d(HR)/[dR/(HR)]

= m*(HR)*H*(dR/dR)

= mRH^2

By Newton’s 3rd law you get an equal inward force - which is carried by gravitons that cause compression and curvature - and this quantitatively allows you to explain general relativity and to work out the gravitational constant G.

It’s kind of funny that falsifiable, easily checkable work based on observed facts and extending the applications of Newton’s 2nd and 3rd laws to gauge boson radiation, is so ignored by the mainstream.

Even if my mechanism and predictions are ignorable (because, for instance, they were only rough calculations at first and were only published in the journal Electronics World), you’d think arXiv would have taken Lunsford’s highly technical paper seriously, as he was a student of David Finkelstein and he published his paper in

A quick clarification of comment 32 above: fermion particle cores exchange charged vector bosons (giving electromagnetic forces) but the uncharged vector bosons that give rise to gravity are exchanged not between the cores of fermions, but between the mass-giving particles in the polarized vacuum surrounding each fermion out to a radius of about 1 fm.

‘I have sympathized with the idea of having 3 time like dimensions as well, but eventually gave up because I couldn’t make sense out of it. Does a particle move on a trajectory in all 6 dimension, and if so what happens to two particles that have a distances in time but not in space?’

Bee, thanks for giving me the opportunity to explain a bit more. The particle only moves in 3 dimensions (the curvature that corresponds to an effective extra time dimension could have a physical explanation in quantum gravity; curvature is a mathematical convenience not the whole story).

The particle doesn’t ‘move’ in any time dimension except on a graph of a spatial dimension plotted against time. Since there are 3 spatial dimensions, the simplest model is to have 3 corresponding time dimensions.

The assumption that there’s only one time dimension is like what you would have if everything around you was radially symmetric, like living in the middle of an onion, where only changes as a function of radial distance (one effective dimension) are readily apparent: the Hubble constant, and hence the age of the universe (time), is independent of the direction you are looking in.

If the Hubble constant varied with direction then t = 1/H would vary with direction, and we’d need more than one effective time dimension to describe the universe.

The role of time as due to expansion is proved by the following:

Time requires organized work, e.g., a clock mechanism, or any other regular phenomena.

If the universe was static rather than expanding, then it would reach thermal equilibrium (heat death) so there would be no heat sinks, and no way to do organized work. All energy would be disorganised and useless.

Because it isn’t static but continuously expanding, thermal radiation received is always red-shifted, preventing thermal equilibrium being attained between received and emitted radiation, and thereby ensuring that there is always a heat sink available, so work is possible.

Hence time is dependent upon the expansion of the universe.

You can’t really measure ‘distance’ by using photons because things can move further apart (or together) while the photons are in transit.

All cosmological dimensions should - and usually are - measured as time. That’s why Hermann Minkowski said in 1908:

‘The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality.’

It’s remarkable that as soon as you start thinking like this, you see that the recession of matter by Hubble’s empirical law v = HR at distance R is velocity v = dR/dt, so dt = dR/v, which can be put straight into the definition of acceleration, a = dv/dt giving us

a = dv/dt

= dv/[dR/v]

= v*dv/dR

= [HR]*d[HR]/dR

= RH^2.

So there really is an outward acceleration of the matter in the universe, which means we are forced to try to apply Newton’s 2nd (F=ma) and 3rd (equal and opposite reaction) laws of motion to this outward acceleration of matter. This process quantitatively predicts gravity as mentioned, and other things too. Why is this physics being censored out of arXiv in place of stringy guesswork which isn’t based on empirically confirmed laws but on a belief in unobservables, and doesn’t predict a thing? It’s sad in some ways but really funny in others.

"This is exactly the problem, because it’s definitely not what we observe. Everyone of us, every measurement makes gives the same H(t).

"Plus, I’d like to know how you get stable objects like planets - another problem I stumbled across. … To make that point clear: write down Einstein’s field equations in Vacuum, spherical symmetric, solve them. In 3+1 you find Schwarzschild, what do you find? "

2. We observe 3 spatial dimensions, so we have 3 corresponding time-like dimensions:

dt_x = dx/c
dt_y = dy/c
dt_z = dz/c

The fact that time flows at the same rate is here indicated by the fact that c is a constant and doesn’t depend on the direction.

Similarly, for situations of spherical symmetry, where r is radius

dx = dr
dy = dr
dz = dr

So we can represent all three spatial dimensions by the same thing, dr. This greatly simplifies the treatment of spherically symmetric things like stars. It’s just the same with time.

3. Einstein’s 3+1 d general relativity uses the fact that the time dimensions are all similar, so all 3 time dimensions can be represented by a single time dimension.

The metric of Minkowski spacetime is normally (in 3+1 d):

ds^2 = {Eta}_{uv} dx^u dx^v = dx^2 + dy^2 + dz^2 - d(ct)^2

The correct full 3+3 d spacetime metric will be just the same, because this is just a Pythagorean sum of line elements in three orthogonal spatial dimensions with the resultant equal to ct.

It’s the same for the Schwarzschild metric of general relativity. Two time dimensions are conveniently ignored because they are the same, and it’s more economical to treat 3 time dimensions as 1 time dimension.

The splitting of 3 spatial dimensions into 6 dimensions isn’t a real increase in the number of dimensions, just in the treatment.

It’s just more convenient to consider the parts of the 3 dimensions which are within matter (like a planet or a ruler or measuring rod) as distance-like, contractable and not expanding, while the parts of dimensions where distance is uniformly increasing are time-like due to the expansion of the universe.

Obviously, the framework of general relativity for the Schwarzschild metric isn’t affected by this. It’s only adding to general relativity a mechanism for unification and for making additional predictions about the strength of gravity, other cosmological implications, etc.

Garrett, Lubos Motl previously dismissed Lunsford’s SO(3,3) unification on the basis that time-like dimensions might form circles and violate causality. However, the time-like dimensions of spacetime are orthogonal just like the three large spatial dimensions.

It’s pretty easy to list failures that people in the past have had, and that simply aren’t applicable here.

However, a list of objections given without any justification is only persuasive to a reader who is already biased.

How can the spacetime correspondence explained above, where dt = (1/c)dx, etc., give rise to wave solutions that go faster than c? It doesn’t look as if that is the case here. Perhaps it occurs with incorrect use of extra time dimensions? Thanks.

Loop quantum gravity is the idea of applying the path integrals of quantum field theory to quantize gravity by summing over interaction history graphs in a network (such as a Penrose spin network) which represents the quantum mechanical vacuum through which vector bosons such as gravitons are supposed to travel in a Standard Model-type (Yang-Mills) theory of gravitation. This summing of interaction graphs successfully allows a basic framework for general relativity to be obtained from quantum gravity. The model is not as speculative as string theory, which has been actively promoted in the media since 1985 despite opposition from people like Feynman because it fails to predict anything. Despite endless hype, string theory is now in a state called ‘not even wrong’, which is less objective than the wrong theories of caloric, phlogiston, aether, flat earth, and epicycles, which at least tried to model some observed phenomena of heat, combustion, electromagnetism, geography, and astronomy.

String theory fails because it postulates that 6 dimensions are compactified into unobservably small manifolds in particles; these 6 dimensions need about 100 parameters to describe them, and there are 10^500 or more configurations possible, each describing a different set of particles (different particles within any set arise from the different possible vibration modes or resonances of a given string). This makes it the vaguest, least falsifiable mainstream speculation ever: to make genuine predictions, the state of the extra unobserved 6 dimensions must be known, which means either building a particle accelerator the size of the galaxy and scattering particles to reveal their Planck scale nature, or eliminating the false 10^500 guesses, which would take billions of years with supercomputers. There is some evidence that the spin-2 graviton assumption and supersymmetry ideas in string theory are false.

For supersymmetry, in the book Not Even Wrong (UK edition), Dr Woit explains on page 177 that - using the measured weak and electromagnetic forces - supersymmetry predicts the strong force incorrectly high by 10-15%, when the experimental data is accurate to a standard deviation of about 3%. Supersymmetry is also a disaster for increasing the number of Standard Model parameters (coupling constants, masses, mixing angles, etc.) from 19 in the empirically based Standard Model to at least 125 parameters (mostly unknown!) for supersymmetry.

Supersymmetry in string theory is 10 dimensional and involves a massive supersymmetric boson as a partner for every observed fermion, just in order to make the three Standard Model forces unify at the Planck scale (which is falsely assumed to be the grain size of the vacuum just because it was the smallest size dimensional analysis gave before the electron mass was known; the black hole radius for an electron is far smaller than the Planck size).

At first glance, this 10-dimensional superstring theory for supersymmetry contradicts the 11-dimensional supergravity ideas, but this 10/11 dimensional issue was conveniently explained or excused by Dr Witten in his 1995 M-theory, which shows that you can make the case that 10-dimensional superstrings are a brane (a kind of extra-dimensional equivalent surface) on 11-dimensional supergravity, similarly to how an n - 1 = 2 dimensional area is a surface (or mem-brane) on an n = 3 dimensional object (or bulk).

On the speculative nature of conjectures concerning spin-2 gravitons, Richard P. Feynman points out in The Feynman Lectures on Gravitation, page 30, that gravitons do not have to be spin-2, which has not been observed: there is no experimental justification for that guess, which Feynman discusses in detail.

String theory ‘predictions’ are not analogous to Wolfgang Pauli’s prediction of the neutrino, which was indicated by solid experimentally-based physical facts: energy conservation, plus the observation that the mean beta particle energy is only about 30% of the total mass-energy lost per typical beta decay event. Pauli made a checkable prediction; Fermi developed the beta decay theory and then invented the nuclear reactor, whose radioactive waste decays provided a strong enough source of neutrinos (actually antineutrinos) to test the theory. Conservation principles had made precise predictions in advance, unlike string theory’s ‘heads I win, tails you lose’, political-type, fiddled, endlessly adjustable, never-falsifiable pseudo-‘predictions’.

Worse, attempts to explain observed particle physics with string theory result in 10^500 or more different vacuum states, each with its own set of particle physics. 10^500 solutions is so many that it eliminates falsifiability from string theory. This large number of solutions is named the ‘cosmic landscape’ because Professor Susskind claims that each solution exists in a different parallel universe, and when you plot the resulting vacuum ‘cosmological constants’ as a function of two variables, in string theory, you produce a landscape-like three dimensional graph. The reason for the immense ‘cosmic landscape’ is the fact that string theories only ‘work’ (i.e., satisfy the basic criteria for conformal field theory, CFT) in 10 or more dimensions, so the unobserved dimensions have to be ‘compactified’ by a Calabi-Yau manifold, which - conveniently - curls up the extra dimensions into a small volume, explaining why nobody has ever observed any of them. In superstring theory, two dimensions (one space and one time) form a ‘worldsheet’ and another eight are required for the CFT of supersymmetric particle physics. Sadly, the Calabi-Yau manifold has many parameters (or moduli) describing the size and shape of those unobserved conjectured extra dimensions, which must have unknown values (since we can’t observe them), so it is the immense number of possible combinations of these unknown parameters which makes string theory fail to produce specific results, by producing too many results to ever rigorously evaluate, even given a supercomputer running for the age of the universe. The 10^500 figure might not even be right: the true figure might be infinity. String theory results also depend on many other things, e.g., how the moduli are ‘stabilized’ by ‘Rube-Goldberg machines’, monstrous constructions added to the theory.
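The scale of the problem can be illustrated with some toy arithmetic. The specific numbers assumed below (about 500 compactification parameters with about 10 discrete values each, and a hypothetical survey rate of 10^9 vacua per second) are illustrative assumptions only, chosen to reproduce the commonly quoted 10^500 figure:

```python
# Toy arithmetic behind the 'cosmic landscape' count.  Assumed
# illustrative numbers: ~500 compactification parameters (moduli /
# flux integers), each taking ~10 discrete values.
n_parameters = 500
values_each = 10
vacua = values_each ** n_parameters        # = 10^500 configurations

# Even an absurdly fast hypothetical survey (10^9 vacua per second)
# cannot enumerate them within the age of the universe (~13.8 Gyr).
rate_per_second = 10 ** 9
seconds_needed = vacua // rate_per_second  # = 10^491 seconds
age_s = int(13.8e9 * 365.25 * 24 * 3600)   # age of universe, seconds
universe_ages = seconds_needed // age_s

print(f"vacua                : 10^{len(str(vacua)) - 1}")
print(f"seconds needed       : 10^{len(str(seconds_needed)) - 1}")
print(f"ages of the universe : ~10^{len(str(universe_ages)) - 1}")
```

Whatever the exact assumed counts, the conclusion is robust: the survey time exceeds the age of the universe by hundreds of orders of magnitude.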

However, the best mainstream idea of how to go about this is to assume that cosmology is correctly modelled by the Lambda-CDM general relativity solution, which attributes the observed lack of gravitational deceleration in the universe to dark energy, represented by a small positive cosmological constant in the general relativity field equations. Then you can try to evaluate parts of the landscape of solutions to string theory which have a suitably small positive cosmological constant. Unfortunately, general relativity does not include quantum gravity, and even the mainstream quantum gravity candidate, an attractive force mediated by spin-2 gravitons, implies that gravity should be weakened over vast distances due to redshift of gravitons exchanged between receding masses, which lowers the energy of the gravitons received in interactions and reduces the coupling constant for gravity. Thus, dark energy may be superfluous if quantum gravity is correct, so it is clear that string theory is really a belief system, a faith-based initiative, with no physics or science of any kind to support it. String theory produces endless research and inspires new mathematical ideas, albeit less impressively than Ptolemy’s universe, Maxwell’s aether and Kelvin’s vortex atom (e.g., the difficulties of solving Ptolemy’s false epicycles inspired Indian and Arabic mathematicians to develop trigonometry and algebra in the dark ages), but that sort of by-product didn’t justify Ptolemy’s earth-centred universe, Maxwell’s mechanical aether, or Kelvin’s stable vortex atom, and it doesn’t justify string theory. Another problem with this stringy mainstream research is that it leads to so many speculative papers being published in physics journals that the media and the journals concentrate on strings, and generally either censor out or give less attention to alternative ideas. Even if many alternative theories are wrong, that may be less harmful to the health of physics than one massive mainstream endeavour that isn’t even wrong...

‘It always bothers me that, according to the laws as we understand them today, it takes a computing machine an infinite number of logical operations to figure out what goes on in no matter how tiny a region of space, and no matter how tiny a region of time. How can all that be going on in that tiny space? Why should it take an infinite amount of logic to figure out what one tiny piece of spacetime is going to do? So I have often made the hypothesis that ultimately physics will not require a mathematical statement, that in the end the machinery will be revealed, and the laws will turn out to be simple, like the chequer board with all its apparent complexities.’ - R. P. Feynman, The Character of Physical Law, November 1964 Cornell Lectures, broadcast and published in 1965 by BBC, pp. 57-8.

Feynman is here referring to the physics of the infinite series of Feynman diagrams with corresponding terms in the perturbative expansion for interactions with virtual particles in the vacuum in quantum field theory:

‘Given any quantum field theory, one can construct its perturbative expansion and (if the theory can be renormalised), for anything we want to calculate, this expansion will give us an infinite sequence of terms. Each of these terms has a graphical representation called a Feynman diagram, and these diagrams get more and more complicated as one goes to higher and higher order terms in the perturbative expansion. There will be some ... ‘coupling constant’ ... related to the strength of the interactions, and each time we go to the next higher order in the expansion, the terms pick up an extra factor of the coupling constant. For the expansion to be at all useful, the terms must get smaller and smaller fast enough ... Whether or not this happens will depend on the value of the coupling constant.’ - P. Woit, Not Even Wrong, Jonathan Cape, London, 2006, p. 182.
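Woit’s point can be illustrated with a toy partial-sum calculation. The unit coefficients below are an illustrative assumption (real Feynman-diagram coefficients differ and typically grow); the only feature being sketched is the g^n suppression of successive orders:

```python
# Toy illustration: each successive term in a perturbative expansion
# picks up an extra factor of the coupling g, so the expansion is only
# useful when g is small.  Coefficients are set to 1 for illustration.
def partial_sums(g, orders=8):
    """Partial sums of the toy series 1 + g + g^2 + ... up to g^(orders-1)."""
    total, sums = 0.0, []
    for n in range(orders):
        total += g ** n          # nth-order term ~ g^n
        sums.append(total)
    return sums

qed_like = partial_sums(1 / 137.036)   # electromagnetic-size coupling
strong_like = partial_sums(1.0)        # order-unity coupling

# Small coupling: terms beyond the first few are negligible.
assert abs(qed_like[-1] - qed_like[2]) < 1e-6
# Order-unity coupling: every new order shifts the sum by 1 -- useless.
assert strong_like[-1] - strong_like[-2] == 1.0
```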

‘For the last eighteen years particle theory has been dominated by a single approach to the unification of the Standard Model interactions and quantum gravity. This line of thought has hardened into a new orthodoxy that postulates an unknown fundamental supersymmetric theory involving strings and other degrees of freedom with characteristic scale around the Planck length. ... It is a striking fact that there is absolutely no evidence whatsoever for this complex and unattractive conjectural theory. There is not even a serious proposal for what the dynamics of the fundamental ‘M-theory’ is supposed to be or any reason at all to believe that its dynamics would produce a vacuum state with the desired properties. The sole argument generally given to justify this picture of the world is that perturbative string theories have a massless spin two mode and thus could provide an explanation of gravity, if one ever managed to find an underlying theory for which perturbative string theory is the perturbative expansion.’ – P. Woit,

‘50 points for claiming you have a revolutionary theory but giving no concrete testable predictions.’ - J. Baez (crackpot Index originator), comment about crackpot mainstream string ‘theorists’ on the Not Even Wrong weblog

Quantum field theory is the basis of the Standard Model of particle physics and is the best tested of all physical theories, more general in application and better tested within its range of application than the existing formulation of general relativity (which needs modification to include quantum field effects), describing all electromagnetic and nuclear phenomena. The Standard Model does not as yet include quantum gravity, so it is not yet a replacement for general relativity. However, the elements of quantum gravity may be obtained from an application of quantum field theory to a Penrose spin network model of spacetime (the path integral is the sum over all interaction graphs in the network, and this yields background independent general relativity). This approach, 'loop quantum gravity', is entirely different from that in string theory, which is based on building extra-dimensional speculation upon other speculations, e.g., the speculation that gravity is due to spin-2 gravitons (this is speculative: there is no experimental evidence for it). In loop quantum gravity, by contrast to string theory, the aim is merely to use quantum field theory to derive the framework of general relativity as simply as possible. Other problems in the Standard Model are related to understanding how electroweak symmetry is broken at low energy and how mass (gravitational charge) is acquired by some particles. There are several speculated forms of Higgs field which may give rise to mass and electroweak symmetry breaking, but the details are as yet unconfirmed by experiment (the Large Hadron Collider may do it). Moreover, there are questions about how the various parameters of the Standard Model are related, and about the nature of fundamental particles (string theory is highly speculative, and there are other possibilities).

There are several excellent approaches to quantum field theory: at a popular level there is Wilczek’s 12-page discussion of the subject.

Ryder’s Quantum Field Theory also contains supersymmetry unification speculations and is available on Amazon

here. Kaku has a book on the subject here, Weinberg has one here, Peskin and Schroeder's is here, while Einstein's scientific biographer, the physicist Pais, has a history of the subject here. Baez, Segal and Zhou have an algebraic quantum field theory approach available on http://math.ucr.edu/home/baez/bsz.html, while Dr Peter Woit has a link to handwritten quantum field theory lecture notes from Sidney Coleman's course which is widely recommended, here. For background on representation theory and the Standard Model see Woit's page here for maths background and also his detailed suggestion, http://arxiv.org/abs/hep-th/0206135. For some discussion of quantum field theory equations without the interaction picture, polarization, or renormalization of charges due to a physical basis in pair production cutoffs at suitable energy scales, see Dr Chris Oakley's page http://www.cgoakley.demon.co.uk/qft/:

‘... renormalization failed the "hand-waving" test dismally.

‘This is how it works. In the way that quantum field theory is done - even to this day - you get infinite answers for most physical quantities. Are we really saying that particle beams will interact infinitely strongly, producing an infinite number of secondary particles? Apparently not. We just apply some mathematical butchery to the integrals until we get the answer we want. As long as this butchery is systematic and consistent, whatever that means, then we can calculate regardless, and what do you know, we get fantastic agreement between theory and experiment for important measurable numbers (the anomalous magnetic moment of leptons and the Lamb shift in the Hydrogen atom), as well as all the simpler scattering amplitudes. ...

‘As long as I have known about it I have argued the case against renormalization. [What about the physical mechanism of virtual fermion polarization in the vacuum, which explains the case for a renormalization of charge since this electric polarization results in a radial electric field that opposes and hence shields most of the core charge of the real particle, and this shielding due to polarization occurs wherever there are pairs of charges that are free and have space to become aligned against the core electric field, i.e. in the shell of space around the particle core that extends in radius between a minimum radius equal to the grain size of the Dirac sea - i.e. the UV cutoff - and an outer radius of about 1 fm which is the range at which the electric field strength is Schwinger's threshold for pair-production (i.e. the IR cutoff)? This renormalization mechanism has some physical evidence in several experiments, e.g., Levine, Koltick et al., Physical Review Letters, v.78, no.3, p.424, 1997, where the observable electric charge of leptons does indeed increase as you get closer to the core, as seen in higher energy scatter experiments.] ...

‘[Due to Haag’s theorem] it is not possible to construct a Hamiltonian operator that treats an interacting field like a free one. Haag's theorem forbids us from applying the perturbation theory we learned in quantum mechanics to quantum field theory, a circumstance that very few are prepared to consider. Even now, the text-books on quantum field theory gleefully violate Haag's theorem on the grounds that they dare not contemplate the consequences of accepting it.

‘With regard to the first thing, I doubt if this has been done before in the way I have done it [3], but the conclusion is something that some may claim is obvious: namely that local field equations are a necessary result of fields commuting for spacelike intervals. Some call this causality, arguing that if fields did not behave in this way, then the order in which things happen would depend on one's (relativistic) frame of reference. It is certainly not too difficult to see the corollary: namely that if we start with local field equations, then the equal-time commutators are not inconsistent, whereas non-local field equations could well be. This seems fine, and the spin-statistics theorem is a useful consequence of the principle. But in fact this was not the answer I really wanted as local field equations lead to infinite amplitudes. It could be that local field equations with the terms put into normal order - which avoid these infinities - also solve the commutators, but if they do then there is probably a better argument to be found than the one I give in this paper. ...

‘With regard to the second thing, the matrix elements consist of transients plus contributions which survive for large time displacements. The latter turns out to be exactly that which would be obtained by Feynman graph analysis. I now know that - to some extent - I was just revisiting ground already explored by Källén and Stueckelberg

‘Unfortunately for me, though, most practitioners in the field appear not to be bothered about the inconsistencies in quantum field theory, and regard my solitary campaign against infinite subtractions at best as a humdrum tidying-up exercise and at worst a direct and personal threat to their livelihood. I admit to being taken aback by some of the reactions I have had. In the vast majority of cases, the issue is not even up for discussion.

‘The explanation for this opposition is perhaps to be found on the physics Nobel prize web site. The five prizes awarded for quantum field theory are all for work that is heavily dependent on renormalization. ...

‘Although by these awards the Swedish Academy is in my opinion endorsing shoddy science, I would say that, if anything, particle physicists have grown to accept renormalization more rather than less as the years have gone by. Not that they have solved the problem: it is just that they have given up trying. Some even seem to be proud of the fact, lauding the virtues of makeshift "effective" field theories that can be inserted into the infinitely-wide gap defined by infinity minus infinity. Nonetheless, almost all concede that things could be better, it is just that they consider that trying to improve the situation is ridiculously high-minded and idealistic. ...

‘The other area of uncertainty is, to my mind, the ‘strong’ nuclear force. The quark model works well as a classification tool. It also explains deep inelastic lepton-hadron scattering. The notion of quark "colour" further provides a possible explanation, inter alia, of the tendency for quarks to bunch together in groups of three, or in quark-antiquark pairs. It is clear that the force has to be strong to overcome electrostatic effects. Beyond that, it is less of an exact science. Quantum chromodynamics, the gauge theory of quark colour, is the candidate theory of the binding force, but we are limited by the fact that bound states cannot be done satisfactorily with quantum field theory. The analogy of calculating atomic energy levels with quantum electrodynamics would be to calculate hadron masses with quantum chromodynamics, but the only technique available for doing this - lattice gauge theory - despite decades of work by many talented people and truly phenomenal amounts of computer power being thrown at the problem, seems not to be there yet, and even if it was, many, including myself, would be asking whether we have gained much insight through cracking this particular nut with such a heavy hammer.’
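The vacuum-polarization shielding raised in the bracketed comment within the quotation above can be sketched with the standard one-loop, leading-logarithm QED formula for the effective coupling (electron loop only; this is a textbook sketch, not a full calculation):

```python
import math

# One-loop QED running of the fine-structure constant (electron loop
# only, leading-log approximation): probing at higher momentum transfer
# Q penetrates more of the polarized-vacuum shield, so the observed
# charge (coupling) increases -- as seen in the scattering experiments
# cited in the text.
ALPHA_0 = 1 / 137.036   # low-energy fine-structure constant
M_E = 0.000511          # electron mass, GeV

def alpha_eff(q_gev):
    """Effective coupling at momentum transfer Q (GeV), valid for Q >> m_e."""
    return ALPHA_0 / (1 - (ALPHA_0 / (3 * math.pi)) * math.log(q_gev**2 / M_E**2))

for q in (0.001, 1.0, 91.0):
    print(f"Q = {q:7.3f} GeV  ->  1/alpha = {1 / alpha_eff(q):.2f}")
```

The electron-only sketch gives 1/α ≈ 134.5 at Q ≈ 91 GeV; the measured value there is about 129, the running being stronger because all charged fermion loops contribute.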

http://arxiv.org/abs/hep-th/9912205, the first chapters of which consist of a very nice introduction to the technical mathematical background of experimentally validated quantum field theory (it also has chapters on speculative supersymmetry and speculative string theory toward the end).

here, and Meinard Kuhlmann has an essay on it for the Stanford Encyclopedia of Philosophy here.

‘In loop quantum gravity, the basic idea is to use the standard methods of quantum theory, but to change the choice of fundamental variables that one is working with. It is well known among mathematicians that an alternative to thinking about geometry in terms of curvature fields at each point in a space is to instead think about the holonomy [whole rule] around loops in the space. The idea is that in a curved space, for any path that starts out somewhere and comes back to the same point (a loop), one can imagine moving along the path while carrying a set of vectors, and always keeping the new vectors parallel to older ones as one moves along. When one gets back to where one started and compares the vectors one has been carrying with the ones at the starting point, they will in general be related by a rotational transformation. This rotational transformation is called the holonomy of the loop. It can be calculated for any loop, so the holonomy of a curved space is an assignment of rotations to all loops in the space.’ - P. Woit, Not Even Wrong, Jonathan Cape, London, 2006, p189. (Emphasis added.)
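Woit’s description of holonomy can be made concrete with a small numerical experiment: parallel-transport a tangent vector around a latitude circle on the unit sphere and measure the rotation on return, which should equal the enclosed solid angle, 2π(1 − cos θ). This is a minimal sketch using a simple projection scheme for the transport:

```python
import math

# Parallel-transport a tangent vector around a closed loop (a latitude
# circle at polar angle theta on the unit sphere) and return the
# rotation angle picked up -- the holonomy of the loop.  Expected
# value: the enclosed solid angle, 2*pi*(1 - cos(theta)).
def holonomy_angle(theta, steps=100_000):
    def point(phi):  # points of the latitude circle
        s = math.sin(theta)
        return (s * math.cos(phi), s * math.sin(phi), math.cos(theta))

    v = [math.cos(theta), 0.0, -math.sin(theta)]  # initial tangent vector
    for k in range(1, steps + 1):
        p = point(2 * math.pi * k / steps)
        dot = sum(vi * pi for vi, pi in zip(v, p))
        v = [vi - dot * pi for vi, pi in zip(v, p)]  # project to tangent plane
        norm = math.sqrt(sum(vi * vi for vi in v))
        v = [vi / norm for vi in v]                  # keep unit length
    v0 = [math.cos(theta), 0.0, -math.sin(theta)]    # original vector
    cosang = max(-1.0, min(1.0, sum(a * b for a, b in zip(v, v0))))
    return math.acos(cosang)

theta = math.pi / 3                               # 60-degree polar angle
expected = 2 * math.pi * (1 - math.cos(theta))    # = pi for this loop
assert abs(holonomy_angle(theta) - expected) < 1e-2
```

With θ = π/3 the enclosed solid angle is π, so the transported vector returns rotated by half a turn; the crude projection scheme converges to this as the step count grows.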

‘Plainly, there are different approaches to the five fundamental problems in physics.’ –

The major problem today seems to be that general relativity is fitted to the big bang without applying corrections for quantum gravity, which are important for relativistic recession of gravitational charges (masses): the redshift of the gravity-causing gauge boson radiation reduces the gravitational coupling constant G, weakening long range gravitational effects on cosmological distance scales (i.e., between rapidly receding masses). This mechanism for a lack of gravitational deceleration of the universe on large scales (high redshifts) has counterparts even in alternative push-gravity graviton ideas, where gravity - and more generally the curvature of spacetime - is due to shielding of gravitons (in that case the mechanism is more complicated, but the effect still occurs).
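As a deliberately crude toy of this redshift argument (the assumption that the effective coupling scales in direct proportion to the received quantum energy, E ∝ 1/(1 + z), is the author’s heuristic here, not an established quantum-gravity result):

```python
# Toy model ONLY: quanta exchanged between receding masses arrive
# redshifted, E_received = E_emitted / (1 + z).  ASSUMPTION for
# illustration: the effective gravitational coupling scales with the
# received quantum energy, so G_eff(z) = G / (1 + z).
def coupling_reduction(z):
    """Fractional effective coupling G_eff/G at redshift z in this toy model."""
    return 1.0 / (1.0 + z)

for z in (0.1, 1.0, 7.0):
    print(f"z = {z:4.1f}  ->  G_eff/G = {coupling_reduction(z):.3f}")
```

Under this heuristic the effective coupling between masses at high redshift is strongly suppressed, which is the qualitative point the paragraph makes about the apparent lack of gravitational deceleration.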

Professor Carlo Rovelli’s Quantum Gravity is an excellent background text on loop quantum gravity, and is available in PDF format as an early draft version online at

http://www.cpt.univ-mrs.fr/~rovelli/book.pdf and in the final published version from Amazon here. Professor Lee Smolin also has some excellent online lectures about loop quantum gravity at the Perimeter Institute site, here (you need to scroll down to 'Introduction to Quantum Gravity' in the left hand menu bar). Basically, Smolin explains that loop quantum gravity gets the Feynman path integral of quantum field theory by summing all interaction graphs of a Penrose spin network, which amounts to general relativity without a metric (i.e., background independent). Smolin also has an arXiv paper, An Invitation to Loop Quantum Gravity, here which contains a summary of the subject from the existing framework of mathematical theorems of special relevance to the more peripheral technical problems in quantum field theory and general relativity.

However, possibly the major future advantage of loop quantum gravity will be as a Yang-Mills quantum gravity framework, with the physical dynamics implied by gravity being caused by full cycles or complete loops of exchange radiation being exchanged between gravitational charges (masses) which are receding from one another as observed in the universe. There is a major difference between the chaotic space-time annihilation-creation massive loops which exist between the IR and UV cutoffs, i.e., within 1 fm distance from a particle core (due to chaotic loops of pair production/annihilation in quantum fields), and the more classical (general relativity and Maxwellian) force-causing exchange/vector radiation loops which occur outside the 1 fm range of the IR cutoff energy (i.e., at lower energy than the closest approach - by Coulomb scatter - of electrons in collisions with a kinetic energy similar to the rest mass-energy of the particles).

‘Light ... ‘smells’ the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)’ - R. P. Feynman, QED, Penguin, 1990, page 54.

Solution to a problem with general relativity: A Yang-Mills mechanism for quantum field theory exchange-radiation dynamics, with prediction of gravitational strength, space-time curvature, Standard Model parameters for all forces and particle masses, and cosmology, including comparisons to other research and experimental tests

Acknowledgement

Professor Jacques Distler of the University of Texas inspired recent reformulations by suggesting in a comment on Professor Clifford V. Johnson’s discussion blog that I’d be taken more seriously if only I’d use tensor analysis in discussing the mathematical physics of general relativity.

Part 1: Summary of experimental and theoretical evidence, and comparison of theories

Part 2: The mathematics and physics of general relativity [Currently this links to a paper by Drs. Baez and Bunn]

Part 4: Quantum mechanics, Dirac’s equation, spin and magnetic moments, pair-production, the polarization of the vacuum above the IR cutoff and its role in the renormalization of charge and mass [Currently this links to Dyson's QED introduction]

Part 5: The path integral of quantum electrodynamics, compared with Maxwell’s classical electrodynamics [Currently this links to Siegel's Fields, which covers a large area in depth, one gem for example is that it points out that the 'mass' of a quark is not a physical reality, firstly because quarks can't be isolated and secondly because the mass is due to the vacuum particles in the strong field surrounding the quarks anyway]

Part 6: Nuclear and particle physics, Yang-Mills theory, the Standard Model, and representation theory [Currently this links to Woit's very brief Sketch showing how simple low-dimensional modelling can deliver particle physics, which hopefully will turn into a more detailed, and also slower-paced, technical book very soon]

Part 7: Methodology of doing science: predictions and postdictions of the theory based purely on empirical facts (vacuum mechanism for mass and electroweak symmetry breaking at low energy, including Hans de Vries’ and Alejandro Rivero’s ‘coincidence’) [Currently this links to Alvarez-Gaume and Vazquez-Mozo, Introductory Lectures on Quantum Field Theory]

Part 8: Riofrio’s and Hunter’s equations, and Lunsford’s unification of electromagnetism and gravitation [Currently this links to Lunsford's paper]

‘I like Feynman’s argument very much (although I have not thought about the virtual charges in the loops bit). The general idea that you start with a double slit in a mask, giving the usual interference by summing over the two paths... then drill more slits and so more paths... then just drill everything away... leaving only the slits... no mask. Great way of arriving at the path integral of QFT.’ - Prof. Clifford V. Johnson's comment,

‘The world is not magic. The world follows patterns, obeys unbreakable rules. We never reach a point, in exploring our universe, where we reach an ineffable mystery and must give up on rational explanation; our world is comprehensible, it makes sense. I can’t imagine saying it better. There is no way of proving once and for all that the world is not magic; all we can do is point to an extraordinarily long and impressive list of formerly-mysterious things that we were ultimately able to make sense of. There’s every reason to believe that this streak of successes will continue, and no reason to believe it will end. If everyone understood this, the world would be a better place.’ – Prof. Sean Carroll,

THE ROAD TO REALITY: A COMPREHENSIVE GUIDE TO THE LAWS OF THE UNIVERSE by Sir Roger Penrose, published by Jonathan Cape, London, 2004. The first half of the 1094 pages hardback book (2.5 inches/6.5 cm thick) briefly summarises fairly well known mathematics of background importance to the subject at issue. The remaining half of the book deals with quantum mechanics and attempts to unify it with general relativity. On page 785, Penrose neatly quotes his co-author Professor Stephen Hawking:

‘I don’t demand that a theory correspond to reality because I don’t know what it is. Reality is not a quality you can test with litmus paper. All I’m concerned with is that the theory should predict the results of measurements.’ [Quoted from: Stephen Hawking in S. Hawking and R. Penrose, The Nature of Space and Time, Princeton University Press, Princeton, 1996, p. 121.]

But acidity is a reality which you can, indeed, test with litmus paper! On page 896, Penrose analyses those who use string ‘theory’ as an obfuscation (or worse) of the meaning of ‘prediction’:

‘In the words of Edward Witten [E. Witten, ‘Reflections on the Fate of Spacetime’, Physics Today, April 1996]:

‘String theory has the remarkable property of predicting gravity,

‘and Witten has further commented:

‘the fact that gravity is a consequence of string theory is one of the greatest theoretical insights ever.

‘It should be emphasised, however, that in addition to the dimensionality issue, the string theory approach is (so far, in almost all respects) restricted to being merely a perturbation theory …’

Hence, string ‘theory’ as hyped up by genius Witten in 1996 as predicting gravity, is misleading, really. String ‘theory’ has no proof of a physical mechanism and predicts not even the inverse square law, let alone the strength of gravity! (In the apt words of exclusion-principle proposer Wolfgang Pauli, string ‘theory’ is in the class of belief junk that is ‘not even wrong’.)

On page 1020 of chapter 34 ‘Where lies the road to reality?’, 34.4 Can a wrong theory be experimentally refuted?, Penrose says: ‘One might have thought that there is no real danger here, because if the direction is wrong then the experiment would disprove it, so that some new direction would be forced upon us. This is the traditional picture of how science progresses. Indeed, the well-known philosopher of science [Sir] Karl Popper provided a reasonable-looking criterion [K. Popper, The Logic of Scientific Discovery, 1934] for the scientific admissability [sic; mind your spelling Sir Penrose or you will be dismissed as a loony: the correct spelling is admissibility] of a proposed theory, namely that it be observationally refutable. But I fear that this is too stringent a criterion, and definitely too idealistic a view of science in this modern world of "big science".’

Penrose identifies the problem clearly on page 1021: ‘We see that it is not so easy to dislodge a popular theoretical idea through the traditional scientific method of crucial experimentation, even if that idea happened actually to be wrong. The huge expense of high-energy experiments, also, makes it considerably harder to test a theory than it might have been otherwise. There are many other theoretical proposals, in particle physics, where predicted particles have mass-energies that are far too high for any serious possibility of refutation.’

On page 1026, Penrose points out: ‘In the present climate of fundamental research, it would appear to be much harder for individuals to make substantial progress than it had been in Einstein’s day. Teamwork, massive computer calculations, the pursuing of fashionable ideas – these are the activities that we tend to see in current research. Can we expect to see the needed fundamentally new perspectives coming out of such activities? This remains to be seen, but I am left somewhat doubtful about it. Perhaps if the new directions can be more experimentally driven, as was the case with quantum mechanics in the first third of the 20th century, then such a "many-person" approach might work.’

Last Updated: 9 June 2007. This site is currently under revision; for older material see

This page was revised (mainly by corrections to the discussion of tensors in general relativity) on 8 June 2007 and is a totally disorganised, rambling, informal supplement to (not a replacement of) the more concise proof paper at:

‘A theorem is only as good as the assumptions underlying it. … particularly in more speculative subjects, like Quantum Gravity, it’s simply a mistake to think that greater rigour can substitute for physical input. The idea that somehow, by formulating things very precisely and proving rigourous theorems, correct physics will eventually emerge simply misconstrues the role of rigour in Physics.’

Jacques also summarises the issues for theoretical physics clearly in a

‘There’s the issue of the theorem itself, and whether the assumptions that went into it are physically-justified.

‘There’s the issue of a certain style of doing Physics which values proving theorems over other ways of arriving at physical knowledge.

‘There’s the rhetorical use to which the (alleged) theorem is put, in arguing for or against some particular approach. In particular, there’s the unreflective notion that a theorem trumps any other sort of evidence.’

Take Newton’s gravity law as an example. Newton never expressed a gravity formula with the constant G because he didn't know what the constant was (that was measured by Cavendish much later).

Newton did have empirical evidence, however, for the inverse square law. He knew the earth has a radius of 4000 miles and the moon is a quarter of a million miles away; hence, by the inverse-square law, gravity should be (250,000/4000)² ≈ 3900 times weaker at the moon than the 32 ft/s/s at earth's surface. Hence the gravitational acceleration due to the earth's mass at the moon is 32/3900 ≈ 0.008 ft/s/s.

Newton’s formula for the centripetal acceleration of the moon is a = v²/R, where R is the distance to the moon and v is the moon's orbital velocity, v = 2πR/T, with T the moon's orbital period of about 27.3 days. Putting in R = 250,000 miles gives a ≈ 0.0096 ft/s/s.

So Newton had evidence that the gravity from the earth at the moon's radius is approximately the same (0.008 ft/s/s ~ 0.0096 ft/s/s) as the centripetal acceleration of the moon.
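Newton's cross-check can be run numerically as a sketch; the 27.3-day sidereal lunar period is an assumed standard value (the rougher "30 days" quoted earlier gives a slightly smaller centripetal figure):

```python
# Hedged numerical check of Newton's moon argument, using the figures in
# the text; the 27.3-day lunar orbital period is an assumed standard value.
import math

g_surface = 32.0           # ft/s/s at the earth's surface
r_earth = 4000.0           # miles, earth's radius
r_moon = 250000.0          # miles, distance to the moon

# Inverse-square prediction: gravity is (250,000/4000)^2 ~ 3900 times weaker
ratio = (r_moon / r_earth) ** 2
g_at_moon = g_surface / ratio            # ~0.008 ft/s/s

# Centripetal acceleration from the moon's orbit: a = v^2/R, v = 2*pi*R/T
r_ft = r_moon * 5280.0                   # miles -> feet
T = 27.3 * 24 * 3600.0                   # assumed orbital period, seconds
v = 2 * math.pi * r_ft / T               # orbital speed, ft/s
a_centripetal = v ** 2 / r_ft            # ~0.0094 ft/s/s

print(round(ratio), round(g_at_moon, 4), round(a_centripetal, 4))
```

The two accelerations agree to within about 15 percent, which is the level of agreement Newton could claim with these round-number inputs.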

The naïve application of general relativity to a so-called ‘flat’ spacetime cosmology (one which is just balanced between eventual collapse and eternal expansion, so that the expansion rate is forever falling) gives rise to the Friedmann equation (ignoring the small effect of the pseudo dark energy and its pseudo cosmological constant Lambda) for the critical density: ρ = 3H²/(8πG). In this model the retarding effect of gravity is to make the expanding radius of the matter universe proportional to the two-thirds power of time, R ~ t^{2/3}, with the current age of the universe t = (2/3)/H, where H is the Hubble parameter given by H = v/R. This falsely assumes that gravity is actually slowing down the expansion of the universe, which is why the 2/3 fraction is there. However, experimental evidence shows that there is no gravitational retardation. So the correct age of the universe is t = 1/H, and the correct expansion rate is R ~ t, not R ~ t^{2/3}.
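As a numerical sketch of these two expressions, assuming a Hubble parameter of H = 70 km/s/Mpc (the text does not fix a value, so this is illustrative only):

```python
# Sketch of the 'flat' Friedmann numbers discussed above, for an assumed
# Hubble parameter H = 70 km/s/Mpc.
import math

Mpc = 3.0857e22            # metres per megaparsec
H = 70e3 / Mpc             # Hubble parameter in 1/s
G = 6.674e-11              # gravitational constant, SI units

# Critical density: rho = 3H^2/(8*pi*G)
rho_crit = 3 * H ** 2 / (8 * math.pi * G)    # ~9e-27 kg/m^3

# Age with gravitational retardation (R ~ t^(2/3)): t = (2/3)/H
# Age without retardation (R ~ t):                  t = 1/H
year = 3.156e7                               # seconds per year
t_decel = (2.0 / 3.0) / H / (1e9 * year)     # in Gyr, ~9.3
t_free = 1.0 / H / (1e9 * year)              # in Gyr, ~14.0

print(f"{rho_crit:.2e} kg/m^3, {t_decel:.1f} Gyr, {t_free:.1f} Gyr")
```

The factor 2/3 is the whole difference between the two age estimates, which is the point at issue in the text.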

The reason for the lack of observed gravitational retardation is ‘explained’ by the ad hoc value of the epicycle of dark energy (which powers the cosmological constant) in the quantum vacuum. However, the first observations of this came in 1998, whereas in 1996 Electronics World had published a paper with the non-ad hoc prediction that expansion powers gravitation and that expansion is not retarded by gravitation. This successful prediction should therefore be impressive, as is the fact that the actual value of the universal gravitational constant G and various other parameters can be obtained from this mechanism and its extensions to other forces. However, the paper was removed from the arXiv.org server within a few seconds, without being read.

I’ve explained there to Dr ‘string-hype Haelfix’ that people should be working on non-rigorous areas like the derivation of the Hamiltonian in quantum mechanics, which would increase the rigour of theoretical physics, unlike string. I earlier explained this kind of thing (the need for checkable research, not speculation about unobservables) in the October 2003 Electronics World opinion page, but was ignored, so clearly I need to move on to stronger language, because stringers don’t listen to such polite arguments as those I prefer using! Feynman writes in QED, Penguin, London, 1985:

‘When a photon comes down, it interacts with electrons throughout the glass, not just on the surface. The photon and electrons do some kind of dance, the net result of which is the same as if the photon hit only the surface.’

There is already a frequency of oscillation in the photon before it hits the glass, and in the glass due to the sea of electrons interacting via Yang-Mills force-causing radiation. If the frequencies clash, the photon can be reflected or absorbed. If they don’t interfere, the photon goes through the glass. Some of the resonant frequencies of the electrons in the glass are determined by the exact thickness of the glass, just like the resonant frequencies of a guitar string are determined by the exact length of the guitar string. Hence the precise thickness of the glass controls some of the vibrations of all the electrons in it, including the surface electrons on the edges of the glass. Hence, the precise thickness of the glass determines the amplitude there is for a photon of given frequency to be absorbed or reflected by the front surface of the glass. It is indirect in so much as the resonance is set up by the thickness of the glass long before the photon even arrives (other possible oscillations, corresponding to a non-integer value of the glass thickness as measured in terms of the number of wavelengths which fit into that thickness, are killed off by interference, just as a guitar string doesn’t resonate well at non-natural frequencies).

What has happened is obvious: the electrons have set up an equilibrium oscillatory state dependent upon the total thickness before the photon arrives. There is nothing to this: consider how a musical instrument works, or even just a simple tuning fork or solitary guitar string. The only resonant vibrations are those which contain an integer number of wavelengths. This is why metal bars of different lengths resonate at different frequencies when struck. Changing the length of the bar slightly completely alters its resonance to a given wavelength! Similarly, the photon hitting the glass has a frequency itself. The electrons in the glass as a whole are all interacting (they’re spinning and orbiting with centripetal accelerations which cause radiation emission, so all are exchanging energy all the time, which is the force mechanism in Yang-Mills theory for electromagnetism), so they have a range of resonances that is controlled by the number of integer wavelengths which can fit into the thickness of the glass, just as the range of resonances of a guitar string is determined by the wavelengths which fit into the string length resonantly (ie, without suffering destructive interference).
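The thickness-dependent reflection Feynman describes can be sketched with the standard two-path (front-face plus back-face) interference formula. The 4%-per-surface amplitude (r = 0.2) is Feynman's QED figure, and the refractive index n = 1.5 is an assumed typical value for glass, so this is an illustration of the resonance argument rather than a statement of the author's mechanism:

```python
# Hedged two-path sketch of Feynman's thin-glass result: the reflection
# probability cycles with how many wavelengths fit into the thickness.
# Assumptions: surface amplitude r = 0.2 (Feynman's 4% per surface) and
# refractive index n = 1.5 for typical glass.
import cmath, math

def reflection_probability(thickness_nm, wavelength_nm, n=1.5, r=0.2):
    # Phase lag of the back-surface path relative to the front-surface path
    delta = 4 * math.pi * n * thickness_nm / wavelength_nm
    # The front reflection suffers a sign flip (external reflection), the
    # back one does not; the two amplitudes then interfere
    amplitude = r * (cmath.exp(1j * math.pi) + cmath.exp(1j * delta))
    return abs(amplitude) ** 2

# Scan thicknesses for 500 nm light: probability swings between ~0 and ~16%
probs = [reflection_probability(t, 500.0) for t in range(0, 200, 5)]
print(round(min(probs), 3), round(max(probs), 3))
```

The swing between zero and 16% with thickness is exactly the behaviour the resonance discussion above is trying to explain mechanically.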

Hence, the thickness of the glass pre-determines the amplitude for a photon of given frequency to be either absorbed or reflected. The electrons at the glass surface are already oscillating with a range of resonant frequencies depending on the glass thickness, before the photon even arrives. Thus, the photon is reflected (if not absorbed) only from the front face, but its probability of being reflected is dependent on the total thickness of the glass. Feynman also writes:

‘when the space through which a photon moves becomes too small (such as the tiny holes in the screen) … we discover that … there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that … interference becomes very important.’

More about this here (in the comments; but notice that Jacques’ final comment on the thread of discussion about rigour in quantum mechanics is discussed by me here), here, and here. In particular, Maxwell’s equations assume that real electric current is dQ/dt, a continuous equation being used to represent a discontinuous situation (particulate electrons passing by; Q is charge): it works approximately for large numbers of electrons, but breaks down for small numbers passing any point in a circuit per second! It is a simple mathematical error, which needs correcting to bring Maxwell’s equations into line with modern quantum field theory. A more subtle error in Maxwell’s equations is his ‘displacement current’, which is really just Yang-Mills force-causing exchange radiation, as explained in the previous post and on my other blog here. This is what people should be working on to derive the Hamiltonian: the Hamiltonian in both Schroedinger’s and Dirac’s equations describes energy transfers as wavefunctions vary in time, which is exactly what the corrected Maxwell ‘displacement current’ effect is all about (take the electric field here to be a relative of the wavefunction). I’m not claiming that classical physics is right! It is wrong! It needs to be rebuilt and its limits of applicability need to be properly accepted:

Bohr simply wasn’t aware that Poincare chaos arises even in classical systems with 3+ bodies, so he foolishly sought to invent metaphysical thought structures (complementarity and correspondence principles) to isolate classical from quantum physics. This means that chaotic motions on atomic scales can result from electrons influencing one another, and from the randomly produced pairs of charges in the loops within 10^{-15} m of an electron (where the electric field exceeds about 10^20 V/m) causing deflections. The failure of determinism (ie closed orbits, etc) is present in classical, Newtonian physics. It can’t even deal with a collision of 3 billiard balls:

‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’

The Hamiltonian time evolution should be derived rigorously from the empirical facts of electromagnetism: Maxwell’s ‘displacement current’ describes energy flow (not real charge flow) due to a time-varying electric field. Clearly it is wrong, because the vacuum doesn’t polarize below the IR cutoff, which corresponds to 10^20 volts/metre, and you don’t need that electric field strength to make capacitors, radios, etc., work.

So you could derive the Schroedinger equation from a corrected Maxwell ‘displacement current’ equation. This is just an example of what I mean by deriving the Schroedinger equation. Alternatively, a computer Monte Carlo simulation of electrons in orbit around a nucleus, being deflected by pair production in the Dirac sea, would provide a check on the mechanism behind the Schroedinger equation, so there is a second way to make progress.

Woit gives an example of how representation theory can be used in low dimensions to reduce the entire Standard Model of particle physics to a simple expression of Lie spinors and Clifford algebra on page 51 of his paper http://arxiv.org/abs/hep-th/0206135. This is a success in terms of what Wigner wants (see the top of this post for the vital quote from Wiki), and there then remains the issue of the mechanism for electroweak symmetry breaking, for mass/gravity fields, and for the 18 parameters of the Standard Model. These parameters are not extravagant, seeing that the Standard Model has made thousands of accurate predictions with them, and all of them are either already or else in principle mechanistically predictable by the causal Yang-Mills exchange radiation model and a causal model of renormalization and gauge boson energy-sharing based unification (see previous posts on this blog, and the links in the ‘about’ section on the right hand side of this blog for further information).

Additionally, Woit stated other clues of chiral symmetry: ‘The SU(2) gauge symmetry is supposed to be a purely internal symmetry, having nothing to do with space-time symmetries, but left and right-handed spinors are distinguished purely by their behavior under a space-time symmetry, Lorentz symmetry. So SU(2) gauge symmetry is not only spontaneously broken, but also somehow knows about the subtle spin geometry of space-time.’

For the background to Lie spinors and Clifford algebras, Baez has an interesting discussion of some very simple Lie algebra physics here and here, and representation theory here; Woit has extensive lecture notes here, and Tony Smith has a lot of material about Clifford algebras here and spinors here. The objective is a simple unified model of the particle which can explain the detailed relationship between quarks and leptons and predict checkable things about unification. The short range forces for quarks are easily explained by a causal model of polarization shielding by lepton-type particles in proximity (pairs or triads of ‘quarks’ form hadrons, and the pairs or triads are close enough to largely share the same polarized vacuum veil, which makes the polarized vacuum generally stronger, so that the effective long-range electromagnetic charge per ‘quark’ is reduced to a fraction of that for a lepton, which consists of only one core charge: see this comment on Cosmic Variance blog).

I’ve given some discussion of the Standard Model at my main page (which is now partly obsolete and in need of a major overhaul to include many developments). Woit gives a summary of the Standard Model in a completely different way, which makes chiral symmetries clear, in Fig. 7.1 on page 93 of Not Even Wrong (my failure to understand this before made me very confused about chiral symmetry, so I didn’t mention or consider its role):

‘The picture [it is copyright, so get the book: see Fig. 7.1 on p.93 of Not Even Wrong] shows the SU(3) x SU(2) x U(1) transformation properties of the first of the three generations of fermions in the standard model (the other two generations behave the same way).

‘Under SU(3), the quarks are triplets and the leptons are invariant.

‘Under SU(2), the particles in the middle row are doublets (and are left-handed Weyl-spinors under Lorentz transformations), the other particles are invariant (and are right-handed Weyl-spinors under Lorentz transformations).

‘Under U(1), the transformation properties of each particle is given by its weak hypercharge Y.’

But the key thing is that the hypercharge Y is different for differently handed quarks of the same type: a right-handed downquark (electric charge -1/3) has a weak hypercharge of -2/3, while a left-handed downquark (same electric charge as the right-handed one, -1/3), has a different weak hypercharge: 1/3 instead of -2/3!

Clearly this weak hypercharge effect is what has been missing from my naive causal model (where observed long range quark electric charge is determined merely by the strength of vacuum polarization shielding of the electric charges closely confined). Energy is not merely being shared between the QCD SU(3) colour forces and the U(1) electromagnetic forces, but there is the energy present in the form of weak hypercharge forces which are determined by the SU(2) weak nuclear force group.

Let’s get the facts straight: from Woit’s discussion (unless I’m misunderstanding), the strong QCD force SU(3) only applies to triads of quarks, not to pairs of quarks (mesons).

The binding of pairs of quarks is by the weak force only (which would explain why they are so unstable, they’re only weakly bound and so more easily decay than triads which are strongly bound). The weak force also has effects on triads of quarks.

The weak hypercharge of a downquark in a meson containing 2 quarks is Y=1/3 compared to Y=-2/3 for a downquark in a baryon containing 3 quarks.

Hence the causal relationship holds true for mesons. Hypothetically, 3 right-handed electrons (each with weak hypercharge Y = -2) brought close together will become right-handed downquarks (each with hypercharge Y = -2/3), because they then share the same vacuum polarization shield, which is 3 times stronger than that around a single electron, and so attenuates more of the electric field, reducing it from -1 per electron when widely separated to -1/3 when brought close together (forget the Pauli exclusion principle, for a moment!).

Now, in a meson, you only have 2 quarks, so you might think that from this model the downquark would have electric charge -1/2 and not -1/3, but that anomaly only exists when ignoring the weak hypercharge! For a downquark in a meson, the weak hypercharge is Y=1/3 instead of the Y=-2/3 which the downquark has in a baryon (triad). The increased hypercharge (which is physically responsible for the weak force field that binds up a meson) offsets the electric charge anomaly. The handedness switch-over, in going from considering quarks in baryons to those in mesons, automatically compensates the electric charge, keeping it the same!

The details of how handedness is linked to weak hypercharge are found in the dynamics of Pauli’s exclusion principle: adjacent particles can’t have a full set of the same quantum numbers, like the same spin and charge. Instead, each particle has a unique set of quantum numbers. Bringing particles together and having them ‘live together’ in close proximity forces them to arrange themselves with suitable quantum numbers. The Pauli exclusion principle is simple in the case of atomic electrons: each electron has four quantum numbers, describing orbit configuration and intrinsic spin, and each adjacent electron has opposite spin to its neighbours. The spin alignment here can be understood very simply in terms of magnetism: it takes the least energy to have such an alignment (having similar spins would be an addition of magnetic moments, so that north poles would all be adjacent and south poles would all be adjacent, which requires more energy input than having adjacent magnets parallel with opposite poles nearest). In quarks, the situation regarding the Pauli exclusion principle mechanism is slightly more complex, because quarks can have similar spins if their colour charges are different (electrons don’t have colour charges, which are an emergent property of the strong fields which arise when two or three real fundamental particles are confined at close quarters).

Obviously there is a lot more detail to be filled in, but the main guiding principles are clear now: every fermion is indeed the same basic entity (whether quark or lepton), and the differences in observed properties stem from vacuum properties such as the strength of vacuum polarization, etc. The fractional charges of quarks always arise due to the use of some electromagnetic energy to create other types of short range forces (the testable prediction of this model is the forecast that detailed calculations will show that perfect unification arises on such energy conservation principles, without requiring the 1:1 boson-to-fermion ‘supersymmetry’ hitherto postulated by string theorists). Hence, in this simple mechanism, the +2/3 charge of the upquark is due to a combination of strong vacuum polarization attenuation and hypercharge (the downquark we have been discussing is just the clearest case).

So regarding unification, we can get hard numbers out of this simple mechanism. We can see that the total gauge boson energy for all fields is conserved, so when one type of charge (electric charge, colour charge, or weak hypercharge) varies with collision energy or distance from nucleus, we can predict that the others will vary in such a way that the total charge gauge boson energy (which mediates the charge) remains constant. For example, we see reduced electric charge from a long range because some of that energy is attenuated by the vacuum and is being used for weak and (in the case of triads of quarks) colour charge fields. So as you get to ever higher energies (smaller distances from particle core) you will see all the forces equalizing naturally because there is less and less polarized vacuum between you and the real particle core which can attenuate the electromagnetic field. Hence, the observable strong charge couplings have less supply of energy (which comes from attenuation of the electromagnetic field), and start to decline. This causes asymptotic freedom of quarks because the decline in the strong nuclear coupling at very small distances is offset by the geometric inverse-square law over a limited range (the range of asymptotic freedom). This is what allows hadrons to have a much bigger size than the size of the tiny quarks they contain.

We’re in a Dirac sea, which undergoes various phase transitions breaking symmetries as the strength of the field is increased. Near a real charge, the electromagnetic field within 10^{-15} metre exceeds 10^20 volts/metre, which causes the first phase transition, like ice melting or water boiling. The freed Dirac sea particles can therefore exert a short range attractive force by the LeSage mechanism (which of course does not apply directly to long range interactions, because the ‘gas’ effect fills in LeSage shadows over long distances, so the attractive force is short-ranged: it is limited to a range of about one mean-free-path for the interacting particles in the Dirac sea). The LeSage gas mechanism represents the strong nuclear attractive force mechanism. Gravity and electromagnetism, as explained in the previous posts on this blog, are both due to the Yang-Mills ‘photon’ exchange mechanism (because Yang-Mills exchange ‘photon’ radiation - or any other radiation - doesn’t diffract into shadows, it doesn’t suffer the short range issue of the strong nuclear force; the short range of the weak nuclear force, due to shielding by the Dirac sea, may have quite a different mechanism for being short-ranged).

You can think of the strong force like the short-range forces due to normal sea-level air pressure: the air pressure of 14.7 psi or 101 kPa is big, so you can prove the short range attractive force of air pressure by using a set of rubber ‘suction cups’ strapped on your hands and knees to climb a smooth surface like a glass-fronted building (assuming the glass is strong enough!). This force has a range on the order of the mean free path of air molecules. At bigger distances, air pressure fills the gap, and the force disappears. The actual fall-off, of course, is statistical; instead of the short range attraction becoming suddenly zero at exactly one mean free path, it drops (in addition to geometric factors) exponentially by the factor exp{-ux}, where u is the reciprocal of the mean free path and x is distance (in air, of course, there are also weak attractive forces between molecules, the van der Waals forces). Hence it is short ranged due to scatter of charged particles dispersing forces in all directions (unlike radiation):

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 book The Logic of Scientific Discovery]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’

(Note statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum is full of gauge bosons carrying momentum like light, which above the IR cutoff start to exert vast pair-production loop pressure; this gives the foam vacuum.)
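The exp{-ux} fall-off invoked above can be made concrete: at one mean free path (x = 1/u) the surviving fraction is 1/e, about 37%, so a scatter-limited force has a range of order the mean free path. The numerical value of u below is illustrative, not a measured quantity:

```python
# Sketch of exponential attenuation exp(-u*x): a scatter-limited force has
# an effective range of order one mean free path (x = 1/u).
# The value of u here is an assumed illustrative figure.
import math

u = 1.0e7                  # reciprocal mean free path, 1/m (assumed)
mfp = 1.0 / u              # mean free path, m

def surviving_fraction(x):
    """Fraction of particles not yet scattered after travelling x metres."""
    return math.exp(-u * x)

# At one mean free path ~37% survive; at five, less than 1%
print(round(surviving_fraction(mfp), 3), round(surviving_fraction(5 * mfp), 4))
```

This is the same exponential law as the gamma-ray attenuation e^{-ux} used later in the Glasstone and Dolan estimate, which is why the mean free path serves as the 'range' in both arguments.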

Dirac sea polarization (leading to charge renormalization) is only possible in volumes large enough to be likely to contain some discrete charges! The IR cutoff has a different explanation. It is required physically in quantum field theory to limit the range over which the vacuum charges of the Dirac sea are polarized, because if there were no limit, then the Dirac sea would be able to polarize sufficiently to completely eradicate the entire electric field of all electric charges. That this does not happen in nature shows that there is a physical mechanism in place which prevents polarization below the range of the IR cutoff, which is about 10^{-15} m from an electron, corresponding to something like 10^{20} volts/metre electric field strength.

Above the IR cutoff, then, the Dirac sea charges are (1) freed from their bound state in observable creation-annihilation loops, (2) given energy in proportion to the field strength (by analogy to Einstein’s photoelectric equation, where there is a certain minimum amount of energy required to free electrons from their bound state, and further energy above that minimum then goes into increasing the kinetic energy of those particles; except that in this case the indeterminacy principle due to scattering indeterminism introduces statistics and makes it more like a quantum tunnelling effect, and the extra field energy above the threshold can also energise ground state Dirac sea charges into more massive loops in progressive states, ie, 1.022 MeV delivered to two particles colliding with 0.511 MeV each - the IR cutoff - can create an e- and e+ pair, while a higher loop threshold will be 211.2 MeV delivered as two particles colliding with 105.6 MeV or more, which can create a muon+ and muon- pair, and so on; see the previous post for explanation of a diagram explaining mass by ‘doubly special supersymmetry’ where charges have a discrete number of massive partners located either within the close-in UV cutoff range or beyond the perimeter IR cutoff range, accounting for masses in a predictive, checkable manner), and (3) the quantum field is then polarized (shielding the electric field strength).

These three processes should not be confused, but are generally confused by the use of the vague term ‘energy’ to represent 1/distance in most discussions of quantum field theory. For two of the best introductions to quantum field theory as it is traditionally presented see

We only see ‘pair-production’ of Dirac sea charges becoming observable in creation-annihilation ‘loops’ (Feynman diagrams) when the electric field is in excess of about 10^{20} volts/metre. This very intense electric field, which occurs out to about 10^{-15} metres from a real (long-observable) electron charge core, is strong enough to overcome the binding energy of the Dirac sea: particle pairs then pop into visibility (rather like water boiling off at 100 C).

The spacing of the Dirac sea particles in the bound state below the IR cutoff is easily obtained. Take the energy-time form of Heisenberg’s uncertainty principle and put in the energy of an electron-positron pair and you find it can exist for ~10^{-21} second; the maximum possible range is therefore this time multiplied by c, ie ~10^{-12} metre.
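This lifetime-and-range estimate can be sketched numerically. Note that the ~10^{-12} m figure follows if the uncertainty relation is taken in the rough form dE·dt ~ h; with h-bar instead, the range comes out nearer 10^{-13} m, so this is an order-of-magnitude argument only:

```python
# Order-of-magnitude sketch of the pair lifetime and range from the
# energy-time uncertainty relation, taken here as dE*dt ~ h (Planck's
# constant; using hbar instead gives a range ~6x smaller).
h = 6.626e-34              # Planck's constant, J*s
c = 2.998e8                # speed of light, m/s
MeV = 1.602e-13            # joules per MeV

E_pair = 1.022 * MeV       # rest energy of an electron-positron pair
dt = h / E_pair            # allowed lifetime, ~4e-21 s
max_range = c * dt         # maximum possible range, ~1.2e-12 m

print(f"{dt:.1e} s, {max_range:.1e} m")
```
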

The key thing to do would be to calculate the transmission of gamma rays in the vacuum. Since the maximum separation of charges is 10^{-12} m, the vacuum contains at least 10^{36} charges per cubic metre. If I can calculate that the range of gamma radiation in such a dense medium is 10^{-12} metre, I’ll have substantiated the mainstream picture. Normally you get two gamma rays when an electron and positron annihilate (the gamma rays go off in opposite directions), so the energy of each gamma ray is 0.511 MeV, and it is well known that the Compton effect (a scattering of gamma rays by electrons as if both are particles, not waves) predominates for this energy. The mean free path for scatter of gamma ray energy by electrons and positrons depends essentially on the density of electrons (number of electrons and positrons per cubic metre of space). However, the data come from either the Klein-Nishina theory (an application of quantum mechanics to the Compton effect) or experiment, for situations where the binding energy of electrons to atoms or whatever is insignificant compared to the energy of the gamma ray. It is perfectly possible that the binding energy of the Dirac sea would mean that the usual radiation attenuation data are inapplicable!

Ignoring this possibility for a moment, we find that for 0.5 MeV gamma rays,

Glasstone and Dolan (page 356) state that the linear absorption coefficient of water is u = 0.097 (cm)^{-1}, where the attenuation is exponential as e^{-ux}, where x is distance. Each water molecule has 10 electrons (8 from the oxygen atom plus 1 from each hydrogen atom), and we know from Avogadro’s number that 18 grams of water contains 6.0225 * 10^23 water molecules, or about 6.02 * 10^24 electrons. Hence, 1 cubic metre of water (1 metric ton or 1 million grams) contains 3.35 * 10^29 electrons. The reciprocal of the linear absorption coefficient u, ie, 1/u, tells us the ‘mean free path’ (the best estimate of effective ‘range’ for our purposes here), which for water exposed to 0.5 MeV gamma rays is 1/0.097 = 10.3 cm = 0.103 m. Hence, the number of electrons and positrons in the Dirac sea must be vastly larger than in water, in order to keep the range down (we don’t observe any vacuum gamma radioactivity, which only affects subatomic particles). Normalising the mean free path to 10^{-12} m to agree with the Heisenberg uncertainty principle, we find that the density of electrons and positrons in the vacuum would be: {the electron density in 1 cubic metre of water, 3.35 * 10^29} * 0.103/[10^{-12}] = 3.45 * 10^40 electrons and positrons per cubic metre of Dirac sea. This agrees with the estimate previously given from the Heisenberg uncertainty principle that the vacuum contains at least 10^{36} charges per cubic metre. However, the binding energy of the Dirac sea is being ignored in this Compton effect shielding estimate. The true separation distance is smaller still, and the true density of electrons and positrons in the Dirac sea is still higher.
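This water-based scaling estimate can be re-run as a sketch. Note that each H2O molecule has 10 electrons (8 from oxygen plus 1 from each hydrogen), which sets the electron density of water used in the scaling:

```python
# Re-running the water-shielding estimate: scale the electron density of
# water up until the 0.5 MeV gamma mean free path shrinks to 1e-12 m.
# Each H2O molecule has 10 electrons (8 from O, 1 from each H).
N_A = 6.0225e23            # Avogadro's number, molecules per mole
electrons_per_molecule = 10
molar_mass = 18.0          # grams per mole of water

# Electrons per cubic metre of water (1 m^3 of water = 1e6 grams)
n_water = (1e6 / molar_mass) * N_A * electrons_per_molecule   # ~3.3e29

mfp_water = 0.103          # mean free path for 0.5 MeV gammas in water, m
mfp_target = 1e-12         # required Dirac-sea mean free path, m

# Mean free path scales inversely with electron density
n_dirac = n_water * mfp_water / mfp_target                    # ~3.4e40

print(f"{n_water:.2e} electrons/m^3, {n_dirac:.2e} charges/m^3")
```

The result is consistent with the text's conclusion that the Dirac-sea charge density must exceed the 10^{36} per cubic metre lower bound from the uncertainty-principle spacing argument.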

Obviously the graining of the Dirac sea must be much smaller than 10^{-12} m because we have already said that it exists down to the UV cutoff (very high energy, i.e., very small distances of closest approach). The amount of ‘energy’ in the Dirac sea is astronomical if you calculate the rest mass equivalent, but you can similarly produce stupid numbers for the energy of the earth’s atmosphere: the mean speed of an air molecule is around 500 m/s, and since the atmosphere is composed mainly of air molecules (with a relatively small amount of water and dust), we can get a ridiculous energy density for the air by multiplying the mass of air by 0.5*(500^2) to obtain its kinetic energy. Thus, 1 kg of air (with all the molecules going at a mean speed of 500 m/s) has an energy of 125,000 Joules. But this is not useful energy because it can’t be extracted: it is totally disorganised. The Dirac sea ‘energy’ is similarly massive but useless.
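The arithmetic of the analogy is just the kinetic energy formula:

```python
# Kinetic energy of 1 kg of air with every molecule at the mean speed of 500 m/s.
mass = 1.0        # kg
speed = 500.0     # m/s
ke = 0.5 * mass * speed**2
print(ke)  # 125000.0 J, as stated above
```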

General relativity

Introduction to the basic ideas (curvature and tensor will be dealt with further on)

Let’s go right through the derivation of the Einstein-Hilbert field equation in a non-obfuscating way. To start with, the classical analogue of general relativity’s field equation is Poisson’s equation

div.2E = 4*Pi*Rho*G

Here div.2E denotes the Laplacian operator (well known in heat diffusion) acting on E, which for radial symmetry (r = x = y = z) of a field implies:

div.2E

= d2E/dx2 + d2E/dy2 + d2E/dz2

= 3*d2E/dr2

To derive Poisson’s equation in a simple way (not mathematically rigorous), observe that for non-relativistic situations

E = (1/2)mv2 = mMG/r

(Kinetic energy gained by a test particle falling to distance r from mass M is simply the gravitational potential energy gained at that distance by the fall!)

Here, the ratio v/r = dv/dr when translating to a differential equation, and as already shown div.2E = 3*d2E/dr2 for radial symmetry, so

4*Pi*Rho*G = (3/2)m(dv/dr)2 = div.2E

Hence we recover Poisson’s gravity field equation:

div.2E = 4*Pi*Rho*G.
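As an independent cross-check of this result (a standard textbook calculation, not the shortcut derivation above): the Newtonian potential inside a uniform sphere satisfies Poisson’s equation exactly, which a sympy sketch confirms:

```python
import sympy as sp

r, R, G, rho = sp.symbols('r R G rho', positive=True)
M = sp.Rational(4, 3) * sp.pi * R**3 * rho          # mass of a uniform sphere of density rho
Phi = -G * M * (3 * R**2 - r**2) / (2 * R**3)       # standard interior potential
# Laplacian of Phi under spherical symmetry: (1/r^2) d/dr (r^2 dPhi/dr)
lap = sp.simplify(sp.diff(r**2 * sp.diff(Phi, r), r) / r**2)
assert sp.simplify(lap - 4 * sp.pi * rho * G) == 0  # i.e. div.2 Phi = 4*Pi*Rho*G
```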

To get this expressed as tensors you begin with a Ricci tensor Ruv for curvature (this is a shortened Riemann tensor).

Ruv = 4*Pi*G*Tuv,

where Tuv is the energy-momentum tensor which includes potential energy contributions due to pressures, but is analogous to the density term Rho in Poisson's equation. (The density of mass can be converted into energy density simply by using E=mc2.)

However, this equation Ruv = 4*Pi*G*Tuv was found by Einstein to be a failure because the divergence of Tuv should be zero if energy is conserved. (A uniform energy density will have zero divergence, and Tuv is of course a density-type parameter. The energy potential of a gravitational field doesn't have zero divergence, because it diverges - falls off - with distance, but uniform density has zero divergence simply because it doesn't fall with distance!)

The only way Einstein could correct the equation (so that the divergence of Tuv is zero) was by replacing Tuv with Tuv - (1/2)(guv)T, where T is the trace of the energy-momentum tensor and guv is the metric tensor (R, below, is the trace of the Ricci tensor).

Ruv = 8*Pi*G*[Tuv - (1/2)(guv)T]

which is equivalent to

Ruv - (1/2)Rguv = 8*Pi*G*Tuv

This is the full general relativity field equation (ignoring the cosmological constant and dark energy, which are incompatible with any Yang-Mills quantum gravity because, to use an over-simplified argument, the redshift of gravity-causing exchange radiation between receding masses over long ranges cuts off gravity, negating the need for dark energy to explain the observations).

Curvature and tensors

General relativity, absolute causality

Professor Bernhard Riemann (1826-66) stated in his 10 June 1854 lecture at Göttingen University, On the hypotheses which lie at the foundations of geometry: ‘If the fixing of the location is referred to determinations of magnitudes, that is, if the location of a point in the n-dimensional manifold be expressed by n variable quantities x1, x2, x3, and so on to xn, then … ds = √[Σ(dx)2] … I will therefore term flat these manifolds in which the square of the line-element can be reduced to the sum of the squares … A decision upon these questions can be found only by starting from the structure of phenomena that has been approved in experience hitherto, for which Newton laid the foundation, and by modifying this structure gradually under the compulsion of facts which it cannot explain.’

Σ(dx2) could obviously include time (if we live in a single velocity universe) because the product of velocity, c, and time, t, is a distance, so an additional term d(ct)2 can be included with the other dimensions dx2, dy2, and dz2. There is then the question as to whether the term d(ct)2 will be added or subtracted from the other dimensions. It is clearly negative, because it is, in the absence of acceleration, a simple resultant, i.e., dx2 + dy2 + dz2 = d(ct)2, which implies that d(ct)2 changes sign when passed across the equality sign to the other dimensions: ds2 = Σ(dx2) = dx2 + dy2 + dz2 – d(ct)2 = 0 (for the absence of acceleration, therefore ignoring gravity, and also ignoring the contraction/time-dilation in inertial motion).

This formula, ds2 = Σ(dx2) = dx2 + dy2 + dz2 – d(ct)2, is known as the ‘Riemann metric’ of Minkowski spacetime. It is important to note that it is not the correct spacetime metric, which is precisely why Riemann did not discover general relativity back in 1854. [The algebraic Newtonian-equivalent (for weak fields) approximation in general relativity is the Schwarzschild metric, which is ds2 = (1 – 2GM/(rc2))-1(dx2 + dy2 + dz2) – (1 – 2GM/(rc2)) d(ct)2. This reduces to the special relativity metric for the case M = 0, i.e., the absence of gravitation. However, this does not imply that general relativity proves the postulates of special relativity. For example, in general relativity the velocity of light changes as gravity deflects light, but special relativity denies this. Because the deflection of light, and hence the velocity change, is an experimentally validated prediction of general relativity, that postulate of special relativity is inconsistent and in error. For this reason, it is misleading to begin teaching physics using special relativity.]
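As a rough numerical illustration of how small the Schwarzschild correction term is at the Earth’s surface (the constants below are standard values, not taken from the text):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of the Earth, kg
r = 6.371e6          # radius of the Earth, m
c = 2.998e8          # speed of light, m/s

factor = 2 * G * M / (r * c**2)   # the 2GM/(rc^2) term in the Schwarzschild metric
print(factor)                      # ~1.4e-9: spacetime near the Earth is very nearly flat
# With M = 0 the term vanishes and the metric reduces to the special relativity form.
```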

Professor Gregorio Ricci-Curbastro (1853-1925) took up Riemann’s suggestion and wrote a 23-page article in 1892 on ‘absolute differential calculus’, developed to express differentials in such a way that they remain invariant after a change of co-ordinate system. In 1901, Ricci and Tullio Levi-Civita (1873-1941) wrote a 77-page paper on this, Methods of the Absolute Differential Calculus and Their Applications, which showed how to represent equations invariantly of any absolute co-ordinate system. This relied upon summations of matrices of differential vectors. Ricci expanded Riemann’s system of notation to allow the Pythagorean dimensions of space to be defined by a line element or ‘Riemann metric’ (named the ‘metric tensor’ by Einstein in 1916):

g = ds2 = gmn dx-m dx-n.

The meaning of such a tensor is revealed by subscript notation, which identifies the rank of the tensor and its type of variance.

‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant). … We call four quantities Av the components of a covariant four-vector, if for any arbitrary choice of the contravariant four-vector Bv, the sum over v, Σ Av Bv = Invariant. The law of transformation of a covariant four-vector follows from this definition.’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.

When you look at the mechanism for the physical contraction, you see that general relativity is consistent with FitzGerald’s physical contraction, and I’ve shown this mathematically at my home page. Special relativity, according even to Albert Einstein, is superseded by general relativity; a fact that Lubos Motl may never grasp. He, like other ‘string theorists’, calls everyone interested in Feynman’s objective approach to science a ‘science-hater’. To a string theorist, a lack of connection to physical fact is ‘science-loving’, while a healthy interest in supporting empirically checked work is ‘science-hating’. (String theorists borrowed this idea from KGB propaganda, as explained by George Orwell’s ‘doublethink’ in the novel 1984.) Because string theory agrees with special relativity, crackpots falsely claim that general relativity is based on the same basic principle as special relativity; that is a lie, because special relativity is distinct from the general covariance that is the heart of general relativity:

‘... the law of the constancy of the velocity of light. But ... the general theory of relativity cannot retain this law. On the contrary, we arrived at the result according to this latter theory, the velocity of light must always depend on the coordinates when a gravitational field is present.’ - Albert Einstein, Relativity, The Special and General Theory, Henry Holt and Co., 1920, p111.

‘... the principle of the constancy of the velocity of light in vacuo must be modified, since we easily recognise that the path of a ray of light … must in general be curvilinear...’ - Albert Einstein, The Principle of Relativity, Dover, 1923, p114.

‘The special theory of relativity ... does not extend to non-uniform motion ... The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity... The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant). ...’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.

‘According to the general theory of relativity space without ether is unthinkable.’ – Albert Einstein, Sidelights on Relativity, Dover, New York, 1952, p23.

‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus…. The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’ – Professor A.S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), Space Time and Gravitation: An Outline of the General Relativity Theory, Cambridge University Press, Cambridge, 1921, pp. 20, 152.

The rank is denoted simply by the number of letters of subscript notation, so that Xa is a ‘rank 1’ tensor (a vector sum of first-order differentials, like net velocity or gradient over applicable dimensions), and Xab is a ‘rank 2’ tensor (for second order differential vectors, like acceleration). A ‘rank 0’ tensor would be a scalar (a simple quantity without direction, such as the number of particles you are dealing with). A rank 0 tensor is defined by a single number (scalar); a rank 1 tensor is a vector, described by four numbers representing components in three orthogonal directions and time; a rank 2 tensor is described by 4 x 4 = 16 numbers, which can be tabulated in a matrix. By definition, a covariant tensor (say, Xa) and a contra-variant tensor of the same variable (say, X-a) are distinguished by the way they transform when converting from one system of co-ordinates to another, a vector being defined as a rank 1 covariant tensor. Ricci used lower indices (subscript) to denote the matrix expansion of covariant tensors, and denoted a contra-variant tensor by superscript (for example xn). But even when bold print is used, this is still ambiguous with power notation, which of course means something completely different (the tensor xn = x1 + x2 + x3 + ... + xn, whereas for powers or indices xn = x1x2x3 ... xn). [Another step towards ‘beautiful’ gibberish then occurs whenever a contra-variant tensor is raised to a power, resulting in, say, (x2)2, which a logical mortal (whose eyes do not catch the bold superscript) immediately ‘sees’ as x4, causing confusion.] We avoid the ‘beautiful’ notation by using a negative subscript to represent contra-variant notation; thus x-n is here the contra-variant version of the covariant tensor xn.
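The component counts just quoted can be sketched in a few lines of Python (dim = 4 for spacetime):

```python
# Number of components of a tensor of a given rank in `dim` dimensions.
def components(rank, dim=4):
    return dim ** rank

assert components(0) == 1    # rank 0: scalar, a single number
assert components(1) == 4    # rank 1: four-vector
assert components(2) == 16   # rank 2: 4 x 4 matrix, e.g. the metric tensor
# A symmetric rank-2 tensor (like the metric) has only dim*(dim+1)/2 independent
# entries, which gives the 10 independent components mentioned further on:
assert 4 * (4 + 1) // 2 == 10
```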

Einstein wrote in his original paper on the subject, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916: ‘Following Ricci and Levi-Civita, we denote the contravariant character by placing the index above, and the covariant by placing it below.’ This was fine for Einstein, who had by that time been working with the theory of Ricci and Levi-Civita for five years, but does not have the clarity it could have. (A student who is used to indices from normal algebra finds the use of index notation for contravariant tensors absurd, and it is sensible to be as unambiguous as possible.) If we expand the metric tensor for m and n able to take values representing the four components of space-time (1, 2, 3 and 4 representing the ct, x, y, and z dimensions) we get the awfully long summation of the 16 terms added up like a 4-by-4 matrix (notice that, according to Einstein’s summation convention, tensors with indices which appear twice are to be summed over):

The first dimension has to be defined as negative since it represents the time component, ct. We can however simplify this result by collecting similar terms together and introducing the defined dimensions in terms of number notation, since the term dx-1 dx-1 = d(ct)2, while dx-2 dx-2 = dx2, dx-3 dx-3 = dy2, and so on. Therefore:

It is often asserted that Albert Einstein (1879-1955) was slow to apply tensors to relativity, resulting in the 10 years long delay between special relativity (1905) and general relativity (1915). In fact, you could more justly blame Ricci and Levi-Civita who wrote the long-winded paper about the invention of tensors (hyped under the name ‘absolute differential calculus’ at that time) and their applications to physical laws to make them invariant of absolute co-ordinate systems. If Ricci and Levi-Civita had been competent geniuses in mathematical physics in 1901, why did they not discover general relativity, instead of merely putting into print some new mathematical tools? Radical innovations on a frontier are difficult enough to impose on the world for psychological reasons, without this being done in a radical manner. So it is rare for a single group of people to have the stamina to both invent a new method, and to apply it successfully to a radically new problem. Sir Isaac Newton used geometry, not his invention of calculus, to describe gravity in his Principia, because an innovation expressed using new methods makes it too difficult for readers to grasp. It is necessary to use familiar language and terminology to explain radical ideas rapidly and successfully.

Professor Morris Kline describes the situation after 1911, when Einstein began to search for more sophisticated mathematics to build gravitation into space-time geometry:

‘Up to this time Einstein had used only the simplest mathematical tools and had even been suspicious of the need for "higher mathematics", which he thought was often introduced to dumbfound the reader. However, to make progress on his problem he discussed it in Prague with a colleague, the mathematician Georg Pick, who called his attention to the mathematical theory of Ricci and Levi-Civita. In Zurich Einstein found a friend, Marcel Grossmann (1878-1936), who helped him learn the theory; and with this as a basis, he succeeded in formulating the general theory of relativity.’ (M. Kline, Mathematical Thought from Ancient to Modern Times, Oxford University Press, 1990, vol. 3, p. 1131.)

Let us examine the developments Einstein introduced to accomplish general relativity, which aims to equate the mass-energy in space to the curvature of motion (acceleration) of a small test mass, called the geodesic path. Readers who want a good account of the full standard tensor manipulation should see the page by Dr John Baez or a good book by Sean Carroll, Spacetime and Geometry: An Introduction to General Relativity.

NEW MATERIAL INSERTED 8 JUNE 2007:

Curvature is best illustrated by plotting a graph of distance versus time and when the line curves (as for an accelerating car) that curve is ‘curvature’. It’s the curved line on a space-time graph that marks acceleration, be that acceleration due to a force acting upon gravitational mass or inertial mass (the equivalence principle of general relativity means that gravitational mass = inertial mass).

The point above is made clear by Professor Lee Smolin on page 42 of the USA edition of his 1996 book, ‘The Trouble with Physics.’

Next, in order to mathematically understand the Riemann curvature tensor, you need to understand the Christoffel symbol (which is an operator, not a tensor):

Gabc = (1/2)gcd [(dgda/dxb) + (dgdb/dxa) – (dgab/dxd)]

The Riemann curvature tensor is then represented by:

Rbcea = (dGbea/dxc) – (dGbca/dxe) + (GctaGbet) – (GetaGbct).
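These definitions can be checked mechanically on a simple curved space. Below is a sympy sketch for the unit 2-sphere, a standard test case with the textbook sign conventions (the 2-sphere example is mine, not from the text):

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
x = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])   # metric of a unit 2-sphere
ginv = g.inv()
n = 2

def Gamma(a, b, c):
    # Christoffel symbol Gamma^a_bc = (1/2) g^ad (dg_db/dx^c + dg_dc/dx^b - dg_bc/dx^d)
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[a, d] * (sp.diff(g[d, b], x[c])
                                          + sp.diff(g[d, c], x[b])
                                          - sp.diff(g[b, c], x[d]))
        for d in range(n)))

def Riemann(a, b, c, e):
    # R^a_bce = dGamma^a_be/dx^c - dGamma^a_bc/dx^e
    #           + Gamma^a_ct Gamma^t_be - Gamma^a_et Gamma^t_bc
    return sp.simplify(sp.diff(Gamma(a, b, e), x[c]) - sp.diff(Gamma(a, b, c), x[e])
                       + sum(Gamma(a, c, t) * Gamma(t, b, e)
                             - Gamma(a, e, t) * Gamma(t, b, c) for t in range(n)))

# Gamma^theta_phiphi = -sin(theta)cos(theta), and the sphere has nonzero curvature:
assert sp.simplify(Gamma(0, 1, 1) + sp.sin(theta) * sp.cos(theta)) == 0
assert sp.simplify(Riemann(0, 1, 0, 1) - sp.sin(theta)**2) == 0
```

A flat metric (the identity matrix) would give zero for every component of the Riemann tensor, matching the statement that flat spacetime means no curvature.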

If there is no curvature, spacetime is flat and things don’t accelerate. Notice that if there is any (fictional) ‘cosmological constant’ (a repulsive force between all masses, opposing gravity and increasing with the distance between the masses), it will only cancel out curvature at one particular distance, where gravity is cancelled out (within this distance there is curvature due to gravitation, and at greater distances there will be curvature due to the dark energy that is responsible for the cosmological constant). The only way to have a completely flat spacetime is to have totally empty space, which of course doesn’t exist in the universe we actually know.

The Ricci tensor is a Riemann tensor contracted in form by summing over a = b, so it is simpler than the Riemann tensor and is composed of 10 second-order differentials. General relativity deals with a change of co-ordinates by using the FitzGerald-Lorentz contraction factor, gamma = (1 – v2/c2)1/2. For understanding the physics, the Ricci tensor generally depends on gamma in the manner Rmn = c2(d gamma/dx-m)(d gamma/dx-n). Then the trace R = c2 d2 gamma/ds2. In each case the resulting dimensions are (acceleration/distance) = (time)-2, assuming we can treat the tensors as real numbers (which, as Heaviside showed, is often possible for operators).

Karl Schwarzschild produced a simple solution to the Einstein field equation in 1916 which shows the effect of gravity on spacetime, which reduces to the line element of special relativity for the impossible hypothetical case of zero mass.

Einstein at first built a representation of Isaac Newton’s gravity law a = MG/r2 (inward acceleration being defined as positive) in the form Rmn = 4*Pi*G*Tmn/c2, where Tmn is the mass-energy tensor, Tmn = Rho*um*un. If we consider just a single dimension for low velocities (gamma = 1), and remember E = mc2, then Tmn = T00 = Rho*u2 = Rho*(gamma*c)2 = E/(volume). Thus, Tmn/c2 is the effective density of matter in space (the mass equivalent of the energy of electromagnetic fields). We ignore pressure, momentum, etc., here:

To get solutions, the source of gravity such as the energy of electromagnetic field, can in general relativity be treated as a 'perfect fluid' with no drag properties. Since the gravity source is conveyed by an intervening medium (the spacetime fabric, which we show to be dynamical Yang-Mills exchange radiation based), this medium when considered as an electromagnetic field, causes gravity by behaving as a perfect fluid.

According to most statements of Newton’s second law and the universal gravitation law, F = ma = mMG/r2, but a serious flaw here is that F = ma is not an accurate statement, because during acceleration the mass m varies with the speed (mass increases dramatically at relativistic velocities, i.e., velocities approaching c). A more accurate version of Newton’s second law is therefore his original formulation, F = dp/dt, where p is momentum (for low velocities only, p ≈ mv). Even for the low velocity case where p ≈ mv, this law expands by the product rule in calculus to F = dp/dt ≈ d(mv)/dt = (m.dv/dt) + (v.dm/dt). For the situation where m is a variable (relativistic velocities), the gravity law will therefore be more complicated than Newton’s universal gravitational law (F = mMG/r2). The Poisson equation for the Newtonian potential is div.2 Phi = 4*Pi*Rho*G, where Rho is density. The Laplacian operator div.2 signifies the sum of second-order differentials of Phi; because there are three terms they add up (in spherical symmetry) to give 3a/r, where a is the gravitational acceleration along radius r. To convert div.2 Phi = 4*Pi*Rho*G into the Einstein field equation requires replacing the mass density Rho by the energy-momentum tensor Tmn, so that field energy and pressure energy are included along with the energy equivalent of the mass density, and also replacing div.2 Phi by a rank-2 tensor.

Einstein’s method of obtaining the final answer involved trial and error and the equivalence principle between inertial and gravitational mass, but using Professor Roger Penrose’s approach, Einstein recognised that while this equation reduces to Newton’s law for low speeds, it is in error because it violates the principle of conservation of mass-energy, since a gravitational field has energy (i.e., ‘potential energy’) and vice-versa.

During the deflection of a ray of light by the sun, the average angle between the ray’s direction of propagation and the line to the sun’s centre of gravity is a right angle. When gravity deflects an object with rest mass that is moving perpendicularly to the gravitational field lines, it speeds up the object as well as deflecting its direction. But because light is already travelling at its maximum speed (light speed), it simply cannot be speeded up at all by falling. Therefore, the half of the gravitational potential energy that would normally go into speeding up an object with rest mass cannot do so in the case of light, and must go instead into causing additional directional change (downward acceleration). This is the mathematical physics reasoning for why light is deflected by precisely twice the amount suggested by Newton’s a = MG/r2.
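A numerical check of that factor of two, for starlight grazing the sun (the constants are standard values, not taken from the text):

```python
import math

GM_sun = 1.327e20     # G times the Sun's mass, m^3/s^2
b = 6.96e8            # impact parameter: the solar radius, m
c = 2.998e8           # speed of light, m/s

newton = 2 * GM_sun / (c**2 * b)     # Newtonian deflection angle, radians
einstein = 2 * newton                # general relativity: twice the Newtonian value
arcsec = einstein * 180 * 3600 / math.pi
print(f"{arcsec:.2f} arcseconds")    # about 1.75", as observed by Eddington in 1919
```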

General relativity is an energy accountancy package, but you need physical intuition to use it. This reason is more of an accounting trick than a classical explanation. As Penrose points out, Newton’s law as expressed in tensor form with E = mc2 is fairly similar to Einstein’s field equation: Rmn = 4*Pi*G*Tmn/c2. Einstein’s result is: –(1/2)gmn R + Rmn = 8*Pi*G*Tmn/c2. The fundamental difference is due to the inclusion of the contraction term, –(1/2)gmn R, which doubles the value of the other side of the equality.

In an article in the book It Must Be Beautiful, Penrose explains the tensors of general relativity physically:

‘… when there is matter present in the vicinity of the deviating geodesics, the volume reduction is proportional to the total mass that is surrounded by the geodesics. This volume reduction is an average of the geodesic deviation in all directions … Thus, we need an appropriate entity that measures such curvature averages. Indeed, there is such an entity, referred to as the Ricci tensor, constructed from [the big Riemann tensor] R_abcd. Its collection of components is usually written R_ab. There is also an overall average single quantity R, referred to as the scalar curvature.’

Einstein’s field equation states that the Ricci tensor, minus half the product of the metric tensor and the scalar curvature, is equal to 8*Pi*G*Tmn/c2, where Tmn is the mass-energy tensor, which is basically the energy per unit volume (this is not so simple when you include relativistic effects and pressures). The key physical insight is the volume reduction, which can only be mechanistically explained as a result of the pressure of the spacetime fabric.

To solve the field equation, use is made of the simple concepts of proper lengths and proper times. The proper length in spacetime is equal to c ∫ (–gmn dx-m dx-n)1/2, while the proper time is ∫ (gmn dx-m dx-n)1/2. Notice that the ratio of proper length to proper time is always c.

Now, –(1/2)gmn R + Rmn = 8*Pi*G*Tmn/c2 is usually shortened to the vague and therefore unscientific and meaningless ‘Einstein equation’, G = 8*Pi*T. Teachers who claim that the ‘conciseness’ and ‘beautiful simplicity’ of ‘G = 8*Pi*T’ is a ‘hallmark of brilliance’ are therefore obfuscating. A year later, in his paper ‘Cosmological Considerations on the General Theory of Relativity’, Einstein force-fitted it to the assumed static universe of 1916 by inventing a new cosmic ‘epicycle’, the cosmological constant, to make gravity weaken faster than the inverse square law, become zero at a distance equal to the average separation distance of galaxies, and become repulsive at greater distances. In fact, as later proved, such an epicycle, apart from being merely wild speculation lacking a causal mechanism, would be unstable and would collapse into one lump. Einstein finally admitted that it was ‘the biggest blunder’ of his life.

There is a whole industry devoted to ‘G = 8*Pi*T’, which is stated as meaning ‘curvature of space = mass-energy’ in an attempt to obfuscate, so as to cover up the fact that Einstein had no mechanism of gravitation. In fact, of course, Einstein admitted in 1920, in his inaugural lecture at Leyden, that the deep meaning of general relativity is that in order to account for acceleration you need to dump the baggage associated with special relativity, and go back to having what he called an ‘ether’, or a continuum/fabric of spacetime. Something which doesn’t exist can hardly be curved, can it, eh?

The Ricci tensor is in fact a shortened form of the big Riemann rank 4 tensor (the expansions and properties of which are capable of putting anyone off science). To be precise, Rmn = Rmanb g-a-b, while R = Rmn g-m-n. No matter how many times people ‘hype’ up gibberish with propaganda labels such as ‘beautiful simplicity’, Einstein lacked a mechanism of gravity and fails to fit the big bang universe without force-fitting it using ad hoc ‘epicycles’. The original epicycle was the ‘cosmological constant’, Lambda. This was falsely used to keep the universe stable: G + Lambda*gmn = 8*Pi*T. This sort of thing, while admitted in 1929 to be an error by Einstein, is still being postulated today, without any physical reasoning and with just ad hoc mathematical fiddling to justify it, to ‘explain’ why distant supernovae are not being slowed down by gravitation in the big bang. I predicted a small positive cosmological constant epicycle in 1996 (hence the value of the dark energy) by showing that there is no long range gravitational retardation of distant receding matter, because that is a prediction of the gravity mechanism on this page, published via the October 1996 issue of Electronics World (letters page). Hence ‘dark energy’ is speculated as an invisible, unobserved epicycle to maintain ignorance. There is no ‘dark energy’, but you can calculate and predict the amount there would be from the fact that the expansion of the universe isn’t slowing down: just accept that the expansion goes as Hubble’s law with no gravitational retardation, and when you normalise this with the mainstream cosmological model (which falsely assumes retardation) you ‘predict’ the ‘right’ values for the fictitious cosmological constant and the fictitious dark energy.

Light has momentum and exerts pressure, delivering energy. Continuous exchange of high-energy gauge bosons can only be detected as the normal forces and inertia they produce.

Penrose’s Perimeter Institute lecture is interesting: ‘Are We Due for a New Revolution in Fundamental Physics?’ Penrose suggests quantum gravity will come from modifying quantum field theory to make it compatible with general relativity. I like the questions at the end, where Penrose is asked about the ‘funnel’ spatial pictures of black holes, and points out that they’re misleading illustrations, since you’re really dealing with spacetime, not a hole or distortion in 2 dimensions. The funnel picture really shows a 2-dimensional surface distorted into 3 dimensions, where in reality you have a 3-dimensional surface distorted into 4-dimensional spacetime. In his essay on general relativity in the book ‘It Must Be Beautiful’, Penrose writes: ‘… when there is matter present in the vicinity of the deviating geodesics, the volume reduction is proportional to the total mass that is surrounded by the geodesics. This volume reduction is an average of the geodesic deviation in all directions … Thus, we need an appropriate entity that measures such curvature averages. Indeed, there is such an entity, referred to as the Ricci tensor …’ Feynman discussed this simply as a reduction in radial distance around a mass of (1/3)MG/c2 = 1.5 mm for the Earth. It’s such a shame that the physical basics of general relativity are not taught, and the whole thing gets abstruse. The curved space or 4-d spacetime description is needed to avoid Pi varying due to gravitational contraction of radial distances but not circumferences.

The velocity needed to escape from the gravitational field of a mass (ignoring atmospheric drag), beginning at distance x from the centre of mass, by Newton’s law will be v = (2GM/x)1/2, so v2 = 2GM/x. The situation is symmetrical; ignoring atmospheric drag, the speed that a ball falls back and hits you is equal to the speed with which you threw it upwards (the conservation of energy). Therefore, the energy of mass in a gravitational field at radius x from the centre of mass is equivalent to the energy of an object falling there from an infinite distance, which by symmetry is equal to the energy of a mass travelling with escape velocity v.

By Einstein’s principle of equivalence between inertial and gravitational mass, this gravitational acceleration field produces an identical effect to ordinary motion. Therefore, we can place the square of escape velocity (v2 = 2GM/x) into the Fitzgerald-Lorentz contraction, giving

g = (1 – v2/c2)1/2 = [1 – 2GM/(xc2)]1/2.
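A numerical sketch (standard constants for the Earth assumed here, not taken from the text) showing that inserting the escape velocity into the contraction factor reproduces the Schwarzschild-type factor exactly, as the substitution above requires:

```python
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24     # mass of the Earth, kg
x = 6.371e6      # radius of the Earth, m
c = 2.998e8      # speed of light, m/s

v_esc = (2 * G * M / x) ** 0.5                 # escape velocity, about 11.2 km/s
gamma_motion = (1 - v_esc**2 / c**2) ** 0.5    # contraction factor with v = v_esc
gamma_gravity = (1 - 2 * G * M / (x * c**2)) ** 0.5
# The two factors agree, since v_esc^2 = 2GM/x by construction:
assert abs(gamma_motion - gamma_gravity) < 1e-12
print(v_esc)
```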

However, there is an important difference between this gravitational transformation and the usual Fitzgerald-Lorentz transformation, since length is only contracted in one dimension with velocity, whereas length is contracted equally in 3 dimensions (in other words, radially outward in 3 dimensions, not sideways between radial lines!), with spherically symmetric gravity. Using the binomial expansion to the first two terms of each:

Fitzgerald-Lorentz contraction effect:

g = x/x0 = t/t0 = m0/m = (1 – v2/c2)1/2 = 1 – ½v2/c2 + ...

Gravitational contraction effect:

g = x/x0 = t/t0 = m0/m = [1 – 2GM/(xc2)]1/2 = 1 – GM/(xc2) + ...,

where for spherical symmetry (x = y = z = r), we have the contraction spread over three perpendicular dimensions, not just one as in the case of the FitzGerald-Lorentz contraction: x/x0 + y/y0 + z/z0 = 3r/r0. Hence the radial contraction of space around a mass is r/r0 = 1 – GM/(3rc2).
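The first-order binomial expansions used in both cases can be verified with sympy:

```python
import sympy as sp

# First-order binomial expansion: (1 - u)**(1/2) ~ 1 - u/2 for small u.
u = sp.symbols('u')
expansion = sp.series(sp.sqrt(1 - u), u, 0, 2).removeO()
assert sp.expand(expansion) == 1 - u/2
# With u = v^2/c^2 this gives 1 - (1/2)v^2/c^2 (FitzGerald-Lorentz case);
# with u = 2GM/(x c^2) it gives 1 - GM/(x c^2) (gravitational case).
```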

Therefore, clocks slow down not only when moving at high velocity, but also in gravitational fields, and distance contracts in all directions toward the centre of a static mass. The variation in mass with location within a gravitational field shown in the equation above is due to variations in gravitational potential energy. The contraction of space is by (1/3) GM/c2. This physically relates the Schwarzschild solution of general relativity to the special relativity line element of spacetime.

This is the 1.5-mm contraction of earth’s radius Feynman obtains, as if there is pressure in space. An equivalent pressure effect causes the Lorentz-FitzGerald contraction of objects in the direction of their motion in space, similar to the wind pressure when moving in air, but without viscosity. Feynman was unable to proceed with the LeSage gravity and gave up on it in 1965.
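The 1.5 mm figure checks out numerically (standard Earth constants assumed, not taken from the text):

```python
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24     # mass of the Earth, kg
c = 2.998e8      # speed of light, m/s

contraction = G * M / (3 * c**2)   # the (1/3)GM/c^2 radial contraction quoted above
print(contraction)                  # about 1.5e-3 m, i.e. Feynman's 1.5 mm for the Earth
```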

The gravity force is the shielded inward reaction (by Newton’s 3rd law the outward force has an equal and opposite reaction):

The cross-sectional area of the shield projected to radius R is equal to the area of the fundamental particle (π multiplied by the square of the radius of a black hole of similar mass), multiplied by the ratio (R/r)^2, which expresses the inverse-square law for the geometry of the implosion. This (R/r)^2 ratio is very big for a falling apple! Because R is a fixed distance as far as we are concerned here, the most significant variable is the 1/r^2 factor, which we all know is the Newtonian inverse square law of gravity.

Illustration above: exchange force (gauge boson) radiation force cancels out in symmetrical situations outside the cone area (although there is compression, equal to the contraction predicted by general relativity), since the net sideways force is the same in each direction unless a shielding mass intervenes. Shielding is caused simply by the fact that nearby matter is not significantly receding, whereas distant matter is receding. Gravity is the net force introduced where a mass shadows you, namely in the double-cone areas shown above. In all other directions the symmetry cancels out and produces no net force. Hence gravity can be quantitatively predicted using only well-established facts of quantum field theory, recession, etc. In the illustration above, only a ‘core’ of a fundamental particle (the shielding cross-section associated with the ‘Higgs boson’ type mass-contributors in the Standard Model) does the shielding; the rest of the particle, with its classical electron radius, is generally much bigger, but it does not all contribute to the actual mass of the electron!

Gravity is not due to a surface compression but instead is mediated through the void between fundamental particles in atoms by exchange radiation which does not recognise macroscopic surfaces, but only interacts with the subnuclear particles associated with the elementary units of mass. The radial contraction of the earth's radius by gravity, as predicted by general relativity, is 1.5 mm. [This contraction of distance hasn't been measured directly, but the corresponding contraction or rather ‘dilation’ of time has been accurately measured by atomic clocks which have been carried to various altitudes (where gravity is weaker) in aircraft. Spacetime tells us that where distance is contracted, so is time.]

This contraction is not caused by a material pressure carried through the atoms of the earth, but is instead due to the gravity-causing exchange radiation, which is carried through the void (nearly 100% of atomic volume is void). Hence the contraction is independent of the chemical nature of the earth. (Similarly, the contraction of moving bodies is caused by the same exchange radiation effect, and so is independent of the material's composition.)

The effective shielding radius of a black hole of mass M is equal to 2GM/c^2. A shield, like the planet earth, is composed of very small, sub-atomic particles. The very small shielding area per particle means that there will be an insignificant chance of the fundamental particles within the earth ‘overlapping’ one another by being directly behind each other.
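The shielding radius 2GM/c^2 can be evaluated for the two masses used elsewhere in this text, the electron and the earth. A minimal sketch, assuming standard constant values:

```python
# Black hole ('shielding') radius r = 2GM/c^2 for two masses used in the text.
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2 (standard value, assumed)
c = 2.998e8             # speed of light, m/s
m_electron = 9.109e-31  # electron mass, kg
M_earth = 5.972e24      # earth mass, kg

def shielding_radius(M):
    """Schwarzschild-type radius 2GM/c^2 in metres."""
    return 2 * G * M / c**2

print(f"electron: {shielding_radius(m_electron):.2e} m")  # ~1.35e-57 m
print(f"earth:    {shielding_radius(M_earth):.2e} m")     # ~8.87e-03 m
```

The electron value, about 1.35 x 10^{-57} m, matches the figure quoted later in this document.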

The total shield area is therefore directly proportional to the total mass: the total shield area is equal to the area of shielding by 1 fundamental particle, multiplied by the total number of particles. (Newton showed that, by the inverse-square gravity law, a spherically symmetrical arrangement of masses, say in the earth, gives the same gravity as the same mass located at the centre, because the mass within a shell depends on its area and hence on the square of its radius.) The earth’s mass in the Standard Model is due to particles associated with up and down quarks: the Higgs field.

A local mass shields the force-carrying radiation exchange, because the distant masses in the universe have high speed recession, but the nearby mass is not receding significantly. By Newton’s 2nd law the outward force of a nearby mass which is not receding (in spacetime) from you is F = ma = m.dv/dt = mv/(x/c) = mcv/x = 0. Hence, by Newton’s 3rd law, the inward force of gauge bosons coming towards you from that mass is also zero; there is no action and so there is no reaction. As a result, the local mass shields you, so you get pushed towards it. This is why apples fall.

Shielding: since most of the mass of atoms is associated with the fields of gluons and virtual particles surrounding quarks, these are the gravity-affected parts of atoms, not the electrons or quarks themselves.

The mass of a nucleon is typically 938 MeV, compared to just 0.511 MeV for an electron and 3-5 MeV for each of the three quarks inside a neutron or a proton. Hence the actual charges of matter aren't associated with much of the mass of material. Almost all the mass comes from the massive mediators of the strong force fields between quarks in nucleons, and between nucleons in nuclei heavier than hydrogen. (In the well-tested and empirically validated Standard Model, charges like fermions don't have mass at all; the entire mass is provided by a vacuum 'Higgs field'. The exact nature of such a field is not predicted, although some constraints on its range of properties are evident.)
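Using the rough figures just quoted, a one-line calculation shows how little of the nucleon mass the bare quark masses supply (the 4 MeV per quark is a representative value from the 3-5 MeV range above):

```python
# Fraction of a nucleon's 938 MeV rest energy accounted for by bare quark masses,
# using the rough figures quoted in the text.
nucleon_mass_MeV = 938.0
quark_mass_MeV = 4.0   # representative light (up/down) quark mass, assumed from the 3-5 MeV range
n_quarks = 3

quark_fraction = n_quarks * quark_mass_MeV / nucleon_mass_MeV
print(f"Quarks supply only ~{100*quark_fraction:.1f}% of the nucleon mass")  # ~1.3%
```

So roughly 99% of the nucleon mass comes from the force fields, not the quark charges themselves, as the paragraph above states.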

The radiation is received by mass almost equally from all directions, coming from other masses in the universe; the radiation is in effect reflected back the way it came if there is symmetry that prevents the mass from being moved. The result is then a mere compression of the mass by the amount mathematically predicted by general relativity, i.e., the radial contraction is by the small distance GM/(3c^2) = 1.5 mm for the contraction of the spacetime fabric by the mass of the Earth. Plotting the earth and circles of average distance to the observable distant receding matter (not to scale), the geometry of the mechanism becomes clear:

The electron has the characteristics of a gravity field trapped energy current: a Heaviside energy current loop of black hole size (radius 2GM/c^2) for its mass, as shown by gravity mechanism considerations (see ‘about’ information on the right hand side of this blog for links). The looping of energy current, basically a Poynting-Heaviside energy current trapped in a small loop, causes a spherically symmetric E-field and a toroidally shaped B-field, which at great distances reduces (because of the effect of the close-in radial electric fields on transverse B-fields in the vacuum polarization zone within 10^{-15} metre of the electron black hole core) to a simple magnetic dipole field (those B-field lines which are parallel to E-field lines, i.e. the polar B-field lines of the toroid, obviously can’t ever be attenuated by the radial E-field). This means that since the E- and B-fields in a photon are related simply by E = c*B, the vacuum polarization reduces only E by a factor of 137, and not B! This has long been evidenced in practice, as Dirac showed in 1931:

‘When one considers Maxwell’s equations for just the electromagnetic field, ignoring electrically charged particles, one finds that the equations have some peculiar extra symmetries besides the well-known gauge symmetry and space-time symmetries. The extra symmetry comes about because one can interchange the roles of the electric and magnetic fields in the equations without changing their form. The electric and magnetic fields in the equations are said to be dual to each other, and this symmetry is called a duality symmetry. Once electric charges are put back in to get the full theory of electrodynamics, the duality symmetry is ruined. In 1931 Dirac realised that to recover the duality in the full theory, one needs to introduce magnetically charged particles with peculiar properties. These are called magnetic monopoles and can be thought of as topologically non-trivial configurations of the electromagnetic field, in which the electromagnetic field becomes infinitely large at a point. Whereas electric charges are weakly coupled to the electromagnetic field with a coupling strength given by the fine structure constant alpha = 1/137, the duality symmetry inverts this number, demanding that the coupling of the magnetic charge to the electromagnetic field be strong with strength 1/alpha = 137. [This applies to the magnetic dipole Dirac calculated for the electron, assuming it to be a Poynting wave where E = c*B and E is shielded by vacuum polarization by a factor of 1/alpha = 137.]

‘If magnetic monopoles exist, this strong [magnetic] coupling to the electromagnetic field would make them easy to detect. All experiments that have looked for them have turned up nothing…’ - P. Woit, Not Even Wrong, Jonathan Cape, London, 2006, pp. 138-9. [Emphasis added.]

The Pauli exclusion principle normally makes the magnetic moments of all electrons undetectable on a macroscopic scale (apart from magnets made from iron, etc.): the magnetic moments usually cancel out because adjacent electrons always pair with opposite spins! If there are magnetic monopoles in the Dirac sea, there will be as many ‘north polar’ monopoles as ’south polar’ monopoles around, so we can expect not to see them because they are so strongly bound!

‘Here at Padua is the principal professor of philosophy whom I have repeatedly and urgently requested to look at the moon and planets through my glass which he pertinaciously refuses to do. Why are you not here? What shouts of laughter we should have at this glorious folly! And to hear the professor of philosophy at Pisa labouring before the Grand Duke with logical arguments, as if with magical incantations, to charm the new planets out of the sky.’ - Letter of Galileo to Kepler, 1610.

‘There will certainly be no lack of human pioneers when we have mastered the art of flight. Who would have thought that navigation across the vast ocean is less dangerous and quieter than in the narrow, threatening gulfs of the Adriatic, or the Baltic, or the British straits? Let us create vessels and sails adjusted to the heavenly ether, and there will be plenty of people unafraid of the empty wastes. In the meantime, we shall prepare, for the brave sky travelers, maps of the celestial bodies - I shall do it for the moon, you, Galileo, for Jupiter.’ - Letter from Johannes Kepler to Galileo Galilei, April 1610.

Kepler was a crackpot/noise maker; despite his laws and discovery of elliptical orbits, he got the biggest problem wrong, believing that the earth - which William Gilbert had discovered to be a giant magnet - was kept in orbit around the sun by magnetic force. So he was a noise generator, a crackpot. If you drop a bag of nails, they don’t all align to the earth’s magnetism, because it is so weak, but they do all fall - because gravity is relatively strong due to the immense amounts of mass involved. (For unit charges, electromagnetism is stronger than gravity by a factor like 10^{40}, but that is not the right comparison here, since the majority of the magnetism in the earth due to fundamental charges is cancelled out by the fact that charges are paired with opposite spins, cancelling out their magnetism. The tiny magnetic field of the planet earth is caused by some kind of weak dynamo mechanism due to the earth’s rotation and the liquid nickel-iron core of the earth, and the earth’s magnetism periodically flips and reverses naturally - it is weak!) So just because a person gets one thing right, or one thing wrong, or even not even wrong, that doesn’t mean that all their ideas are good/rubbish.

As Arthur Koestler pointed out in The Sleepwalkers, it is entirely possible for there to be revolutions without any really fanatic or even objective/rational proponents (Newton

‘As long as the leadership of the particle theory community refuses to face up to what has happened and continues to train young theorists to work on a failed project, there is little likelihood of new ideas finding fertile ground in which to grow. Without a dramatic change in the way theorists choose what topics to address, they will continue to be as unproductive as they have been for two decades, waiting for some new experimental result finally to arrive.’

John Horgan’s excellent 1996 book The End of Science, which Woit argues is the future of physics if people don’t stick to explaining what is known (rather than speculating about unification at energies higher than can ever be seen, parallel universes, extra dimensions, and other non-empirical drivel), states:

‘A few diehards dedicated to truth rather than practicality will practice physics in a nonempirical, ironic mode, plumbing the magical realm of superstrings and other esoterica and fretting about the meaning of quantum mechanics. The conferences of these ironic physicists, whose disputes cannot be experimentally resolved, will become more and more like those of that bastion of literary criticism, the Modern Language Association.’

‘… controversy is easily defused by a good experiment. When such unpleasantness is encountered, both warring factions should seek a resolution in terms of definitive experiments, rather than continued personal mudslinging. This is the difference between scientific subjects, such as engineering, and non-scientific subjects such as art. Nobody will ever be able to devise an uglyometer to quantify the artistic merits of a painting, for example.’ (If string theorists did this, string theory would be dead, because my mechanism, published in Oct. 96 E.W. and Feb. 97 Science World, predicts the current cosmological results which were discovered about two years later by Perlmutter.)

‘The ability to change one’s mind when confronted with new evidence is called the scientific mindset. People who will not change their minds when confronted with new evidence are called fundamentalists.’ -

This comment from Dr Love is extremely depressing; we all know today’s physics is a religion. I found this out after email exchanges with, I believe, Dr John Gribbin, the author of numerous crackpot books like ‘The Jupiter Effect’ (claiming Los Angeles would be destroyed by an earthquake in 1982), and quantum books trying to prove Lennon’s claim that ‘nothing is real’. After I explained the facts to Gribbin, he emailed me a question something like (I have archives of the emails, by the way, so could check the exact wording if required): ‘you don’t seriously expect me to believe that or write about it?’

‘… a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.’ -

But, being anti-belief and anti-religious intrusion into science, I’m not interested in getting people to believe truths but, on the contrary, to question them. Science is about confronting facts. Dr Love suggests a U(3,2)/U(3,1)xU(1) alternative to the Standard Model, which provides a test of my objectivity. I can’t understand his model properly, because it reproduces particle properties in a way I don’t understand, and doesn’t appear to yield any of the numbers I want, like force strengths, particle masses and causal explanations. Although he has a great many causal explanations in his paper, which are highly valuable, I don’t see how they connect to the alternative to the Standard Model. He has an online paper on the subject as a PDF file, ‘Elementary Particles as Oscillations in Anti-de Sitter Space-Time’, which I have several issues with: (1) anti-de Sitter spacetime is a stringy assumption to begin with (in the sense, for example, that it has a negative cosmological constant, which nobody has ever observed, just as extra dimensions and fairies aren’t observed); (2) I don’t see checkable predictions. However, maybe further work on such ideas will produce more justification for them; they haven’t had the concentration of effort which string theory has had.

If you actually check what Feynman said in the "Feynman Lectures on Gravitation", page 30, you will see that the (so far undetected) graviton does not, a priori, have to be spin 2, and in fact spin 2 may not work, as Feynman points out.

This elevation of a mere possibility to a truth, and then the use of this truth to convince oneself one has the correct theory, is a rather large extrapolation.’

Note that I also read those Feynman lectures on gravity when Penguin Books brought them out in paperback a few years ago, and saw the same thing, although I hated reading the abject speculation in them, where Feynman suggests that the strength ratio of gravity to electromagnetism is like the ratio of the radius of the universe to the radius of a proton, without any mechanism or dynamics. Tony Smith quotes a bit of them on his site, which I re-quote on

On my home page there are three main sections dealing with the gravity mechanism dynamics, namely near the top of http://feynman137.tripod.com/ (scroll down to first illustration), at http://feynman137.tripod.com/#a, and, for technical calculations predicting the strength of gravity accurately, at http://feynman137.tripod.com/#h. The first discussion, near the top of the page, explains how shielding occurs: ‘… If you are near a mass, it creates an asymmetry in the radiation exchange, because the radiation normally received from the distant masses in the universe is red-shifted by high speed recession, but the nearby mass is not receding significantly. By Newton’s 2nd law the outward force of a nearby mass which is not receding (in spacetime) from you is F = ma = mv/t = mv/(x/c) = mcv/x = 0. Hence by Newton’s 3rd law, the inward force of gauge bosons coming towards you from that mass is also zero; there is no action and so there is no reaction. As a result, the local mass shields you, creating an asymmetry. So you get pushed towards the shield. This is why apples fall. …’ This brings up the issue of how electromagnetism works. Obviously, the charges of gravity and electromagnetism are different: masses don’t have the symmetry properties of electric charge. For example, mass increases with velocity, while electric charge doesn’t. I’ve dealt with this in the last couple of posts on this blog, but unification physics is a big field and I’m still making progress. One comment about spin. Fermions have half-integer spin, which means they are like a Möbius strip, requiring 720 degrees of rotation for a complete exposure of their surface; Fermi-Dirac statistics describe such particles. Bosons have integer spin, and spin-1 bosons are relatively normal in that they only require 360 degrees of rotation for a complete revolution. Spin-2 gravitons would presumably require only 180 degrees of rotation per revolution, so they appear stringy to me.
I think the exchange radiation of gravity and electromagnetism is the same thing - based on the arguments in previous posts - and is spin-1 radiation, albeit continuous radiation. It is quite possible to have continuous radiation in a Dirac sea, just as you can have continuous waves composed of molecules in a water-based sea.

‘A fruitful natural philosophy has a double scale or ladder ascendant and descendant; ascending from experiments to axioms and descending from axioms to the invention of new experiments.’ - Francis Bacon, Novum Organum.

This would allow LQG to be built as a bridge between path integrals and general relativity. I wish Smolin or Woit would pursue this.

Light ... "smells" the neighboring paths around it, and uses a small core of nearby space. (In the same way, a mirror has to have enough size to reflect normally: if the mirror is too small for the core of nearby paths, the light scatters in many directions, no matter where you put the mirror.)

- Feynman, QED, Penguin, 1990, page 54.

That's wave particle duality explained. The path integrals don't mean that the photon goes on all possible paths but as Feynman says, only a "small core of nearby space".

The double-slit interference experiment is very simple: the photon has a transverse spatial extent. If that overlaps two slits, then the photon gets diffracted by both slits, displaying interference. This is obfuscated by people claiming that the photon goes everywhere, which is not what Feynman says. It doesn't take every path: most of the energy is transferred along the classical path, and the rest near it.

Similarly, you find people saying that QFT says that the vacuum is full of loops of annihilation-creation. When you check what QFT says, it actually says that those loops are limited to the region between the IR and UV cutoffs. If loops existed everywhere in spacetime, i.e. below the IR cutoff (beyond about 1 fm from a charge), then the whole vacuum would be polarized enough to cancel out all real charges. If loops existed beyond the UV cutoff, i.e. down to zero distance from a particle, then the loops would have infinite energy and momenta, and the effects of those loops on the field would be infinite, again causing problems.

So the vacuum simply isn't full of loops (they only extend out to 1 fm around particles). Hence no dark energy mechanism.

String theory

Mainstream string theory or M-theory (due to Witten, 1995) is the 10-dimensional superstring / 11-dimensional supergravity unification, which can't predict anything potentially checkable. It says that there are 10 dimensions of particle physics, predicting 10^500 or so different Standard Models (because particle properties can take many values, due to the many parameters of size and shape of the complex 6-dimensional Calabi-Yau manifold which compactifies 6 of the 10 dimensions to give 4-d spacetime), each in a parallel universe! M-theory says that 10-dimensional superstring theory is a (mem)brane on the 11-dimensional hyperspace of supergravity, like a 2-dimensional flat credit card containing a 3-dimensional hologram, or 3-dimensional space containing ‘curvature’ due to time dimension(s). Despite all the ad hoc speculation, M-theory can’t give any checkable physics!

The unobservable extra dimensions are curled up into an imaginary Planck-scale Calabi-Yau manifold, and 1:1 boson:fermion supersymmetric partners are postulated for all Standard Model particles, to achieve ever-unobservable unification at the Planck scale. Watch how string theory dances around to impress the public without giving any real physics! It cannot ever go away, because it is not a falsifiable theory. So after being ridiculed and dismissed, it always survives and comes back again to sneer at alternatives which are checkable!

Euclidean geometry is disproved by the curvature caused by gravitational fields. The best example of this, which helps to clearly explain the entire problem, is not the deflection of light - after all, bullets can be similarly deflected by wind, but that is obviously not taken to disprove Euclid - but the contraction implied by general relativity. The radius of the earth is contracted by (1/3)GM/c^2 = 1.5 millimetres, but the circumference - because it is orthogonal to the gravitational field lines - suffers no contraction. Since circumference divided by radius equals the ratio 2π, it follows that for this ratio to be unaffected by contraction there must be a fourth dimension, so that the three observable dimensions are distorted by curvature. This is by analogy to the way that two-dimensional geometrical diagrams drawn on a curved background suffer distortions. For example, try drawing a geometric diagram on the surface of a globe; the rules of Euclidean plane geometry for the relationship between angles and lengths will generally be inaccurate and need corrections.
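The globe example can be made quantitative: on a sphere, the angles of a triangle sum to more than 180 degrees by the ‘spherical excess’, equal to the triangle’s area divided by the square of the sphere’s radius (in radians). A minimal sketch (the earth-radius value is assumed purely for illustration):

```python
# Spherical excess: on a sphere, angle sum of a triangle = 180 deg + area/R^2 (in radians).
import math

R = 6.371e6  # sphere radius, m (earth radius, assumed for the example)

# Triangle: the north pole plus two equator points 90 degrees of longitude apart.
# This triangle covers exactly 1/8 of the sphere's surface.
area = (4 * math.pi * R**2) / 8
excess_deg = math.degrees(area / R**2)
print(f"Angle sum = {180 + excess_deg:.0f} degrees")  # 270: three right angles
```

The triangle has three 90-degree corners, so its angle sum is 270 degrees rather than the Euclidean 180: plane geometry fails on a curved background, which is the point of the globe analogy above.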

Another, physically equivalent, way of interpreting the contraction and all the other effects of general relativity is by a causal mechanism of Yang-Mills exchange radiation in just three dimensions. This mechanism is completely compatible with the mathematical theory of general relativity. In this situation, there are no extra dimensions. The contraction term in general relativity - which causes all of the departures from the predictions of Newtonian three-dimensional gravitation - is then due to physical compression along radial lines. Because there is no transverse (circumference) contraction, the reduction in radius can be interpreted as a predictable change in the observable value of π, should it be possible to measure this.

However, the extra dimensional speculation on general relativity, reinforced by confirmation of general relativity in various experimental tests, has led to a hardening of orthodoxy in favour of the real existence of extra dimensions. Although general relativity is 3 + 1 dimensional, with the extra dimension being treated as a resultant (time), the Kaluza-Klein theory adds still another (fifth) dimension, which gives a way of combining electromagnetism and gravitation qualitatively (it makes no checkable predictions) through general relativity. The extra dimension was supposed to be rolled up into a small loop that constitutes a particle of matter. Vibrations of the loop or closed string allow it to represent different energy states, each corresponding to a different fundamental particle. There is no checkable prediction from this theory, not even the size of the loop, which is postulated to be of Planck size merely because of Planck's fame. Planck's length - which he based on arbitrary dimensional analysis - is far bigger (G^{1/2}h^{1/2}c^{-3/2} ~ 10^{-35} m) than the black hole radius of an electron (2GM/c^2 = 1.3 x 10^{-57} m), so it is highly suspect whether the dimensional-analysis numerology of the Planck size reflects any real physics. The rest-mass energies of particles cannot be predicted from string theory. Later, the ad hoc suggestion was made that the Calabi-Yau six-dimensional manifold be included in the string theory, leading to 10/11-dimensional superstrings/supergravity (unified by ideas like Witten's M-theory and the holographic conjecture) with a 'landscape' of 10^350 or so values of the quantum field theory vacuum energy ground state.
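The two length scales compared in that paragraph can be computed directly. A minimal sketch, assuming standard values for G, h, c and the electron mass:

```python
# Compare the Planck length G^(1/2) h^(1/2) c^(-3/2) with the electron's
# black hole radius 2GM/c^2, as in the text.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2 (standard value, assumed)
h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
m_e = 9.109e-31  # electron mass, kg

planck_length = (G * h / c**3) ** 0.5   # ~4e-35 m
electron_bh_radius = 2 * G * m_e / c**2 # ~1.35e-57 m

print(f"Planck length:              {planck_length:.1e} m")
print(f"Electron black hole radius: {electron_bh_radius:.1e} m")
print(f"Ratio:                      {planck_length/electron_bh_radius:.1e}")
```

The Planck length comes out around 22 orders of magnitude larger than the electron black hole radius, which is the disparity the text uses against the Planck-scale loop assumption.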

The correct way to predict gravity is to build upon experimental facts. At the time general relativity was built, in November 1915 by Hilbert and Einstein, it was not known that the matter of the universe is receding in all directions, nor that the recession is not being slowed by gravity. Einstein, in his 1917 reconciliation of general relativity with cosmology, adopted a 'steady state' theory which has subsequently been disproved by observations. There are many cranks who don't like nature the way observation shows it to be, and don't like the big bang in any form. Generally they prefer to invent a completely speculative theory that red-shifted spectra are 'somehow' being red-shifted by a cause other than recession, and that the universe is in a steady state. In fact, none of these theories is consistent with the observations. The spectrum of light made red by gas or dust scattering is entirely different from the uniform frequency-independent red-shift seen in the recession of distant clusters of galaxies. The recession red-shift theory is easily experimentally proved to be correct by the fact that recession of a light source does cause the light received to be red-shifted in exactly the same way as the red-shift from distant clusters of galaxies. The alternative (steady-state) theories all involve inventing unobserved, unscientific 'explanations' and ignoring the proved (recession) mechanism. Professor Ned Wright has stated: 'There is no known interaction that can degrade a photon's energy without also changing its momentum, which leads to a blurring of distant objects which is not observed. The

The correct theory of quantum gravity to describe general relativity, applied to cosmology, must discriminate between the big bang induced cosmic expansion and the contraction of the dimensions describing matter due to gravity. There are three expanding dimensions in the big bang cosmology and three dimensions for matter that are contracted by motion and by gravitation.

Yang-Mills quantum field theory is abstract, yet suggests physical dynamics: the exchange of gauge bosons causes forces. This is clearly displayed by the familiar Feynman diagrams depicting fundamental force exchange radiations. Via the October 1996 issue of the journal Electronics World, a mechanism was made available in an eight-page-long article.

Dr Thomas Love has proved that the entanglement philosophy is just a statement of the mathematical discontinuity between the time-dependent and time-independent Schroedinger wave equations when a measurement is taken. There’s no evidence for metaphysical wave function collapse in the authority of Niels Bohr, the Solvay Congress of 1927, or Alain Aspect’s determination that the polarizations of photons emitted in opposite directions by an electron correlate when measured metres apart.

Copenhagen quantum mechanics is speculative. So don’t build it up as a pet religion. The uncertainty principle in the Dirac sea has a perfectly causal explanation: on small distance scales, particles get randomly accelerated/decelerated/deflected by the virtual particles of the spacetime vacuum. This is like Brownian motion. On large scales, the interactions cancel out. If so, then photon polarizations correlate not because of metaphysical "wavefunction entanglement" but because the uncertainty principle doesn’t apply to measurements on light speed bosons, and only to massive fermions which are still there after you actually detect them.

A loop is a rotational transformation in the vacuum. The loop is physically the exchange of energy-delivering field radiation from one mass to another, and back to the first mass again - like the exchange radiation in Yang-Mills (Standard Model) theories, but with the added restriction of the conservation (looping between masses) of the exchange radiation. Things accelerated by a gravity field are losing gravitational potential energy and gaining kinetic energy, so the exchange radiation carries energy. If the LQG spinfoam vacuum does describe a Yang-Mills energy exchange scheme, you can get solid checkable predictions by taking account of the effect of the expansion of the universe on these conserved gravity field mediators.

If you observe two supernovae at the same time, you can in fact determine which occurred first by simply noting from their redshifts how far they are from you in time and space, and hence how long after the big bang each occurred. Hence there is an absolute time scale. Special relativity as usually taught denies absolute chronology, which doesn’t work where you can place absolute chronology on events like supernovae. A better theory will clearly separate the treatment of the expanding big bang spacetime dimensions (which measure the volume of the vacuum), from the local contractable/time dilation-able dimensions used for matter like clocks & rulers. Matter is contracted (in spacetime) by motion and gravity. But the big bang’s spacetime continues expanding. Hence the mathematical treatment of the universe needs to clearly distinguish between the 3 perpetually expanding spacetime dimensions for the volume of the universe, and the 3 contractable dimensions used to describe matter. When Einstein and Hilbert built general relativity in November 1915, they simply didn’t know that the volume of the vacuum was perpetually expanding. People thought it was static.

Mechanism of electromagnetism

Above: mechanism of attraction and repulsion in electromagnetism, and the capacitor summation of displacement current energy flowing between accelerating (spinning) charges as gauge bosons (by analogy to Prevost’s 1792 model of constant temperature as a radiation equilibrium). The net exchange is like two machine gunners firing bullets at each other; they recoil apart. The gauge bosons pushing them together are redshifted, like nearly spent bullets coming from a great distance, and are not enough to prevent repulsion. In the case of attraction, the same principle applies. The two opposite charges shield one another and get pushed together. Although each charge is radiating and receiving energy on the outer sides, the inward push is from redshifted gauge bosons, and the emission is not redshifted. The result is just like two people, standing back to back, firing machine guns. The recoil pushes them together, hence the attraction force.

‘As I proceeded with the study of Faraday, I perceived that his method of conceiving the phenomena was also a mathematical one, though not exhibited in the conventional form of mathematical symbols. I also found that these methods were capable of being expressed in the ordinary mathematical forms … For instance, Faraday, in his mind’s eye, saw lines of force transversing all space where the mathematicians saw centres of force attracting at a distance: Faraday saw a medium where they saw nothing but distance: Faraday sought the seat of the phenomena in real actions going on in the medium, they were satisfied that they had found it in a power of action at a distance…’ – Dr J. Clerk Maxwell, Preface, A Treatise on Electricity and Magnetism, 1873.

‘In fact, whenever energy is transmitted from one body to another in time, there must be a medium or substance in which the energy exists after it leaves one body and before it reaches the other… I think it ought to occupy a prominent place in our investigations, and that we ought to endeavour to construct a mental representation of all the details of its action…’ – Dr J. Clerk Maxwell, conclusion, A Treatise on Electricity and Magnetism, 1873 edition.

‘Statistical Uncertainty. This is the kind of uncertainty that pertains to fluctuation phenomena and random variables. It is the uncertainty associated with ‘honest’ gambling devices…

‘Real Uncertainty. This is the uncertainty that arises from the fact that people believe different assumptions…’ – H. Kahn & I. Mann, Techniques of systems analysis, RAND, RM-1829-1, 1957.

Let us deal with the physical interpretation of the periodic table using quantum mechanics very quickly. Niels Bohr in 1913 came up with an orbit quantum number, n, which comes from his theory and takes positive integer values (1 for the first or K shell, 2 for the second or L shell, etc.). In 1915, Arnold Sommerfeld (of 137-number fame) introduced an elliptical-shape orbit number, l, which can take values of n − 1, n − 2, n − 3, … 0. Back in 1896 Pieter Zeeman introduced orbital direction magnetism, which gives a quantum number m with possible values l, l − 1, l − 2, …, 0, …, −(l − 2), −(l − 1), −l. Finally, in 1925 George Uhlenbeck and Samuel Goudsmit introduced the electron’s magnetic spin direction effect, s, which can only take values of +1/2 and −1/2. (Back in 1894, Zeeman had observed the phenomenon of spectral lines splitting when the atoms emitting the light are in a strong magnetic field, which was later explained by the fact of the spin of the electron. Other experiments confirm electron spin. The actual spin is in units of h/(2π), so the actual amounts of angular spin are +½h/(2π) and −½h/(2π).) To get the periodic table we simply work out a table of consistent unique sets of quantum numbers. The first shell then has n, l, m, and s values of 1, 0, 0, +1/2 and 1, 0, 0, −1/2. The fact that each electron has a different set of quantum numbers is called the ‘Pauli exclusion principle’ as it prevents electrons duplicating one another. (Proposed by Wolfgang Pauli in 1925; note the exclusion principle only applies to fermions with half-integral spin like the electron, and does not apply to bosons, which all have integer spin, like light photons and gravitons. While you use Fermi-Dirac statistics for fermions, you have to use Bose-Einstein statistics for bosons, on account of spin. Non-spinning particles, like gas molecules, obey Maxwell-Boltzmann statistics.) Hence, the first shell can take only 2 electrons before it is full. (It is physically due to a combination of magnetic and electric force effects from the electron, although the mechanism must be officially ignored by order of the Copenhagen Interpretation ‘Witchfinder General’, like the issue of the electron spin speed.)

For the second shell, we find it can take 8 electrons, with l = 0 for the first two (an elliptical subshell, if we ignore the chaos effect of wave interactions between multiple electrons), and l = 1 for the other 6.
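The shell capacities just described (2 for the first shell, 8 for the second) can be checked mechanically by enumerating every consistent set of quantum numbers. A minimal sketch in plain Python (the function name is mine, purely illustrative):

```python
from fractions import Fraction

def shell_states(n):
    """Enumerate all consistent quantum-number sets (n, l, m, s) for shell n."""
    states = []
    for l in range(n):                      # l = 0, 1, ..., n - 1
        for m in range(-l, l + 1):          # m = -l, ..., 0, ..., +l
            for s in (Fraction(1, 2), Fraction(-1, 2)):  # spin up / spin down
                states.append((n, l, m, s))
    return states

# Pauli exclusion: each electron takes one unique set, so shell capacity
# is just the number of sets, which works out to 2n^2 electrons in shell n.
for n in (1, 2, 3):
    print(n, len(shell_states(n)))  # prints 1 2, then 2 8, then 3 18
```

The count reproduces the periodic-table pattern directly: 2, 8, 18 electrons for the first three shells.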

Experimentally we find that elements with closed full shells of electrons, i.e., a total of 2 or 8 electrons in these shells, are very stable. Hence, helium (2 electrons) and neon (2 electrons in the first shell and 8 electrons filling the second shell) will not burn. Now read the horses*** from ‘expert’ Sir James Jeans:

‘The universe is built so as to operate according to certain laws. As a consequence of these laws atoms having certain definite numbers of electrons, namely 6, 26 to 28, and 83 to 92, have certain properties, which show themselves in the phenomena of life, magnetism and radioactivity respectively … the Great Architect of the Universe now begins to appear as a pure mathematician.’ – Sir James Jeans, MA, DSc, ScD, LLD, FRS, The Mysterious Universe, Penguin, 1938, pp. 20 and 167.

One point I’m making here, aside from the simplicity underlying the use of quantum mechanics, is that it has a physical interpretation for each aspect (it is also possible to predict the quantum numbers from abstract mathematical ‘law’ theory, which is not mechanistic, so is not enlightening). Quantum mechanics is only statistically exact if you have one electron, i.e., a single hydrogen atom. As soon as you get to a nucleus plus two or more electrons, you have to use mathematical approximations or computer calculations to estimate results, which are never exact. This problem is not the statistical problem (uncertainty principle), but a mathematical problem in applying it exactly to difficult situations. For example, if you estimate a 2% probability with the simple theory, it is exact providing the input data is reliable. But if you have 2 or more electrons, the calculations estimating where the electron will be will have an uncertainty, so you might have 2% +/- a factor of 2, or something, depending on how much computer power and skill you use to do the approximate solution.

Derivation of the Schroedinger equation (an extension of a Wireless World heresy of the late Dr W. A. Scott-Murray), a clearer alternative to Bohm’s ‘hidden variables’ work…

The equation for waves in a three-dimensional space, extrapolated from the equation for waves in gases:

∇²Ψ = −Ψ(2πf/v)²

where Ψ is the wave amplitude. Notice that this sort of wave equation is used to model waves in particle-based situations, i.e., waves in situations where there are particles of gas (gas molecules, sound waves). So we have particle-wave duality resolved by the fact that any wave equation is a statistical model for the orderly/chaotic group behaviour of particles (3+ body Poincare chaos). The term ∇²Ψ is just a shorthand (the ‘Laplacian operator’) for the sum of second-order differentials: ∇²Ψ = d²Ψx/dx² + d²Ψy/dy² + d²Ψz/dz². (Another popular use for the Laplacian operator is heat diffusion when convection doesn’t happen – such as in solids – since the rate of change of temperature is dT/dt = (k/Cv)·∇²T, where k is thermal conductivity and Cv is specific heat capacity measured under fixed volume.) The symbol f is the frequency of the wave, while v is the velocity of the wave. Now 2π is in there because f/v has units of reciprocal metres, so 2π is needed to make these ‘reciprocal metres’ into ‘reciprocal wavelength’, i.e. the wavenumber 2π/λ. Get it?
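As an aside on the heat-diffusion remark above, the discrete Laplacian is easy to see in one dimension. A rough sketch of dT/dt = (k/Cv)·∇²T solved by explicit finite differences; the material constants and grid values here are purely illustrative, not taken from the text:

```python
# Explicit finite-difference solution of dT/dt = D * d2T/dx2 in 1-D,
# where D = k/Cv is the diffusivity. All values below are illustrative.
D = 1.0e-4          # diffusivity, m^2/s (assumed)
dx, dt = 0.01, 0.1  # grid spacing (m) and time step (s); dt < dx^2/(2D) for stability
n_cells = 50

T = [0.0] * n_cells
T[0] = 100.0        # hold the left end hot; right end stays cold

for _ in range(2000):
    T_new = T[:]
    for i in range(1, n_cells - 1):
        # discrete 1-D Laplacian: (T[i-1] - 2*T[i] + T[i+1]) / dx^2
        T_new[i] = T[i] + D * dt * (T[i - 1] - 2 * T[i] + T[i + 1]) / dx**2
    T_new[0] = 100.0
    T[:] = T_new

# Heat has diffused in from the hot end: temperature falls monotonically with x.
print(round(T[1], 1), round(T[10], 1))
```

The same second-difference operator, applied to Ψ instead of T, is exactly the ∇²Ψ term in the wave equation above.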

All waves obey the wave axiom, v = λf, where λ is wavelength. Hence:

∇²Ψ = −Ψ(2π/λ)².

Louis de Broglie, who invented ‘wave-particle duality’ (as waves in the physical, real ether, but that part was suppressed), gave us the de Broglie equation for momentum: p = mc = (E/c²)c = [(hc/λ)/c²]c = h/λ. Hence:

∇²Ψ = −Ψ(2πmv/h)².

Isaac Newton’s theory suggests the equation for kinetic energy E = ½mv² (although the term ‘kinetic theory’ was I think first used in an article published in a magazine edited by Charles Dickens, a lot later). Hence, v² = 2E/m. So we obtain:

∇²Ψ = −8ΨmE(π/h)².

Finally, the total energy, W, for an electron is in part electromagnetic energy U, and in part kinetic energy E (already incorporated). Thus, W = U + E. This rearranges using very basic algebra to give E = W – U. So now we have:

∇²Ψ = −8Ψm(W − U)·(π/h)².

This is Schroedinger’s basic equation for the atomic electron! The electromagnetic energy U = −qe²/(4πεR), where qe is the charge of the electron, and ε is the electric permittivity of the spacetime vacuum or ether. By extension of Pythagoras’ theorem into 3 dimensions, R = (x² + y² + z²)^½. So now we understand how to derive Schroedinger’s basic wave equation, and as Dr Scott-Murray pointed out in his Wireless World series of the early 1980s, it’s child’s play. It would be better to teach this to primary school kids to illustrate the value of elementary algebra, than to hide it as heresy or unorthodox, contrary to Bohr’s mindset!
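The chain of substitutions above is easy to spot-check numerically: for any nonrelativistic electron, (2πf/v)², (2π/λ)², (2πmv/h)² and 8mE(π/h)² must all come out to the same squared wavenumber. A quick sanity check with standard SI constants (the sample speed is chosen arbitrarily):

```python
import math

# Sample nonrelativistic electron (SI units; standard constant values)
h = 6.626e-34        # Planck's constant, J s
m = 9.109e-31        # electron mass, kg
v = 2.2e6            # speed, m/s (roughly the Bohr-orbit speed)

lam = h / (m * v)            # de Broglie: lambda = h / (m v)
f = v / lam                  # wave axiom: v = lambda * f
E = 0.5 * m * v**2           # Newtonian kinetic energy

k2_a = (2 * math.pi * f / v) ** 2        # (2 pi f / v)^2
k2_b = (2 * math.pi / lam) ** 2          # (2 pi / lambda)^2
k2_c = (2 * math.pi * m * v / h) ** 2    # (2 pi m v / h)^2
k2_d = 8 * m * E * (math.pi / h) ** 2    # 8 m E (pi / h)^2

# All four expressions give the same squared wavenumber:
print(k2_a, k2_b, k2_c, k2_d)
```

Each printed value agrees with the others to rounding error, confirming the algebraic steps of the derivation.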

Let us now examine the work of Erwin Schroedinger and Max Born. Since the nucleus of hydrogen is 1836 times as massive as the electron, it can in many cases be treated as at rest, with the electron zooming around it. Schroedinger in 1926 took the concept of particle-wave duality and found an equation that could predict the probability of an electron being found within any distance of the nucleus. The full theory includes, of course, electron spin effects and the other quantum numbers, and so the mathematics at least looks a lot harder to understand than the underlying physical reality that gives rise to it.

First, Schroedinger could not calculate anything with his equation because he had no idea what the hell he was doing with the wavefunction Ψ. Max Born naively, perhaps, suggested it is like water waves, where it is the amplitude of the wave that needs to be squared to get the energy of the wave, and thus a measure of the mass-energy to be found within a given space. (Likewise, the ‘electric field strength’ (volts/metre) from a radio transmitter mast falls off generally as the inverse of distance, although the energy intensity (watts per square metre) falls off as the inverse-square law of distance.)

Hence, by Born’s conjecture, the energy per unit volume of the electron around the atom is E ~ Ψ². If the volume is a small, 3-dimensional cube in space, dx·dy·dz in volume, then the proportion of (or probability of finding) the electron within that volume will thus be: dx·dy·dz·Ψ²/[∫∫∫Ψ² dx·dy·dz]. Here, ∫ is the integral from 0 to infinity. Thus, the relative likelihood of finding the electron in a thin shell between radii of r and a will be the integral of the product of surface area (4πr²) and Ψ², over the range from r to a. The number we get from this integral is converted into an absolute probability of finding the electron between radii r and a by normalising it: in other words, dividing it by the similarly calculated relative probability of finding the electron anywhere between radii of 0 and infinity. Hence we can understand what we are doing for a hydrogen atom.
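As a concrete instance of this normalisation recipe, take the hydrogen ground state, where the standard textbook wavefunction shape (assumed here, not derived in the text) falls off as e^(−r/a0), with a0 the Bohr radius:

```python
import math

a0 = 5.29e-11  # Bohr radius, m

def psi(r):
    # Hydrogen 1s wavefunction shape (unnormalised): e^(-r/a0)
    return math.exp(-r / a0)

def shell_integral(r_lo, r_hi, steps=20000):
    """Integrate 4*pi*r^2 * psi^2 from r_lo to r_hi (trapezoid rule)."""
    dr = (r_hi - r_lo) / steps
    total = 0.0
    for i in range(steps + 1):
        r = r_lo + i * dr
        w = 0.5 if i in (0, steps) else 1.0
        total += w * 4 * math.pi * r * r * psi(r) ** 2 * dr
    return total

# Normalise: divide the shell integral by the integral over all radii
# (20 * a0 is effectively infinity here, since psi^2 ~ e^(-2r/a0)).
norm = shell_integral(0, 20 * a0)
p_inside_a0 = shell_integral(0, a0) / norm
print(round(p_inside_a0, 3))  # probability the electron lies within one Bohr radius
```

The trapezoid integral of 4πr²Ψ², divided by the same integral over all radii, gives the absolute probability of finding the electron inside one Bohr radius (about 0.32), exactly the normalisation described above.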

The version of Schroedinger’s wave equation above is really a description of the time-averaged (or time-independent) chaotic motion of the electron, which is why it gives a probability of finding the electron in a given zone, not an exact location for the electron. There is also a time-dependent version of the Schroedinger wave equation, which can be used to obfuscate rather well. But let’s have a go anyhow. To find the time-dependent version, we need to treat the electrostatic energy U as varying in time. If U = hf, from de Broglie’s use of Planck’s equation, and because the electron obeys the wave equation, its time-dependent frequency is: f² = −(2πΨ)^-2·(dΨ/dt)², where f² = U²/h². Hence, U² = −h²(2πΨ)^-2·(dΨ/dt)². To find U we need to remember from basic algebra that we will lose possible mathematical solutions unless we allow for the fact that U may be negative. (For example, if I think of a number, square it, and then get 4, that does not mean I thought of the number 2: I could have started with the number −2.) So we need to introduce i = √(−1). Hence we get the solution: U = ih(2πΨ)^-1·(dΨ/dt). Remembering E = W − U, we get the time-dependent Schroedinger equation.

Let us now examine how fast the electrons go in their orbits in the atom, neglecting spin speed. Assuming simple circular motion to begin with, the inertial ‘outward’ force on the electron is F = ma = mv²/R, which is balanced by the electric ‘attractive’ inward force of F = (qe/R)²/(4πε). Hence, v = ½qe/(πεRm)^½.
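Putting standard SI constants and the Bohr radius into v = ½qe/(πεRm)^½ recovers the familiar result that the innermost hydrogen electron moves at roughly c/137:

```python
import math

qe = 1.602e-19    # electron charge, C
eps0 = 8.854e-12  # vacuum permittivity, F/m
m = 9.109e-31     # electron mass, kg
R = 5.29e-11      # Bohr radius, m (standard value, assumed)
c = 2.998e8       # speed of light, m/s

# Balance m v^2 / R = qe^2 / (4 pi eps0 R^2)  =>  v = qe / (2 sqrt(pi eps0 R m))
v = 0.5 * qe / math.sqrt(math.pi * eps0 * R * m)

print(round(v / 1e6, 2), "x 10^6 m/s")  # about 2.19
print(round(c / v))                     # about 137
```

The orbital speed comes out at about 2.19 × 10^6 m/s, i.e. the speed of light divided by the 137 number that recurs throughout this discussion.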

Now for Werner Heisenberg’s ‘uncertainty principle’ of 1927. This is mathematically sound in the sense that the observer always disturbs the signals he observes. If I measure my car tyre pressure, some air leaks out, reducing the pressure. If you have a small charged capacitor and try to measure the voltage of the energy stored in it with an old-fashioned analogue voltmeter, you will notice that the voltmeter itself drains the energy in the capacitor pretty quickly. A digital meter contains an amplifier, so the effect is less pronounced, but it is still there. A Geiger counter held in a fallout area absorbs some of the gamma radiation it is trying to measure, reducing the reading, as does the presence of the body of the person using it. A blind man searching for a golf ball by swinging a stick around will tend to disturb what he finds. When he feels and hears the click of the impact of his stick hitting the golf ball, he knows the ball is no longer where it was when he detected it. If he prevents this by not moving the stick, he never finds anything. So it is a reality that the observer always tends to disturb the evidence by the very process of observing the evidence. If you even observe a photograph, the light falling on the photograph very slightly fades the colours. With something as tiny as an electron, this effect is pretty severe. But that does not mean that you have to make up metaphysics to stagnate physics for all time, as Bohr and Heisenberg did when they went crazy. Really, Heisenberg’s law has a simple causal meaning to it, as I’ve just explained. If I toss a coin and don’t show you the result, do you assume that the coin is in a limbo, indeterminate state between two parallel universes, in one of which it is heads and in the other of which it landed tails? 
(If you believe that, then maybe you should have yourself checked into a mental asylum where you can write your filthy equations all over the walls with a crayon held between your big ‘TOEs’ or your ‘theories of everything’.)

For the present, let’s begin right back before QFT, in other words with the classic theory back in 1873:

Fiat Lux: ‘Let there be Light’

Michael Faraday, Thoughts on Ray Vibrations, 1846. Prediction of light without numbers, by the son of a blacksmith who became a bookseller’s delivery boy aged 13 and invented the electric motor, the generator, etc.

James Clerk Maxwell, A Dynamical Theory of the Electromagnetic Field, 1865. Fiddles with numbers.

I notice that the man (J.C. Maxwell) most often attributed with Fiat Lux wrote in his final (1873) edition of his book A Treatise on Electricity and Magnetism, Article 110:

‘... we have made only one step in the theory of the action of the medium. We have supposed it to be in a state of stress, but we have not in any way accounted for this stress, or explained how it is maintained...’

In Article 111, he admits further confusion and ignorance:

‘I have not been able to make the next step, namely, to account by mechanical considerations for these stresses in the dielectric [spacetime fabric]... When induction is transmitted through a dielectric, there is in the first place a displacement of electricity in the direction of the induction...’

First, Maxwell admits he doesn’t know what he’s talking about in the context of ‘displacement current’. Second, he talks more! Now Feynman has something about this in his lectures about light and EM, where he says idler wheels and gear cogs are replaced by equations. So let’s check out Maxwell's equations.

One source is A.F. Chalmers’ article, ‘Maxwell and the Displacement Current’ (Physics Education, vol. 10, 1975, pp. 45-9). Chalmers states that Orwell’s novel 1984 helps to illustrate how the tale was fabricated:

‘… history was constantly rewritten in such a way that it invariably appeared consistent with the reigning ideology.’

Maxwell deliberately tried to fix his original calculation in order to obtain the anticipated value for the speed of light, as Part 3 of his paper, On Physical Lines of Force (January 1862), proves; Chalmers explains:

‘Maxwell’s derivation contains an error, due to a faulty application of elasticity theory. If this error is corrected, we find that Maxwell’s model in fact yields a velocity of propagation in the electromagnetic medium which is a factor of √2 smaller than the velocity of light.’

It took three years for Maxwell to finally force-fit his ‘displacement current’ theory to take the form which allows it to give the already-known speed of light without the 41% error. Chalmers noted: ‘the change was not explicitly acknowledged by Maxwell.’

Weber, not Maxwell, was the first to notice that, by dimensional analysis (which Maxwell popularised), 1/(square root of product of magnetic force permeability and electric force permittivity) = light speed.

Maxwell, after a lot of failures (like Kepler’s trial-and-error road to the planetary laws), ended up with a cyclical light model in which a changing electric field creates a magnetic field, which creates an electric field, and so on. Sadly, his picture of a light ray in Article 791, showing in-phase electric and magnetic fields at right angles to one another, has been accused of causing confusion and of being incompatible with his light-wave theory (the illustration is still widely used today!).

In empty vacuum, the divergences of magnetic and electric field are zero as there are no real charges.

Maxwell’s equation for Faraday’s law: dE/dx = -dB/dt

Maxwell’s equation for displacement current: −dB/dx = με·dE/dt

where μ is the magnetic permeability of space, ε is the electric permittivity of space, E is electric field strength, B is magnetic field strength. To solve these simultaneously, differentiate both:

d²E/dx² = −d²B/(dx·dt)

−d²B/(dx·dt) = με·d²E/dt²

Since d²B/(dx·dt) occurs in each of these equations, they are equivalent, so Maxwell got (dx/dt)² = 1/(με), so c = 1/√(με) = 300,000 km/s. Eureka! This is the lie, the alleged unification of electricity and magnetism via light. I think ‘Fiat Lux’ is a good description of Maxwell’s belief in this ‘unification’. Maxwell arrogantly and condescendingly tells us in his Treatise that ‘The only use made of light’ in finding μ and ε was to ‘see the instrument.’ Sadly it was only in 1885 that J.H. Poynting and Oliver Heaviside independently discovered the ‘Poynting-Heaviside vector’ (Phil. Trans. 1885, p277). Ivor Catt (http://www.ivorcatt.org/) has plenty of material on Heaviside’s ‘energy current’ light-speed electricity mechanism, as an alternative to the more popular ~1 mm/s ‘electric current’. The particle-wave problem of electricity was suppressed by mathematical obfuscation, and ignorant officialdom still ignores the solution which Catt’s work ultimately implies (that the electron core is simply a light-speed, gravitationally trapped TEM wave). We can see why Maxwell’s errors persisted:
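Weber’s dimensional-analysis relation, c = 1/√(με), is easy to verify with the measured constants (standard SI values assumed):

```python
import math

mu0 = 4 * math.pi * 1e-7   # magnetic permeability of space, H/m
eps0 = 8.854187817e-12     # electric permittivity of space, F/m

# c = 1 / sqrt(mu0 * eps0), the relation Weber noticed by dimensional analysis
c = 1 / math.sqrt(mu0 * eps0)
print(round(c))  # about 2.998e8 metres per second
```

Two purely electrical and magnetic laboratory constants combine to give the measured speed of light, which is the whole point of the ‘Fiat Lux’ argument above.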

‘Maxwell discussed … in terms of a model in which the vacuum was like an elastic … what counts are the equations themselves and not the model used to get them. We may only question whether the equations are true or false … If we take away the model he used to build it, Maxwell’s beautiful edifice stands…’ – Richard P. Feynman, Feynman Lectures on Physics, v3, c18, p2.

‘The creative period passed away … The past became sacred, and all that it had produced, good and bad, was reverenced alike. This kind of idolatry invariably springs up in that interval of languor and reaction which succeeds an epoch of production. In the mind-history of every land there is a time when slavish imitation is inculcated as a duty, and novelty regarded as a crime… The result will easily be guessed. Egypt stood still… Conventionality was admired, then enforced. The development of the mind was arrested; it was forbidden to do any new thing.’ – W.W. Reade, The Martyrdom of Man, 1872, c1, War.

‘What they now care about, as physicists, is (a) mastery of the mathematical formalism, i.e., of the instrument, and (b) its applications; and they care for nothing else.’ – Karl R. Popper, Conjectures and Refutations, R.K.P., 1969, p100.

‘The notion that light possesses gravitating mass, and that therefore a ray of light from a star will be deflected when it passes near the sun, was far from being a new one, for it had been put forward in 1801 by J. Soldner…’ – Sir Edmund Whittaker, A History of the Theories of Aether and Electricity: Modern Theories, 1900-1926, Nelson and Sons, London, 1953, p40.

It doesn't take genius for me to see that general relativity deals with absolute acceleration, while special relativity doesn't, so special relativity is incomplete and therefore wrong if misused. Some of the crackpots have some useful ideas scattered in their papers, which is exactly the case with Kepler.

Kepler thought magnetism held the earth in orbit around the sun, and was wrong. He also earned a living by astrology and his mother was prosecuted on a charge of witchcraft. But instead of calling Kepler a complete 100% crackpot, Newton had the wit to focus on what Kepler had done right, the three laws of planetary motion, and used them to get the correct law of gravity for low speeds and weak fields (the limit in general relativity). I don't think anyone will go down as a good person for calling misguided people crackpots. The harder task is making sense of it, not blacklisting people because they make some errors or don't have the benefit of a good education! In fact, there are not millions of crackpots with testable mechanisms that seem to be consistent with major physics. The number is about 5, and includes D.R. Lunsford and Tony Smith, both censored off arXiv.org. Ivor Catt has a little useful material on electromagnetism from experiments, but mixes it with a lot of political diatribe. Basically Catt's experimental work is an extension of Oliver Heaviside's 1893 work on the light speed model of electric energy transfer. Walter Babin has some correct ideas too, in particular the idea that there is a superforce which is basically electrical. However, he has not made as much with this idea as he could. Because the core electric force of the electron is 137 times Coulomb's observed electric force for an electron, unification should be seen as the penetration of the virtual polarised charge shield, which reduces the core strength by the factor 1/137.

Darwin was trying to assert a simple model which was far from new. All Darwin had was 'technical' evidence. It was the sum of the evidence, added together, which made the simplicity convincing. Aristotle was of course a theorist but he did not dig deeply enough. In his work 'Physics' of 350 BC, Aristotle argued using logic. I don't think Darwin would like to be compared to Aristotle, or even Maxwell for that matter. Faraday would be a better alternative, because experiments and observations were more in Darwin's sphere than fiddling with speculative models that turned out to be false (elastic aether and mechanical gear cogs and idler wheel aether, in Maxwell's theory). Darwin would be more interested in unifying a superforce using all the available evidence, than guessing.

The unshielded electron core charge, Penrose speculates in 'Road to Reality', is 11.7 times the observed Coulomb force. His guess is that because the square root of 137.0... is used in quantum mechanics, that is the factor involved. Since the Heisenberg uncertainty formula d = hc/(2πE) works for d and E as realities in calculating the ranges of forces carried by gauge bosons of energy E, we can introduce work energy as E = Fd, which gives us the electron core (unshielded) force law: F = hc/(2πd²). This is 137.0... times Coulomb. Therefore, Penrose's guess is wrong. Penrose has a nice heuristic illustration on page 677 of his tome, The Road to Reality. The illustration shows the electron core with the polarised sea of virtual charges, so that the virtual positrons are attracted close to the real electron core, while the virtual electrons are repelled further from the real core: ‘Fig. 26.10. Vacuum polarisation: the physical basis of charge renormalisation. The electron [core] E induces a slight charge separation in virtual electron-positron pairs momentarily created out of the vacuum. This somewhat reduces E’s effective charge [seen at a long distance] from its bare value – unfortunately by an infinite factor, according to direct calculation.’ Penrose gets it a bit wrong on page 678 where he says ‘the electron’s measured dressed charge is about 0.0854 [i.e., 1/square root of 137], and it is tempting to imagine that the bare value should be 1, say.’

In fact, the bare value in these units is 11.7, not 1, because the ratio of bare to veiled charge is 137, as the bare core electric force is hc/(2πx²), proved on my home page, which is 137 times Coulomb. Yet the bare core charge is not completely ‘unobservable’, since in high energy collisions a substantial reduction of the 137 factor has been experimentally observed (Koltick, Physical Review Letters, 1997), showing a partial penetration of the polarised vacuum veil. The bare core of the electron, with a charge 137 times the vacuum-shielded one, is a reality. At early times in the big bang, collisions were energetic enough to penetrate through the vacuum to bare cores, so the force strengths unified. So we can use the heuristic approach to understand how strongly the polarised vacuum protects the electron (or other fundamental particle) core force strength; the numbers which are given for unification energy by quantum field theory abstract calculations. (You can’t dismiss the electron core model as being not directly observable unless you want to do the same for atomic nuclei!)
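The 137 ratio asserted here between the bare-core force hc/(2πd²) and Coulomb’s law e²/(4πε₀d²) is numerically just the inverse fine-structure constant, which can be checked directly (the d² cancels; constants are standard SI values):

```python
import math

h = 6.62607e-34      # Planck's constant, J s
c = 2.99792458e8     # speed of light, m/s
e = 1.602177e-19     # electron charge, C
eps0 = 8.854188e-12  # vacuum permittivity, F/m

# Ratio of F_core = hc/(2 pi d^2) to F_Coulomb = e^2/(4 pi eps0 d^2);
# the d^2 cancels, leaving 2 * eps0 * h * c / e^2 = 1/alpha.
ratio = (h * c / (2 * math.pi)) / (e ** 2 / (4 * math.pi * eps0))
print(round(ratio, 3))  # about 137.04, the inverse fine-structure constant
```

Whatever one makes of the polarised-vacuum interpretation, the arithmetic of the 137 factor itself is not in dispute: it is 2ε₀hc/e².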

The physical mechanism does give rise to a lot of mathematics, but not the same type of useless mathematics that ‘string theory’ generates. Because ‘string theory’ falsely is worshipped as a religion, naturally the productive facts are ridiculed. The accurate predictions include the strengths of gravity, electroweak and strong nuclear forces, as well as solutions to the problems of cosmology and the correct ratios of some fundamental particles. Feynman correctly calculates the huge ratio of gravity attraction force to the repulsive force of electromagnetism for two electrons as 1/(4.17 × 10^42). He then says: ‘It is very difficult to find an equation for which such a fantastic number is a natural root. Other possibilities have been thought of; one is to relate it to the age of the universe.’ He then says that the ratio of the time taken by light to cross the universe to the time taken by light to cross a proton is about the same huge factor. After this, he chucks out the idea because gravity would vary with time, and the sun’s radiating power varies as the sixth power of the gravity constant G. The error here is that there is no mechanism for Feynman’s idea about the times for light to cross things. Where you get a mechanism is for the statistical addition of electric charge (virtual photons cause electric force) exchanged between similar charges distributed around the universe. This summation does not work in straight lines, as equal numbers of positive and negative charges will be found along any straight line. So only a mathematical drunkard’s walk, where the net result is the charge of one particle times the square root of the number of particles in the universe, is applicable:
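The drunkard’s-walk summation invoked here rests on the standard statistical result that N random ± contributions add to a net of roughly √N rather than N. A small simulation of that result (step counts and trial counts are illustrative):

```python
import math
import random

random.seed(1)

def rms_net(n_steps, trials=2000):
    """RMS net displacement of a random +/-1 walk of n_steps steps."""
    total = 0.0
    for _ in range(trials):
        net = sum(random.choice((-1, 1)) for _ in range(n_steps))
        total += net * net
    return math.sqrt(total / trials)

for n in (100, 400, 1600):
    # Theoretical RMS values are sqrt(n): 10, 20, 40 for these step counts
    print(n, round(rms_net(n), 1))
```

The simulated RMS net result tracks √N, which is the statistical fact the charge-summation argument leans on.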

This means that the electric force is equal to gravity times the square root of the number of particles. Since the number of particles is effectively constant, the electric force varies with the gravity force! This disproves Feynman: suppose you double the gravity constant. The sun is then more compressed, but does this mean it releases 2^6 = 64 times more power? No! It releases the same. What happens is that the electric force between protons – which is called the Coulomb barrier – increases in the same way as the gravity compression. So the rise in the force of attraction (gravity) is offset by the rise in the Coulomb repulsion (electric force), keeping the proton fusion rate stable! However, Feynman also points out another effect, that the variation in gravity will also alter the size of the Earth’s orbit around the sun, so the Earth will get a bit hotter due to the distance effect if G rises, although he admits: ‘such arguments as the one we have just given are not very convincing, and the subject is not completely closed.’ Now the smoothness of the cosmic background radiation is explained by the lower value of G in the past (see discussion of the major predictions, further on). Gravity constant G is directly proportional to the age of the universe, t. Let’s see how far we get playing this game (I’m not really interested in it, but it may help to test the theory even more rigorously). The gravity force constant G and thus t are proportional to the electric force, so that if charges are constant, the electric permittivity varies as ‘1/t’, while the magnetic permeability varies directly with t. By Weber and Maxwell, the speed of light is c = 1/(square root of the product of the permittivity and the permeability). Hence, c is proportional to 1/[square root of {(1/t)·(t)}] = constant. Thus, the speed of light does not vary in any way with the age of the universe. 
The strong nuclear force strength, basically F = hc/(2πd²) at short distances, varies like the gravity and electroweak forces; this implies that h is proportional to G and thus also to t.

Many ‘tests’ for variations in G assume that h is a constant. Since this is not correct, and G is proportional to h, the interpretations of such ‘tests’ are total nonsense, much as the Michelson-Morley experiment does not disprove the existence of the sea of gauge bosons that cause fundamental forces! At some stage this model will need to be applied rigorously to very short times after the big bang by computer modelling. For such times, the force ratios vary not merely because the particles of matter have sufficient energy to smash through the shielding veils of polarised virtual particles which surround the cores of particles, but also because the number of fundamental particles was increasing significantly at early times! Thus, soon after the big bang, the gravity and electromagnetic forces would have been similar. The strong nuclear force, because it is identical in strength to the unshielded electroweak force, would also have been the same strength because the energy of the particles would break right through the polarised shields. Hence, this is a unified force theory that really works! Nature is beautifully simple after all. Lunsford’s argument that gravity is a residual of the other forces is right.

The whole basis of the energy-time version of the uncertainty principle is going to be causal (random interactions between the gauge boson radiation, which constitutes the spacetime fabric).

Heuristic explanations of the QFT are required to further the basic understanding of modern physics. For example, Heisenberg’s minimum uncertainty (based on the impossible gamma ray microscope thought experiment): pd = h/(2π), where p is the uncertainty in momentum and d is the uncertainty in distance. The product pd is physically equivalent to Et, where E is the uncertainty in energy and t is the uncertainty in time. Since, for light speed, d = ct, we obtain: d = hc/(2πE). This is the formula the experts generally use to relate the range of the force, d, to the energy of the gauge boson, E. Notice that both d and E are really uncertainties in distance and energy, rather than real distance and energy, but the formula works for real distance and energy, because we are dealing with a definite ratio between the two. Hence for the 80 GeV mass-energy W and Z intermediate vector bosons, the force range is about 2.5 × 10^-18 m. Since the formula d = hc/(2πE) therefore works for d and E as realities, we can introduce work energy as E = Fd, which gives us the strong nuclear force law: F = hc/(2πd²). This inverse-square law is 137 times Coulomb’s law of electromagnetism.
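The W/Z range quoted above follows directly from d = hc/(2πE), with 80 GeV converted to joules (standard constants assumed):

```python
import math

h = 6.626e-34       # Planck's constant, J s
c = 2.998e8         # speed of light, m/s
eV = 1.602e-19      # joules per electron-volt

E = 80e9 * eV       # 80 GeV mass-energy of the W/Z bosons, in joules
d = h * c / (2 * math.pi * E)

print(d)  # range of the weak force, in metres (a few 10^-18 m)
```

The result is about 2.5 × 10^-18 m, the characteristic short range of the weak interaction.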

History of gravity mechanism

Gravity is the effect of inward directed graviton radiation pressure of the inflow of the fabric of spacetime inwards to fill the volume left empty by the outward acceleration of galaxies in the big bang. LeSage-Feynman shadowing of the spacetime fabric – which is a light velocity radiation on the 4 dimensional spacetime we observe – pushes us downward. You can’t stop space with an umbrella, as atoms are mainly void through which space pressure propagates!

Newton’s 3rd empirical law states that outward force has an equal and opposite reaction (inward or implosive force). The bomb dropped on Nagasaki used TNT around plutonium: an ‘implosion’ bomb. Half the force acted inward, an implosion that compressed the plutonium. The inward or implosion force of the big bang is apparently physical space pressure. Fundamental particles (electrons, quarks) behave as small black holes which shield space pressure. They are therefore pressed from all sides equally except the shielded side, so they are pushed towards masses. The proof (below) predicts gravity. A calculation using black hole electrons and quarks gives identical results.

This inward pressure makes the radius of the earth contract by a distance of 1.5 mm. This was predicted by Einstein’s general relativity, which Einstein in 1920 at Leyden University said proved that: ‘according to the general theory of relativity, space without ether [physical fabric] is unthinkable.’ The radius contraction, discussed further down this page, is GM/(3c2). (Professor Feynman makes a confused mess of it in his relevant volume of Lectures, c42 p6, where he gives his equation 42.3 correctly for excess radius being equal to predicted radius minus measured radius, but then on the same page in the text says ‘… actual radius exceeded the predicted radius …’ Talking about ‘curvature’ when dealing with radii is not helpful and probably caused the confusion. The use of Minkowski light ray diagrams and string ‘theory’ to obfuscate the cause of gravity with talk of ‘curved space’ stems from the false model of space as the surface of a waterbed, in which heavy objects roll towards one another. This model, when extended to real volumetric space, shows that space has a pressurised fabric which is shielded by mass, causing gravity.) But despite this insight, Einstein unfortunately overlooked the Hubble acceleration problem and failed to make the link with the big bang, the mechanism of gravity, which is proved below experimentally with step by step mathematics. The gravitational contraction is radial only, not affecting the circumference, so there is a difference between the true radius and that calculated by Euclidean geometry. Thus you can describe this as curved space using non-Euclidean geometry, or you can seek the physical basis of the pressure in the surrounding universe.
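The quoted contraction GM/(3c^2) can be evaluated directly for the Earth; a quick sketch using standard values:

```python
# Evaluating the claimed Earth radius contraction GM/(3*c^2)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of the Earth, kg
c = 2.998e8          # speed of light, m/s

contraction = G * M / (3 * c**2)
print(f"Radial contraction: {contraction*1000:.2f} mm")   # ~1.5 mm
```

The result agrees with the 1.5 mm figure stated above.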

Dirac’s equation is a relativistic version of Schroedinger’s time-dependent equation. Schroedinger’s time dependent equation is a general case of Maxwell’s ‘displacement current’ equation. Let’s prove this.

First, Maxwell’s displacement current is i = dD/dt = ε·dE/dt. In a charging capacitor, the displacement current falls as a function of time as the capacitor charges up, so:

displacement current i = -ε·d(v/x)/dt, [equation 1]

where E has been replaced by the gradient of the voltage along the ramp of the step of energy current which is entering the capacitor (illustration above). Here x is the step width, x = ct, where t is the rise time of the step.

The voltage of the step is equal to the current step multiplied by the resistance: v = iR. Maxwell’s concept of ‘displacement current’ is to maintain Kirchhoff’s and Ohm’s laws of continuity of current in a circuit for the gap interjected by a capacitor, so by definition the ‘displacement current’ is equal to the current in the wires which is causing it.

Hence [equation 1] becomes:

i = -ε·d(iR/x)/dt = -(εR/x)·di/dt.

The solution of this equation is obtained by rearranging to yield (1/i)·di = -x·dt/(εR), integrating this so that the left hand side becomes the natural logarithm of i and the right hand side becomes -xt/(εR), and making each side a power of e to get rid of the natural logarithm on the left side:

i_t = i_0·e^(-xt/(εR)).

Now ε = 1/(cZ), where c is light velocity and Z is the impedance of the dielectric, so:

i_t = i_0·e^(-xcZt/R).

Capacitance per unit length of capacitor is defined by C = 1/(xcZ), hence:

i_t = i_0·e^(-t/(RC)),

which is the standard capacitor charging result. This physically correct proof shows that the displacement current is a result of the varying current in the capacitor, di/dt, i.e., it is proportional to the acceleration of charge, which is identical to the emission of electromagnetic radiation by accelerating charges in radio antennae. Hence the mechanism of ‘displacement current’ is energy transmission by electromagnetic radiation: Maxwell’s ‘displacement current’ i = ε·dE/dt by electromagnetic radiation induces the transient current i_t = i_0·e^(-t/(RC)). Now consider quantum field theory.
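The exponential capacitor result derived above can be sketched numerically: Euler integration of di/dt = -i/(RC) should match i_t = i_0·e^(-t/(RC)). The component values below are illustrative assumptions, not from the text:

```python
import math

# Check i(t) = i0*exp(-t/(R*C)) against Euler integration of
# di/dt = -i/(R*C).  R, C, i0 are illustrative values.
R, C, i0 = 1e3, 1e-6, 1.0     # 1 kOhm, 1 uF, 1 A initial current step
tau = R * C                    # time constant RC

dt = tau / 10000               # small time step for the integration
i, t = i0, 0.0
while t < 3 * tau:             # integrate over three time constants
    i += -i / tau * dt
    t += dt

exact = i0 * math.exp(-t / tau)
print(i, exact)                # numerical and analytic values agree closely
```

The close agreement confirms that the decaying current is just the solution of the first-order differential equation in the derivation.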

Schroedinger’s time-dependent equation is essentially saying the same thing as this electromagnetic energy mechanism of Maxwell’s ‘displacement current’: Hψ = iħ·dψ/dt. The original non-relativistic hamiltonian used in this equation was falsified by the fact that, although the total mass-energy is then conserved, the resulting Schroedinger equation permits an initially localised electron to travel faster than light! This defect was averted by the Klein-Gordon equation, which states:

ħ2·d2ψ/dt2 = [(mc2)2 + p2c2]ψ.

While this is physically correct, it is non-linear in only dealing with second-order variations of the wavefunction.

Dirac therefore made Schroedinger’s time-dependent equation (Hψ = iħ·dψ/dt) relativistic, by inserting for the hamiltonian (H) a totally new relativistic expression which differs from special relativity:

H = αpc + βmc2,

where p is the momentum operator. The values which the constants α and β can take are represented by a 4 x 4 = 16 component matrix, which is called the Dirac ‘spinor’.

The justification for Dirac’s equation is both theoretical and experimental. Firstly, it yields the Klein-Gordon equation for second-order variations of the wavefunction. Secondly, it predicts four solutions for the total energy of a particle having momentum p:

E = ±[(mc2)2 + p2c2]1/2.

Two solutions to this equation arise from the fact that momentum is directional and so can be positive or negative. The spin of an electron is ±½ħ = ±h/(4π). This explains two of the four solutions. The other two solutions are obvious when considering the case of p = 0, for then E = ±mc2.
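The four-solution structure is easy to verify numerically. A small sketch of E = ±[(mc²)² + (pc)²]^(1/2), evaluated at p = 0 where it must reduce to E = ±mc²:

```python
import math

def dirac_energies(m, p, c=2.998e8):
    """Return the two energy branches E = +/- sqrt((mc^2)^2 + (pc)^2)."""
    E = math.sqrt((m * c**2)**2 + (p * c)**2)
    return +E, -E

m_e = 9.109e-31            # electron rest mass, kg
Ep, Em = dirac_energies(m_e, p=0.0)
print(Ep, Em)              # at p = 0 this is +/- mc^2, ~8.19e-14 J
```

Each energy branch is doubled again by the two spin states, giving the four solutions discussed above.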

This equation proves the fundamental distinction between Dirac’s theory and Einstein’s special relativity. Einstein’s equation from special relativity is E = mc2. The fact that in fact E = ±mc2 proves the physical shallowness of special relativity, which results from the lack of physical mechanism in special relativity.

‘… Without well-defined Hamiltonian, I don’t see how one can address the time evolution of wave functions in QFT.’ - Eugene Stefanovich,

You can do this very nicely by grasping the mathematical and physical correspondence of the time-dependent Schroedinger equation to Maxwell’s displacement current i = dD/dt. The former is just a quantized complex version of the latter. Treat the Hamiltonian as a regular quantity, as Heaviside showed you can do for many operators. Then the solution to the time-dependent Schroedinger equation is: wavefunction at time t after initial time = initial wavefunction × exp(-iHt/ħ).
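For a single energy eigenstate, treating H as an ordinary number (in the Heaviside operator spirit suggested above) makes this solution a pure phase rotation. A minimal sketch, with an illustrative hydrogen-scale energy as an assumed value:

```python
import cmath

hbar = 1.055e-34           # reduced Planck constant, J.s
E = 13.6 * 1.602e-19       # illustrative energy (hydrogen-scale), J
psi0 = 1.0 + 0.0j          # initial wavefunction amplitude

def psi(t):
    """psi(t) = psi(0) * exp(-iEt/hbar) for a single eigenstate."""
    return psi0 * cmath.exp(-1j * E * t / hbar)

# The evolution only rotates the phase: |psi| stays 1 at all times,
# so total probability is conserved.
print(abs(psi(0.0)), abs(psi(1e-16)))
```

Probability conservation here is the quantum analogue of charge conservation in the capacitor circuit: the exponential factor redistributes, but does not create or destroy, the conserved quantity.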

This is a general analogy to the exponential capacitor charging you get from displacement current. Maxwell’s displacement current is i = dD/dt, where D is the product of electric field (v/m) and permittivity. There is electric current in conductors, caused by the variation in the electric field at the front of a logic step as it sweeps past the electrons (which can only drift at a net speed of up to about 1 m/s) at light speed. Because the current flowing into the first capacitor plate falls off exponentially as it charges up, there is radio transmission transversely, like radio from an antenna (radio power is proportional to the rate of change of current in the antenna, which can be a capacitor plate). Hence the reality of displacement current is radio transmission. As each plate of a circuit capacitor acquires equal and opposite charge simultaneously, the radio transmission from each plate is an inversion of that from the other, so the superimposed signal strength away from the capacitor is zero at all times. Hence radio losslessly performs the role of induction which Maxwell attributed to aetherial displacement current. Schroedinger’s time-dependent equation says the product of the hamiltonian and wavefunction equals iħ·d[wavefunction]/dt, which is a general analogy to Maxwell’s i = dD/dt. The Klein-Gordon equation, and also Dirac’s equation, are relativized forms of Schroedinger’s time-dependent equation.

Maxwell never got it right in the paper. He failed to recognise that any electric current involves electrons accelerating, which in turn results in electromagnetic radiation. This in turn induces a current in the opposite direction in another conductor. If the other conductor is charging as the first conductor is discharging, then the conductors swap electromagnetic energy simultaneously. There is no loss externally as electromagnetic radiation, because the superimposed electromagnetic radiation signals from each conductor exactly cancel to zero:

The magnetic force is very important: notice the Pauli exclusion principle and its role in chemistry. Every electron has spin and hence a magnetic moment, which is predicted by Dirac’s equation to within an accuracy of 0.116%. The 0.116% correction factor is given by the first vacuum (aether) coupling correction factor of the quantum field theory of Schwinger, Feynman and Tomonaga: 1/(2π × 137) = 0.00116.
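The 0.116% figure quoted here is just the first-order coupling correction α/(2π) with α ≈ 1/137, which can be checked in one line:

```python
import math

# First-order QED correction to the electron magnetic moment,
# alpha/(2*pi), quoted in the text as 0.116%.
alpha = 1 / 137.036
correction = alpha / (2 * math.pi)
print(f"{correction:.5f}")   # 0.00116, i.e. 0.116%
```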

The magnetic field of the electron is always present; it is co-eternal with the electric field. Regrettably, Ivor Catt refuses to accept this or even to listen to the evidence! It is easy to do the Stern-Gerlach experiment that gives direct evidence of the magnetic moment of the electron. There are other experiments too. It is a simple experimental fact.

The mechanism of what is attributed to aetherial charge ‘displacement’ (and thus ‘displacement current’) is entirely compatible with existing quantum field theory, the Standard Model. The problem is that there is another mechanism, called electromagnetic radiation, which is also real. It predicts a lot of things, and induces currents and charges like ‘displacement current’. Of course there is some charge polarisation and displacement of the vacuum charges between two charged plates. However, that is a secondary effect. It doesn't cause the charging in the first place. It is slow, not light speed.

Maxwell had Faraday's 1846 ‘Thoughts on Ray Vibrations’ and Weber's 1856 discovery that 1 over the root of the product of electric and magnetic force law constants is a velocity of light.

This was his starting point. He was trying to fiddle around until he came up with an electromagnetic theory of light which produced Weber's empirical equation from a chain of theoretical reasoning based on Faraday’s electromagnetic light ray argument that a curling field produces a time varying field, etc. Half the theoretical input to the theory is Faraday’s own empirical law of induction, curl.E = -dB/dt.

The other half would obviously, by symmetry, have to be a law of the form curl.B = u.dD/dt, where u is permeability and D is electric displacement, D = eE, where e is permittivity and E is electric field strength. The inclusion of the constant u is obviously implied by the fact that the solution to the two equations (Faraday and this new law) must give the Weber light speed law. So you have to normalise (fiddle) the new law to produce the desired result, Weber’s empirical relationship between the electric and magnetic constants and light velocity.

Maxwell had to come up with an explanation of the new law, and found dD/dt has the units of displacement current. Don’t worry about what he claims to have done in his papers, just concentrate on the maths and physics he knew and what his reasoning was: no scientist who gets published puts the true ‘eureka’ bathtime moments into the paper, they rewrite the facts as per Orwell’s ‘1984’ so that the theory appears to result in a logical way from pure genius.

In a photon in the vacuum, the peak amplitude is always the same, no matter how far it goes from its source. The height and also the energy of a water wave (which is a transverse wave like light, the oscillatory motion of matter being perpendicular to the direction of energy flow) decrease as it spreads out. Photons of light don’t behave like transverse waves in this sense: the peak electric field in the light oscillation remains fixed. Gamma rays, for example, remain of the same amplitude and energy; they don’t decay into visible light inversely with distance or the square of distance.

If light were a water-wave type transverse wave, then as you move away from a radioactive source the gamma rays would become x-rays, then ultraviolet, then violet, then other visible light, then infrared, then microwaves, then radio waves, in accordance with the water wave equation. This doesn’t happen.

The Standard Model

Quantum field theory describes the relativistic quantum oscillations of fields. The case of zero spin leads to the Klein-Gordon equation. However, everything tends to have some spin. Maxwell’s equations for electromagnetic propagating fields are compatible with an assumption of spin h/(2π); hence the photon is a boson, since it has integer spin in units of h/(2π). Dirac’s equation models electrons and other particles that have only half a unit of spin, as known from quantum mechanics. These half-integer particles are called fermions and have antiparticles with opposite spin. Obviously you can easily make two electrons (neither the antiparticle of the other) have opposite spins, merely by having their spin axes pointing in opposite directions: one pointing up, one pointing down. (This is totally different from Dirac’s antimatter, where the opposite spin occurs while both matter and antimatter spin axes are pointed in the same direction.) This enables the Pauli-pairing of adjacent electrons in the atom with opposite spins, and makes most materials non-magnetic (since all electrons have a magnetic moment, everything would be potentially magnetic in the absence of the Pauli exclusion process).

Two heavier, unstable (radioactive) relatives of the electron exist in nature: muons and tauons. They, together with the electron, are termed leptons. They have identical electric charge and spin to the electron, but larger masses, which enable the nature of matter to be understood, because muons and tauons decay by neutrino emission into electrons. Neutrinos and their anti-particle are involved in the weak force; they carry energy but not charge or detectable mass, and are fermions (they have half integer spin).

In addition to the three leptons (electron, muon, and tauon) there are six quarks in three families as shown in the table below. The existence of quarks is an experimental fact, empirically confirmed by the scattering patterns when high-energy electrons are fired at neutrons and protons. There is also some empirical evidence for the three colour charges of quarks in the fact that high-energy electron-positron collisions actually produce three times as many hadrons as predicted when assuming that there are no colour charges.

Notice that the major difference between the three families is the mass. The radioactivity of the muon (μ) and tauon (τ) can be attributed to these being high-energy vacuum states of the electron (e). The Standard Model in its present form cannot predict these masses. Family 1 is the vital set of fermions at low energy, and thus so far as human life at present is concerned. Families 2 and 3 were important in the high-energy conditions existing within a fraction of a second of the Big Bang event that created the universe. Family 2 is also important in nuclear explosions such as supernovae, which produce penetrating cosmic radiation that irradiates us through the earth’s atmosphere, along with terrestrial natural radioactivity from uranium, potassium-40, etc. The t (top) quark in Family 3 was discovered as recently as 1995. There is strong evidence from energy conservation and other indications that there are only three families of fermions in nature.

The Heisenberg uncertainty principle is often used as an excuse to avoid worrying about the exact physical interpretation of the various symmetry structures of the Standard Model quantum field theory: the wave function is philosophically claimed to be in an indefinite state until a measurement is made. Although, as Thomas S. Love points out, this is a misinterpretation based on the switch-over of Schroedinger wave equations (time-dependent and time-independent) at the moment when a measurement is made on the system, it keeps the difficulties of the abstract field theory to a minimum. Ignoring the differences in the masses between the three families (which has a separate mechanism), there are four symmetry operations relating the Standard Model fermions listed in the table above:

‘flavour rotation’ is a symmetry which relates the three families (excluding their mass properties),

‘electric charge rotation’ would transform quarks into leptons and vice-versa within a given family,

In 1954, Chen Ning Yang and Robert Mills developed a theory of photon (spin-1 boson) mediator interactions in which the spin of the photon changes the quantum state of the matter emitting or receiving it, via inducing a rotation in a Lie group symmetry. The amplitude for such emissions is forced, by an empirical coupling constant insertion, to give the measured Coulomb value for the electromagnetic interaction. Gerard ’t Hooft and Martinus Veltman around 1970 argued that the Yang-Mills theory is the only model for Maxwell’s equations which is consistent with quantum mechanics and the empirically validated results of relativity. The photon Yang-Mills theory is U(1). Equivalent Yang-Mills interaction theories of the strong force SU(3) and the weak force SU(2), in conjunction with the U(1) force, result in the symmetry group set SU(3) x SU(2) x U(1), which is the Standard Model. Here the SU(2) group must act only on left-handed spinning fermions, breaking the conservation of parity.

Mediators conveying forces are called gauge bosons: 8 types of gluons for the SU(3) strong force, 3 particles (Z, W+, W-) for the weak force, and 1 type of photon for electromagnetism. The strong and weak forces are empirically known to be very short-ranged, which implies they are mediated by massive bosons, unlike the photon, which is said to lack mass although really it carries momentum and has mass in a sense. The correct distinction is not concerned with ‘the photon having no rest mass’ (because it is never at rest anyway), but is concerned with velocity: the photon actually goes at light velocity while all the other gauge bosons travel slightly more slowly. Hence there is a total of 12 different gauge bosons. The problem with the Standard Model at this point is the absence of a model for particle masses: SU(3) x SU(2) x U(1) does not describe mass and so is an incomplete description of particle interactions. In addition, the exact mechanism which breaks the electroweak interaction symmetry SU(2) x U(1) at low energy is speculative.

If renormalisation is kicked out by Yang-Mills, then the impressive results which depend on renormalisation (Lamb shift, magnetic moments of electron and muon) are lost. SU(2) and SU(3) are not renormalisable.

Gravity is of course required in order to describe mass, owing to Einstein’s equivalence principle, which states that gravitational mass is identical to, and indistinguishable from, inertial mass. The existing mechanism for mass in the Standard Model is the speculative (non-empirical) Higgs field mechanism. Peter Higgs suggested that the vacuum contains a spin-0 boson field which at low energy breaks the electroweak symmetry between the photon and the weak force Z, W+ and W- gauge bosons, as well as causing all the fermion masses in the Standard Model. Higgs did not predict the masses of the fermions, only the existence of an unobserved Higgs boson. More recently, Rueda and Haisch showed that Casimir force type radiation in the vacuum (which is spin-1 radiation, not Higgs’ spin-0 field) explains inertial and gravitational mass. The problem is that Rueda and Haisch could not make particle mass or force strength predictions, and did not explain how the electroweak symmetry is broken at low energy. Rueda and Haisch have an incomplete model. The vacuum has more to it than simply radiation, and may be more complicated than the Higgs field. Certainly any physical mechanism capable of predicting particle masses and force strengths must be more sophisticated than the existing Higgs field speculations.

Many set out to convert science into a religion by drawing up doctrinal creeds. Consensus is vital in politics and also in teaching subjects in an orthodox way - teaching syllabuses, textbooks, etc. - but this consensus should not be confused with science: it doesn’t matter how many people think the earth is flat or that the sun goes around it. Things are not determined in science by what people think or what they believe. Science is the one subject where facts are determined by evidence and even absolute proof, which is possible, contrary to Popper’s speculation; see Archimedes’ proof of the law of buoyancy for example. See the letter from Einstein to Popper that Popper reproduces in his book The Logic of Scientific Discovery. Popper falsely claimed to have disproved the idea that statistical uncertainty can emerge from a deterministic situation.

Einstein disproved Popper by the case of an electron revolving in a circle at constant speed; if you lack exact knowledge of the initial conditions and cannot see the electron, you can only statistically calculate the probability of finding it at any section of the circumference of the circle. Hence, statistical probabilities can emerge from completely deterministic systems, given merely the uncertainty about the initial conditions. This is one argument of many that Einstein (together with Schroedinger, de Broglie, Bohm, etc.) used to argue that determinism lies at the heart of quantum mechanics. However, the nature of all real 3+ body interactions in classically ‘deterministic’ mechanics is non-deterministic, because of the perturbations introduced as chaos by more than two bodies. So there is no ultimate determinism in real-world many-body situations. What Einstein should have stated he was looking for is causality, not determinism.
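Einstein’s circle example can be sketched as a Monte Carlo experiment: the motion is perfectly deterministic, but with the initial angle unknown, the probability of finding the electron in any arc is simply arc/circumference. The phase-step value below is an arbitrary illustrative choice:

```python
import random

# Deterministic circular motion with an unknown starting angle:
# the probability of finding the particle in an arc is arc/circumference.
random.seed(1)
trials = 100_000
arc_fraction = 0.25            # examine a quarter of the circle

hits = 0
for _ in range(trials):
    phase = random.random()    # unknown initial angle (fraction of a turn)
    # Deterministic evolution just adds a fixed angle; the distribution
    # over positions stays uniform.
    position = (phase + 0.618) % 1.0
    if position < arc_fraction:
        hits += 1

print(hits / trials)           # close to 0.25
```

Statistics emerge here purely from ignorance of initial conditions, exactly as the paragraph argues.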

The uncertainty principle - which is just modelling scattering-driven reactions - shows that the higher the mass-energy equivalent, the shorter the lifetime.

Quarks and other heavy particles last for a fraction of 1% of the time that electrons and positrons in the vacuum last before annihilation. The question is, why do you get virtual quarks around real quark cores in QCD, but virtual electrons and positrons around real electron cores in QED? It is probably a question of the energy density of the vacuum locally around a charge core. The higher energy density due to the fields around a quark will both create and attract more virtual quarks than an electron, which has weaker fields.

In the case of a nucleon, neutron or proton, the quarks are close enough that the strong core charge, and not the shielded core charge (reduced by 137 times due to the polarised vacuum), is responsible for the inter-quark binding force. The strong force seems to be mediated by eight distinct types of gluon. (There is a significant anomaly in the QCD theory here, because there are physically 9 types of green, red and blue gluon combinations, but you have to subtract one variety from the 9 to rule out a reaction which doesn’t occur in reality.)

The gluon clouds around quarks are overlapped and modified by the polarised veils of virtual quarks, which is why it is just absurd to try to get a mathematical solution to QCD in the way you can for the simpler case of QED. In QED, the force mediators (virtual photons) are not affected by the polarised electron-positron shells around the real electron core, but in QCD there is an interaction between the gluon mediators and the virtual quarks.

You have to think also about the electroweak force mediators, the W, Z and photons, and how they are distinguished from the strong force gluons. For the benefit of cats who read: W, Z and photons have more empirical validity than Heaviside’s energy current theory speculation; they were discovered experimentally at CERN in 1983. At high energy, the W (massive and charged positive or negative), Z (massive and neutral), and photon all have symmetry and infinite range, but below the electroweak unification energy, the symmetry is broken by some kind of vacuum attenuation (Higgs field or other vacuum field miring/shielding) mechanism, so the W and Z have a very short range but photons have an infinite range. To get unification qualitatively as well as quantitatively you have to not only make the extremely high energy forces all identical in strength, but also consider qualitatively how the coloured gluons are related to the W, Z, and photon of electroweak theory. The Standard Model particle interaction symmetry groupings are SU(3) x SU(2) x U(1), where U(1) describes the photon, SU(2) the W and Z of the weak force (hence SU(2) x U(1) is electroweak theory, requiring a Higgs or other symmetry breaking mechanism to work), and SU(3) describes gluon mediated forces between the three strong-force colour charges of quarks: red, green and blue, or whatever.

The problem of the gluons having 3 x 3 = 9 combinations but empirically only requiring 8 combinations does indicate that the concept of the gluon is not the most solid part of the QCD theory. It is more likely that the gluon force is just the unshielded core charge force of any particle (hence unification at high energy, where the polarised vacuum is breached by energetic collisions). (The graviton I’ve proved to be a fiction; it is the same as the gauge boson photon of electromagnetism: it does the work of both the always-attractive force and a force √N times stronger, which is attractive between unlike charges and repulsive between like charges, where N is the number of charges exchanging force mediator radiation. This proves why the main claim of string theory is entirely false. There is no separate graviton.)

The virtual quarks as you say contribute to the (1) mass and (2) magnetic moment of the nucleon. In the same way, virtual electrons increase the magnetic moment of the electron by 0.116% in QED. QCD just involves a larger degree of perturbation due to the aether than QED does.

Because the force mediators in QCD interact appreciably with the virtual quarks of the vacuum, the Feynman diagrams indicate a very large number of couplings with similar coupling strengths in the vacuum that are almost impossible to calculate. The only way to approach this problem is to dump perturbative field theory and build a heuristic semi-classical model which is amenable to computer solution. I.e., you can simulate quarks and the polarised clouds of virtual charges surrounding them using a classical model adjusted to allow for the relative quantum mechanical lifetimes of the various virtual particles, etc. QCD has much larger perturbative effects due to the vacuum, with the vacuum in fact contributing most of the properties attributed to nucleons. In the case of a neutron, you would naively expect there to be zero magnetic moment because there is no net charge, but in fact there is a magnetic moment about two thirds the size of the proton’s.
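The ‘about two thirds’ claim can be checked against the measured magnetic moments (in nuclear magnetons):

```python
# Measured nucleon magnetic moments in nuclear magnetons:
# despite zero net charge, the neutron's moment is roughly
# two thirds the proton's in magnitude.
mu_p = 2.7928     # proton magnetic moment
mu_n = -1.9130    # neutron magnetic moment

ratio = abs(mu_n) / mu_p
print(f"{ratio:.3f}")   # 0.685, roughly two thirds
```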

Ultimately what you have to do, having dealt with gravity (at least showing electrogravity to be a natural consequence of the big bang), is to understand the Standard Model. In order to do that, the particle masses and force coupling strengths must be predicted. In addition, you want to understand more about electroweak symmetry breaking, gluons, the Higgs field - if it actually exists as is postulated (it may be just a false model based on ignorant speculation, like graviton theory) - etc. I know string theory critic Dr Peter Woit (whose book Not Even Wrong - The Failure of String Theory and the Continuing Challenge to Unify the Laws of Physics will be published in London on 1 June and in the USA in September) claims in Quantum Field Theory and Representation Theory: A Sketch (2002), http://arxiv.org/abs/hep-th/0206135, that it is potentially possible to deal with electroweak symmetry without the usual stringy nonsense.

Hydrogen doesn’t behave as a superfluid. Helium is a superfluid at low temperatures because it has two spin ½ electrons in its outer shell that collectively behave as a spin 1 particle (boson) at low temperatures.

Fermions have half integer spin, so hydrogen with a single electron will not form boson-like electron pairs. In molecular H2, the two electrons shared by the two protons don’t have the opportunity to couple together to form a boson-like unit. It is a three-body problem - two protons and a coupled pair of electrons - so perturbative effects continuously break up any boson-like behaviour.

The same happens to helium itself when you increase the temperature above the superfluid temperature. The kinetic energy added breaks up the electron pairing that forms a kind of boson. So just the Pauli exclusion principle pairing remains at higher temperature.

You have to think of the low-temperature Bose-Einstein condensate as the simple case, and to note that at higher temperatures chaos breaks it up. Similarly, if you heat up a magnet you increase entropy, introducing chaos by allowing the domain order to be broken up by random jiggling of particles.

Newton’s Opticks and Feynman’s QED book both discuss how the reflection of a photon of light by a sheet of glass depends on the thickness, even though the photon is reflected as if from the front face. Newton, as always, fiddled an explanation based on metaphysical ‘fits of reflection/transmission’ by light, claiming that light actually causes a wave of some sort (aetherial, according to Newton) in the glass which travels with the light and controls reflection off the rear surface.

Actually Newton was wrong, because you can measure the time it takes for the reflection from a really thick piece of glass, and that shows the light reflects from the front face. What is happening is that energy (gauge boson electromagnetic energy) is going at light speed in all directions within the glass normally, and is affected by the vibration of the crystalline lattice. The normal internal ‘resonant frequency’ depends on the exact thickness of the glass, and this in turn determines the probability that light hitting the front face is reflected or transmitted. It is purely causal.

Electrons have quantized charge and therefore electric field - hardly a description of the ‘light wave’ we can physically experiment with and measure as 1 metre (macroscopic) wavelength radio waves. The peak electric field of radio is directly proportional to the orthogonal acceleration of the electrons which emit it. There is no evidence that the vacuum charges travel at light speed in a straight line. An electron is a trapped negative electric field. To go at light speed its spin would have to be annihilated by a positron to create gamma rays. Conservation of angular momentum forbids an electron from going at light speed, as the spin is at light speed and it can’t maintain angular momentum without being supplied with increasing energy as the overall propagation speed rises. Linear momentum and angular momentum are totally separate. It is impossible to have an electron and positron pair going at light speed, because the real spin angular momentum would be zero, because the total internal speed can’t exceed c, and it is exactly c if electrons are electromagnetic energy in origin (hence the vector sum of propagation and spin speeds - Pythagoras’ sum of squares of speeds law if the propagation and spin vectors are orthogonal - implies that the spin slows down from c towards 0 as electron total propagation velocity increases from 0 towards c). The electron would therefore have to be supplied with increasing mass-energy to conserve angular momentum as it is accelerated towards c.
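The vector-sum model asserted in this paragraph (spin speed s and propagation speed v orthogonal with s² + v² = c²) is a one-liner to tabulate. This is a sketch of the text’s own speculative model, not established physics:

```python
import math

c = 1.0   # work in units of the speed of light

def spin_speed(v):
    """Internal spin speed under the text's assumption s^2 + v^2 = c^2."""
    return math.sqrt(c**2 - v**2)

for v in (0.0, 0.6, 0.8, 0.99):
    print(v, spin_speed(v))   # spin falls from c towards 0 as v -> c
```

Note that (v, s) = (0.6, 0.8) and (0.8, 0.6) form the familiar 3-4-5 Pythagorean pair.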

Quantum field theory and forces

In 1897 J. J. Thomson showed that deflected cathode rays (electrons) in an old-type TV cathode ray tube have a fixed mass to charge ratio. When the quantum unit of charge was later measured by Millikan, by using a microscope to watch tiny charged oil drops just stopped from falling due to gravity by an electric field, the mass of an electron could be calculated by multiplying the quantum unit of charge by Thomson’s mass to charge ratio!
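As a quick check of the arithmetic described (using modern rounded values for illustration):

```python
# Thomson measured the electron's charge-to-mass ratio e/m; Millikan
# measured the quantum of charge e. Combining the two gives the mass.

e = 1.602e-19        # Millikan: electron charge, coulombs
e_over_m = 1.759e11  # Thomson: charge-to-mass ratio, C/kg

m = e / e_over_m     # equivalently, e multiplied by the mass-to-charge ratio
print(f"electron mass = {m:.3e} kg")  # about 9.1e-31 kg
```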

From 1925-7, a quantum mechanical atomic theory was developed in which the probability of finding an electron at any place is proportional to the product of a small volume of space and the square of a wave-function in that small volume. Integrating this over the whole atomic space allows the total probability to be ‘normalised’ to 1 unit per electron. The normalisation factor therefore allows calculation of the absolute probability of finding the electron at any place. Various complications for orbits and spin are easily included in the mathematical model.
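The normalisation just described can be sketched numerically for the hydrogen ground state (my example, not the author's; psi ~ exp(-r/a0) is the standard result):

```python
import math

# Integrate |psi|^2 over all space in shells of volume 4*pi*r^2 dr,
# then scale so the total probability is 1 (normalisation).

a0 = 1.0  # Bohr radius, in its own units
dr = 0.001
rs = [i * dr for i in range(1, 40_000)]  # r from ~0 to 40 a0

raw = sum(math.exp(-2 * r / a0) * 4 * math.pi * r**2 * dr for r in rs)
norm = 1.0 / raw  # normalisation factor for |psi|^2

# Absolute probability of finding the electron within one Bohr radius:
p_inner = norm * sum(math.exp(-2 * r / a0) * 4 * math.pi * r**2 * dr
                     for r in rs if r <= a0)
print(f"P(r < a0) = {p_inner:.3f}")  # analytic value: 1 - 5e^-2, about 0.323
```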

In 1928, Dirac found a way to combine the quantum mechanical (wave statistics) with relativity for an electron, and the equation had two solutions. Dirac predicted ‘antimatter’ using the extra solution. The anti-electron, the positron, was discovered in 1932.

From 1943-9, Feynman and others worked on the problem of calculating how the electron interacts with its own field, as propagated by virtual particles in the spacetime fabric. The effect is a 0.116% increase in the magnetic moment calculated by Dirac. Because the field equation is continuous and can increase to infinity, a cut-off is imposed to prevent the nonsensical answer of infinity. This cut-off is decided by a trick called ‘renormalisation’, which consists of subtracting the unwanted infinity. Physically, this can only be interpreted as implying that the electron is coupling up with one virtual electron in the vacuum at a time, not all of them!
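The 0.116% figure can be checked directly: it is Schwinger's first-order QED correction, alpha/(2*pi), to the Dirac magnetic moment (standard QED, sketched here):

```python
import math

alpha = 1 / 137.036        # fine structure constant
a = alpha / (2 * math.pi)  # first-order anomalous magnetic moment

print(f"a = {a:.5f}")            # ~0.00116, i.e. the 0.116% increase
print(f"g = {2 * (1 + a):.5f}")  # ~2.00232, the electron g-factor
```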

WOIT AND THE STANDARD MODEL

Tony Smith’s CERN document server paper, EXT-2004-031, uses the Lie algebra E6 to avoid 1-1 boson-fermion supersymmetry: ‘As usually formulated string theory works in 26 dimensions, but deals only with bosons … Superstring theory as usually formulated introduces fermions through a 1-1 supersymmetry between fermions and bosons, resulting in a reduction of spacetime dimensions from 26 to 10. The purpose of this paper is to construct … using the structure of E6 to build a string theory without 1-1 supersymmetry that nevertheless describes gravity and the Standard Model…’

Peter Woit goes in for a completely non-string approach, based on building quantum field theory from spinors: http://www.arxiv.org/abs/hep-th/0206135.

‘… [it] should be defined over a Euclidean signature four dimensional space since even the simplest free quantum field theory path integral is ill-defined in a Minkowski signature. If one chooses a complex structure at each point in space-time, one picks out a U(2) [is a proper subset of] SO(4) (perhaps better thought of as a U(2) [is a proper subset of] Spin^c (4)) and … it is argued that one can consistently think of this as an internal symmetry. Now recall our construction of the spin representation for Spin(2n) as A *(C^n) applied to a ‘vacuum’ vector.

‘Under U(2), the spin representation has the quantum numbers of a standard model generation of leptons… A generation of quarks has the same transformation properties except that one has to take the ‘vacuum’ vector to transform under the U(1) with charge 4/3, which is the charge that makes the overall average U(1) charge of a generation of leptons and quarks to be zero. The above comments are … just meant to indicate how the most basic geometry of spinors and Clifford algebras in low dimensions is rich enough to encompass the standard model and seems to be naturally reflected in the electro-weak symmetry properties of Standard Model particles…

‘For the last eighteen years particle theory has been dominated by a single approach to the unification of the Standard Model interactions and quantum gravity. This line of thought has hardened into a new orthodoxy that postulates an unknown fundamental supersymmetric theory involving strings and other degrees of freedom with characteristic scale around the Planck length. …It is a striking fact that there is absolutely no evidence whatsoever for this complex and unattractive conjectural theory. There is not even a serious proposal for what the dynamics of the fundamental ‘M-theory’ is supposed to be or any reason at all to believe that its dynamics would produce a vacuum state with the desired properties. The sole argument generally given to justify this picture of the world is that perturbative string theories have a massless spin two mode and thus could provide an explanation of gravity, if one ever managed to find an underlying theory for which perturbative string theory is the perturbative expansion.’ – Dr P. Woit, Quantum Field Theory and Representation Theory: A Sketch (2002), http://arxiv.org/abs/hep-th/0206135.

Heuristic explanation of QFT

The problem is that people are used to looking to abstruse theory due to the success of QFT in some areas, and looking at the data is out of fashion. If you look at the history of chemistry, the particle masses of atoms were known, and it took school teachers like Dalton and the Russian Mendeleev to work out periodicity, because the bigwigs were obsessed with vortex atom maths, the ‘string theory’ of that age. Eventually, the obscure school teachers won out over the mathematicians, because the vortex atom (or string theory equivalent) did nothing, while empirical analysis did stuff.

QUARKS

Like electrons, a quark core is surrounded by virtual particles, namely gluons and pairs of quarks and their anti-quarks. Because of the strong nuclear force, the virtual gluons, unlike photons, do have a strong force charge called ‘colour’ charge (to distinguish it from electric charge). This means that both the virtual gluon cloud and the overlapping cloud of quark and anti-quark pairs interfere with the forces away from the core of a quark. While there are two types of electric charge (arbitrarily named positive and negative), there are three types of nuclear colour charge (arbitrarily named red, green, and blue in quantum chromodynamics, QCD). If the quark core carried ‘red’ charge, then in the surrounding cloud of virtual quark pairs, the virtual anti-red quarks will be attracted to the red quark core, while the virtual red quarks will be repelled to a greater average distance. This effect shields the colour charge of the quark core, but the overlapping cloud of virtual gluons has colour charge and has the opposite effect. The overall effect is to diffuse the colour charge of the quark core over a volume of the surrounding virtual particle cloud. Therefore, the net colour charge decreases as you penetrate through the virtual cloud, much as the earth’s net gravity force falls if you were to go down a tunnel to the earth’s core. Thus, if quarks are collided with higher and higher energies, they will penetrate further through the virtual cloud and experience a reduced colour charge. When quarks are bound close together to form nucleons (neutrons and protons), they therefore interact very weakly because their virtual particle clouds overlap, reducing their net colour charge to a very small quantity. As these trapped quarks move apart, the net colour charge increases, increasing the net force, like stretching a rubber band! This makes it impossible for any quark to escape from a neutron or proton. 
Simply put, the binding energy holding quarks together is more than the energy needed to create a pair or triad of quarks, so you can never isolate a single quark. Attempts to separate quarks by collisions require so much energy that new pairs (mesons) or triads (baryons and nucleons) of quarks are formed, instead of individual quarks breaking loose.

COLOUR CHARGES

A nucleon, that is a neutron or proton, has no overall ‘colour’ charge, because the ‘colour’ charges of the quarks within it cancel out exactly. Pairs of quarks, mesons, contain one quark with a given colour charge, and another quark with the anti-charge of that. Triads of quarks, baryons and nucleons, contain three quarks, each with a different colour charge: red (R), blue (B) and green (G). There are also anti-colours: AR, AB, and AG. Common sense tells you that the gluons will be 9 in number: R-AR, R-AB, and R-AG, as well as B-AR, B-AB, and B-AG, and finally G-AR, G-AB, and G-AG, a 3x3 = 9 result matrix.

If you search the internet, you find a page dated 1996 by Dr James Bottomley and Dr John Baez which addresses this question: ‘Why are there eight gluons and not nine?’ They point out first that mesons are composed of quark and anti-quark pairs, and that baryons (neutrons, protons, etc.) are triads of quarks. Then they argued that the combination R-AR + B-AB + G-AG ‘must be non-interacting, since otherwise the colourless baryons would be able to emit these gluons and interact with each other via the strong force – contrary to the evidence. So there can be only eight gluons.’ Fair enough, you subtract one gluon without saying which one (!), to avoid including a general possibility that makes the colour charge false. (Why does the term ‘false epicycle’ spring to mind?) I love the conclusion they come to: ‘If you are wondering what the hell I am doing subtracting particles from each other, well, that’s quantum mechanics. This may have made things seem more, rather than less, mysterious, but in the long run I'm afraid this is what one needs to think about.’
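The counting in the Bottomley-Baez argument can be reproduced mechanically; the removal of one colourless singlet combination, leaving N² − 1 = 8, is the standard SU(3) result:

```python
from itertools import product

# Naively, a gluon carries one colour and one anti-colour,
# giving the 3x3 = 9 matrix of combinations described above:
colours = ["R", "G", "B"]
naive = [f"{c}-A{a}" for c, a in product(colours, colours)]
print(len(naive), naive)  # 9 combinations

# But the colourless singlet combination (R-AR + G-AG + B-AB) must be
# non-interacting and is removed. Equivalently, SU(N) has N^2 - 1
# generators, so for N = 3 colours:
N = 3
print("gluons:", N**2 - 1)  # 8
```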

All quantum field theories are based ultimately upon simple extensions of Dirac's mathematical work in attempting to unify special relativity with quantum mechanics in the late 1920s. People such as Dr Sheldon Glashow and Dr Gerard 't Hooft developed the framework. A quantum field theory, the 'Standard Model' [gauge groups SU(3) x SU(2) x U(1)], is built on a unitary group, U(1), as well as two special unitary groups, SU(2) and SU(3).

U(1) describes electric charge (having a single vector field or gauge boson, the photon). Because the photon is a spin-1 boson, the force can be attractive or repulsive, depending on the signs of the charges. (To have a force which is always attractive, like gravity, would require a spin-2 boson, which is why the postulated quantum gravity boson, the unobserved graviton, is supposed to have a spin of 2.)

SU(2) allows left-handed fields to form doublets, while SU(3) allows left-handed fields to form triplets of quarks (baryons like the neutron and proton) and singlets (leptons like the electron and muon). The right-handed fields are the same for SU(3), but only form a pair of two singlets (mesons) for SU(2).

For the model to work, mass must be provided by an uncharged massive particle, the 'Higgs field boson'. SO(3) is another symmetry group, which describes the conservation of angular momentum for 3 dimensional rotations. Is the Standard Model a worthless heap of trash, as it requires the existence of an unobserved Higgs field to give rise to mass? No, it is the best available way of dealing with all available physics data, and the Higgs field is implied as a type of ether. If you see an inconsistency between the use of special relativity in quantum field theory and the suggestion that it implies an ether, you need to refresh yourself on the physical interpretation of general relativity, which is a perfect fluid (ether/spacetime fabric) theory according to Einstein. General relativity requires an additional postulate to those of special relativity (which is really a flat earth theory, as it does not allow for curved geodesics or gravity!), but gives rise to the same mathematical transformations as special relativity.

Spin in quantum field theory is described by ‘spinors’, which are more sophisticated than vectors. The story of spin is that Wolfgang Pauli, inventor of the phrase ‘not even wrong’, in 1924 suggested that an electron has a ‘two-valued quantum degree of freedom’, which in addition to three other quantum numbers enabled him to formulate the ‘Pauli exclusion principle’. (I use this on my home page to calculate how many electrons are in each electron shell, which produces the basic periodic table.)
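The shell-counting mentioned in the parenthetical above can be reproduced in a few lines (a sketch; the 2n² capacities follow from the Pauli exclusion principle applied to the four quantum numbers):

```python
# For shell n: l runs 0..n-1, m runs -l..+l (2l+1 values),
# and each (n, l, m) state holds 2 electrons (spin up/down).
def shell_capacity(n):
    return sum(2 * (2 * l + 1) for l in range(n))

for n in range(1, 5):
    print(f"shell n={n}: {shell_capacity(n)} electrons")  # 2, 8, 18, 32
```

These capacities (2, 8, 18, 32) are the row lengths underlying the basic periodic table.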

Because the idea was experimentally found to sort out chemistry, Pauli was happy. In 1925, Ralph Kronig suggested a reason for the two degrees of freedom: the electron spins, and can be orientated with either north pole up or south pole up. Pauli initially objected because the amount of spin would give the old spherical model of the electron (which is entirely false) an equatorial speed of 137 times the speed of light! However, a few months later two Dutch physicists, George Uhlenbeck and Samuel Goudsmit, independently published the idea of electron spin, although they got the answer wrong by a factor (the g-factor) of 2.00232 (this is just double the 1.00116 factor for the magnetic moment of the electron). The first attempt to explain away this factor of 2 was by Llewellyn Thomas, and was of the abstract variety (put equations together and choose what you need from the resulting brew); it is called the ‘Thomas precession’. Spin-caused magnetism had already been observed as the anomalous Zeeman effect (spectral line splitting when the atoms emitting the light are subjected to an intense magnetic field). Later the Stern-Gerlach experiment provided further evidence. It is now known that the ordinary magnetism of iron bar magnets and magnetite is derived from electron spin magnetism. Normally this cancels out, but in iron and other magnetic metals it does not completely cancel out in each atom, and this fact allows magnets. Anyway, in 1927 Pauli accepted spin, and introduced the ‘spinor’ wave function. In 1928, Dirac introduced special relativity to Pauli’s spinor, resulting in ‘quantum electrodynamics’ that correctly predicted antimatter, first observed in 1932.

The Special Orthogonal group in 3 dimensions, SO(3), allows spinors. It is traced back to Sophus Lie, who in 1870 introduced special manifolds to study the symmetries of differential equations. The Standard Model, with special unitary groups SU(3)xSU(2)xU(1), is a development and application of spinor mathematics to physics. SU(2) is not actually the weak nuclear force, despite having 3 gauge bosons. The weak force arises from the mixture SU(2)xU(1), which is of course the electroweak theory. Although U(1) describes aspects of electromagnetism and SU(2) aspects of the weak force, the two are unified and should be treated as a single mix, SU(2)xU(1). Hence there are 4 electroweak gauge bosons, not 1 or 3. One whole point of the Higgs field mechanism is that it is vital to shield (attenuate) some of those gauge bosons, so that they have a short range (the weak force), unlike electromagnetism.

On the other hand, for interactions of very high energy, say 100 GeV, the weak force influence of SU(2) vanishes and SU(3)xU(1) takes over, so the strong nuclear force and electromagnetism then dominate.

History of quantum field theory

‘I must say that I am very dissatisfied with the situation, because this so-called ‘good theory’ does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it turns out to be small – not neglecting it when it is infinitely great and you do not want it! … Simple changes will not do … I feel that the change required will be just about as dramatic as the passage from the Bohr theory to quantum mechanics.’ – Paul A. M. Dirac, lecture in New Zealand, 1975 (quoted in Directions in Physics).

The following list of developments is excerpted from a longer one given in Dr Peter Woit’s notes on the mathematics of QFT (available as a PDF from his home page). Dr Woit says:

‘Quantum field theory is not a subject which is at the point that it can be developed axiomatically on a rigorous basis. There are various sets of axioms that have been proposed (for instance Wightman’s axioms for non-gauge theories on Minkowski space or Segal’s axioms for conformal field theory), but each of these only captures a limited class of examples. Many quantum field theories that are of great interest have so far resisted any useful rigorous formulation. …’

He lists the major events in QFT to give a sense of chronology to the mathematical developments:

Dr Chris Oakley has an internet site about renormalisation in quantum field theory, which is also an interest of Dr Peter Woit. Dr Oakley starts by quoting Nobel Laureate Paul A.M. Dirac’s concerns in the 1970s:

‘[Renormalization is] just a stop-gap procedure. There must be some fundamental change in our ideas, probably a change just as fundamental as the passage from Bohr’s orbit theory to quantum mechanics. When you get a number turning out to be infinite which ought to be finite, you should admit that there is something wrong with your equations, and not hope that you can get a good theory just by doctoring up that number.’

The Nobel Laureate Richard P. Feynman did two things, describing the accuracy of the prediction of the magnet moment of leptons (electron and muon) and Lamb shift, and two major problems of QFT, namely ‘renormalisation’ and the unknown rationale for the ‘137’ electromagnetic force coupling factor:

‘… If you were to measure the distance from Los Angeles to New York to this accuracy, it would be exact to the thickness of a human hair. That’s how delicately quantum electrodynamics has, in the past fifty years, been checked … I suspect that renormalisation is not mathematically legitimate … we do not have a good mathematical way to describe the theory of quantum electrodynamics … the observed coupling … 137.03597 … has been a mystery ever since it was discovered … one of the greatest damn mysteries …’ – QED, Penguin, 1990, pp. 7, 128-9.

Dr Chris Oakley writes: ‘… I believe we already have all the ingredients for a compact and compelling development of the subject. They just need to be assembled in the right way. The important departure I have made from the ‘standard’ treatment (if there is such a thing) is to switch round the roles of quantum field theory and Wigner’s irreducible representations of the Poincaré group. Instead of making quantising the field the most important thing and Wigner’s arguments an interesting curiosity, I have done things the other way round. One advantage of doing this is that since I am not expecting the field quantisation program to be the last word, I need not be too disappointed when I find that it does not work as I may want it to.’

Describing the problems with ‘renormalisation’, Dr Oakley states: ‘Renormalisation can be summarised as follows: developing quantum field theory from first principles involves applying a process known as ‘quantisation’ to classical field theory. This prescription, suitably adapted, gives a full dynamical theory which is to classical field theory what quantum mechanics is to classical mechanics, but it does not work. Things look fine on the surface, but the more questions one asks the more the cracks start to appear. Perturbation theory, which works so well in ordinary quantum mechanics, throws up some higher-order terms which are infinite, and cannot be made to go away.

‘This was known about as early as 1928, and was the reason why Paul Dirac, who (along with Wolfgang Pauli) was the first to seriously investigate quantum electrodynamics, almost gave up on field theory. The problem remains unsolved to this day. Perturbation theory is done slightly differently, using an approach based on the pioneering work of Richard Feynman, but, other than that, nothing has changed. One seductive fact is that by pretending that infinite terms are not there, which is what renormalisation is, the agreement with experiment is good. … I believe that our failure to really get on top of quantum field theory is the reason for the depressing lack of progress in fundamental physics theory. … I might also add that the way that the whole academic system is set up is not conducive to the production of interesting and original research. … The tone is set by burned-out old men who have long since lost any real interest and seem to do very little other than teaching and politickering. …’

Actually, the tragedy started when two rival approaches to the development of Isaac Newton’s gravitational theory were almost simultaneously proposed: one, mathematical horses*** which became popular, and the other the LeSage physical mechanism, which was ignored. The mathematical horses*** ‘theory’ was proposed by the Jesuit theologian Roger J. Boscovich, a Fellow of the Royal Society, in his 1758 Theory of Natural Philosophy. This ‘theory’ was just a kind of distorted sine-wave curve, with a crackpot claim that it showed numerically how the unexplained force of nature ‘oscillates’ between attraction and repulsion with increasing distance between ‘points of matter’. This started the cult pseudo-science of guessed non-theoretical crackpot stuff that has led to 11 dimensional M-theory (it might one day be defendable mathematically after a lot more work, but there is no evidence, and even if it is right, it can’t predict forces, let alone particle masses). Wherever Boscovich’s ‘force’ (a line on a graph) crossed over from ‘attraction’ to ‘repulsion’, there was supposedly a point where things like molecules, water drops, and planets could be stable, without collapsing or exploding. This led Einstein to do the same to keep the universe ‘static’ with the cosmological constant, which he later admitted was his ‘biggest blunder’. The cosmological constant makes gravity zero at the distance of the average separation between galaxies, simply by making gravity fall off faster than the inverse square law, become zero at the galactic interspersion distance, and become repulsive at greater distances. However, Einstein was not merely wrong to follow Boscovich because of the lack of a gravitational mechanism and the 1929 evidence for the big bang: even neglecting these, the solution would not work.
There is no stability in such a solution, since the nature of the hypothetical force when crossing over from attraction to repulsion is to magnify any slight motion, enhancing instability. Hence it is entirely fraudulent, both scientifically and mathematically.

In article 111 of his Treatise on Electricity and Magnetism, 1873, Maxwell says: ‘When induction is transmitted through a dielectric [like space or glass or plastic], there is in the first place a displacement of electricity…’

Catt seems to question the details of this claim here: http://www.ivorcatt.org/icrwiworld78dec1.htm and http://www.ivorcatt.org/icrwiworld78dec2.htm. Maxwell imagined that the volume of space is filled with a physical medium, a sea of charge, that becomes polarised in the gap between the two plates of a capacitor. This analogy (which is from chemical electrolysis – electroplating and battery reactions) ignores the mechanism by which the capacitor charges up. But weirdly, as we now know from evidence in QED, the ‘ether’ really does contain virtual charges that get polarised around the electron core. This shields the core charge by a factor of 137, giving an electric force 137 times weaker than the strong nuclear force. (Dirac used the sea of virtual particles to help him visually in using his equations to predict antimatter, which is weird, since Dirac was relying on an ether theory to unify quantum theory and special relativity, which most people think says there is no ether! Dirac was ethical enough that he later published in Nature that there is definitely an ‘aether’, which helped his arcane reputation no end!)
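The ‘137’ shielding factor referred to above is the inverse of the fine structure constant, which can be computed from standard constants (standard electromagnetism; the shielding interpretation is the author's):

```python
import math

# alpha = e^2 / (4*pi*eps0*hbar*c), using CODATA-style values:
e    = 1.602176634e-19   # electron charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34   # reduced Planck constant, J s
c    = 299792458.0       # speed of light, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"1/alpha = {1 / alpha:.3f}")  # ~137.036
```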

It is interesting that Dirac’s conceptual model of the ether for pair-production (matter-antimatter production when a suitably energetic gamma ray is distorted by the strong field near a nucleus) may be wrong, just as Maxwell’s aether model for the charging capacitor is wrong. The equations can appear right experimentally under some conditions, but it is possible that reality is slightly different to what first appears to be the case. The first real problem with Maxwell’s theory arose in 1900, with Planck’s quantum theory, which predicted the spectrum of light properly, unlike Maxwell’s theory. In Planck’s theory, light energy comes in steps or quanta, later named ‘photons’. This is far from the continuous emission of radiation predicted by Maxwell.

‘Our electrical theory has grown like a ramshackle farmhouse which has been added to, and improved, by the additions of successive tenants to satisfy their momentary needs, and with little regard for the future.’ – H.W. Heckstall-Smith, Intermediate Electrical Theory, Dent, London, 1932, p283.

‘a) Energy current can only enter a capacitor at the speed of light.

‘b) Once inside, there is no mechanism for the energy current to slow down below the speed of light…’

Ivor Catt, Electromagnetism 1, Westfields Press, St Albans, 1995, p5.

In the New Scientist, Professor Leonard Susskind is quoted as having said he wants to outlaw all use of the word ‘real’ in physics [metaphysics?]. Why not apply to have string theory receive recognised religious status, so it is protected from the fact it can’t make testable predictions? Freedom does not extend to criticisms of religious faiths, so then ‘heretics’ will be burned alive or imprisoned for causing a nuisance.

People like Josephson take the soft quantum mechanical approach of ignoring spin, assuming it is not real. (This is usually defended by confusing the switch-over from the time-dependent to the time-independent version of the Schrodinger equation when a measurement is taken, which defends a metaphysical requirement for the spin to remain indeterminate until the instant of being measured. However, Love of California State Uni proves that this is a mathematical confusion between the two versions of Schrodinger's equation, and is not a real physical phenomenon. It is very important to be specific about where the errors in modern physics are, because most of it is empirical data from nuclear physics. Einstein's special relativity isn't worshipped for enormous content, but for fitting the facts. The poor presentation of it as being full of amazing content is crazy. It is respected by those who understand it because it has no content and yet produces empirically verifiable formulae for local mass variation with velocity, local time - or rather uniform motion - rate variation, E = mc2, etc. Popper's analysis of everything is totally bogus; he defends special relativity as being a falsifiable theory, which it isn't, as it was based on empirical observations; special relativity is only a crime for not containing a mechanism and for not admitting the change in philosophy. Similarly, quantum theory is correct so far as the equations are empirically defensible to a large degree of accuracy, but it is a crime to use this empirical fit to claim that mechanisms or causality don't exist.)

The photon certainly has electromagnetic energy with separated negative and positive electric fields. The question is: is the field the cause of charge, or vice-versa? Catt says the field is the primitive. I don't like Catt's arguments for the most part (political trash), but he has some science mixed in there too, or at least Dr Walton (Catt's co-author) does. Fire two TEM (transverse electromagnetic) pulses, guided by two conductors, through one another from opposite directions, and there is no measurable resistance while the pulses overlap. Electric current ceases. The primitive is the electromagnetic field.

To model a photon out of an electron and a positron going at light velocity is false for the reasons I've given. If you are going to say the electron-positron pairs in the vacuum don't propagate at light speed, you are more sensible, as that will explain why light doesn't scatter around within the vacuum due to the charges in it hitting other vacuum charges, etc. But then you are back to a model in which light moves like transverse (gravity) surface waves in water, but with the electron-positron ether as the sea. You then need to explain why light waves don't disperse. For a photon in the vacuum, the peak amplitude is always the same, no matter how far it goes from its source. Water waves, however, lose amplitude as they spread. Any pressure wave that propagates (sound, sonar, water waves) has an excess pressure and a rarefaction (under-normal pressure) component. If you are going to claim instead that a displacement current in the aether is working with Faraday's law to allow light propagation, you need to be scientific and give all the details, otherwise you are just repeating what Maxwell said - with your own gimmicks about dipoles and double helix speculation - 140 years ago.

Maxwell's classical model of a light wave is wrong for several reasons.

Maxwell said light is positive and negative electric field, one behind the other (the variation from positive to negative electric field occurring along the direction of propagation). This is a longitudinal wave, although it was claimed to be a transverse wave because Maxwell's diagram in his Treatise on Electricity & Magnetism plots the strengths of the E field and B field on axes transverse to the direction of propagation. However, this is just a graph which does not correspond to 3 dimensional space, only to fields along 1 dimension, the x direction. The axes of Maxwell's light wave graph are the x direction, E field strength, and B field strength. If Maxwell had drawn axes x, y, z then he could claim to have shown a transverse wave. But he didn't; he had axes x, E, B: one dimensional, with two field amplitude plots.

Heaviside's TEM wave guided by two conductors has the negative and positive electric fields one beside the other, i.e., orthogonal to the direction of propagation. This makes more sense to me as a model for a light wave: Maxwell's idea of having the different electric fields of the photon (positive and negative) one behind the other is bunk, because both are moving forward at light speed in the x direction and so cannot influence one another (without exceeding light speed).

Maxwell’s ‘displacement current’ law is equivalent to Dirac’s and Schroedinger’s time-dependent equations, and all are statements of the energy exchange processes described by Feynman’s path integrals.

A random distribution of an even number N of electric charges of each sign in the universe constitutes ½N vacuum dielectric capacitors, with two different vector sum voltage forces, the first force being weak and always attractive and the second being N^(1/2) times stronger and either attractive/repulsive.
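The vector-sum statistics invoked here can be sketched numerically. The physical interpretation is the author's, but the scaling itself is standard: an aligned sum of N unit charges grows as N, while the RMS of a randomly signed sum grows as sqrt(N):

```python
import math
import random

random.seed(1)  # deterministic for illustration

def rms_sum(N, trials=1000):
    """RMS of the sum of N randomly signed unit charges."""
    sq = 0.0
    for _ in range(trials):
        s = sum(random.choice((-1, 1)) for _ in range(N))
        sq += s * s
    return math.sqrt(sq / trials)

N = 2500
print(f"aligned sum: {N}")
print(f"RMS random sum: {rms_sum(N):.1f}  (sqrt(N) = {math.sqrt(N):.1f})")
```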

The vacuum polarisation shields the electron core charge by the 137 factor.

Energy exchange by vacuum matter particles and radiation give rise to inverse-square law forces with limited range and with infinite range, respectively.

Unification of forces occurs at high energy without SUSY (super-symmetry) because all vector boson energy is conserved, so unification naturally occurs where the polarised vacuum barrier breaks down.

There are three fundamental electric charges but only one fixed fundamental particle mass in the universe, and all observed masses are obtained by the shielding combinations through which charges couple to the mass particles in the polarised vacuum.

The cause of all fundamental forces, as well as the contraction term in relativity equations, is a physical resultant of the cosmological expansion, so general relativity is a localised resultant effect of the surrounding expansion of the universe, and is not to be considered a model for that expansion.

The cause of the initial cosmological inflation and continuing expansion is similar in mechanism to the way impacts between gas molecules cause recoil and a net expansion of gas in a balloon. At various times in the past, the actual impact mechanisms included material impacts of fundamental particles, actual gas, real quantum radiation, and more recently just the net recoil from vector boson exchange.

There are six mathematically distinguishable space-time dimensions, three of which describe the non-expanding or even contracted materials and local space-time, and three describing the expanding space-time of the universe. Time (non-random motion) and increasing entropy (net energy loss to space, hence the possibility of using energy in a non-equilibrium setting to do useful work) occur because cosmological expansion prohibits the establishment of an equilibrium and makes the winding down of the universe by heat death (uniform temperature) impossible.

Use of quantum electrodynamics physics to predict the magnetic moment of an electron, without employing arbitrary renormalisation

Existing mathematical physics is generally correct but covers up the basis for a deep natural simplicity

The Wild West media is too chicken to tell you, owing to science-religious fascism

Many prefer to treat science as a brand of religion and to dismiss empirically based facts as personal pet theories. (Instead, many believe in speculative ‘mainstream’ schemes.) Since 2004, updates, revisions and improvements have been published on the internet. From 1996-2004 they were published in technical journals. If you dismiss the facts because you want to call them a particular person’s pet theory, or because you have a religious style belief in a ‘mainstream’ political-style ‘theory’ like string speculation, you may be a fascist, but are not scientific.

Maxwell died believing that radiation travels through the vacuum because there is virtual charge in space to form a displacement current, so that Ampere’s law completes the electromagnetic cycle of Faraday’s law of induction. I’ve not seen anybody refute this.

Dirac predicted antimatter from a vacuum sea of electrons. Knock one out and you create a hole which is a positron. Einstein chucked out SR when he developed GR:

‘… the law of the constancy of the velocity of light. But … the general theory of relativity cannot retain this law. On the contrary, we arrived at the result according to this latter theory, the velocity of light must always depend on the coordinates when a gravitational field is present.’ - Albert Einstein, Relativity, The Special and General Theory, Henry Holt and Co., 1920, p111.

‘… the principle of the constancy of the velocity of light in vacuo must be modified, since we easily recognise that the path of a ray of light … must in general be curvilinear…’ - Albert Einstein, The Principle of Relativity, Dover, 1923, p114.

‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant). …’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.

‘According to the general theory of relativity space without ether is unthinkable.’ – Albert Einstein, Sidelights on Relativity, Dover, New York, 1952, p23.

‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus…. The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’ – Professor A.S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), Space Time and Gravitation: An Outline of the General Relativity Theory, Cambridge University Press, Cambridge, 1921, pp. 20, 152.

This gravity mechanism also predicts all observable particle masses and other observables, and eliminates most of the unobserved ‘dark matter’ speculation and the need for a cosmological constant / dark energy (the latest data suggest that the ‘cosmological constant’ and dark energy epicycle would need to vary with time).

There is little ‘dark matter’ around because the ‘critical density’ in general relativity is out by a factor e^3/2 ≈ 10.

The falsity of ‘dark energy’: since gravity is a response to the surrounding matter, distant galaxies in the explosion are not slowed down by gravity. The supernova data showing an ‘acceleration’ offsetting the fictional gravitational retardation (i.e., showing no departure from the Hubble law) was predicted in a 1996 publication, before observation confirmed it (because the prediction was suppressed, the observations have instead been force-fitted to a fraudulent Ptolemaic epicycle-type farce).

‘As far as explaining what the dark energy is, I certainly won’t kid you, I have no idea! (Likewise inflation.) I’m extremely interested in alternatives, including modified gravity and back-reaction of perturbations, and open-minded about different candidates for dark energy itself.’ - Sean

Look, Phil Anderson’s comment matches exactly the prediction made via the October 1996 issue of Electronics World, which was confirmed experimentally two years later by Perlmutter’s observations.

The lack of deceleration is because the expansion causes general relativity:

This existing paradigm tries to take general relativity (as based on local observations, including Newtonian gravity as a limit) to the universe, and force it to fit.

The reality is that gravity and contraction (general relativity) are predicted accurately from the big bang dynamics in a quantum field theory and spacetime context. There is nothing innovative here, it’s old ideas which have been ignored.

As Anderson says, the universe is ‘just not decelerating, it isn’t really accelerating’, and that’s due to the fact that gravity is a proved effect of the surrounding expansion:

This isn’t wrong, it’s been carefully checked by peer-reviewers and published over 10 years. This brings up Sean’s point about being interested in this stuff. It’s suppressed, despite correct predictions of force strengths, because it doesn’t push string theory. Hence it was even removed from arXiv after a few seconds (without being read). There is no ‘new principle’, just the existing well-known physical facts applied properly.

The Standard Model is the best tested physical theory in history. Forces are due to radiation exchange in spacetime. The big bang has speed from 0 to c, over times past of 0 toward 15 billion years, giving outward force F = ma = mc/t. Newton’s 3rd law gives an equal inward force, carried by gauge bosons, which are shielded by matter.
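To put a number on F = ma = mc/t, here is a minimal arithmetic sketch per kilogram of mass, using the ~15 billion year age figure quoted in the text (the figure itself is an input, not a derivation):

```python
c = 2.998e8             # speed of light, m/s
t = 15e9 * 3.156e7      # ~15 billion years, converted to seconds

# Acceleration implied by "speed 0 to c over times past 0 to t"
a = c / t
print(f"a = c/t = {a:.1e} m/s^2")                 # ~6.3e-10 m/s^2
print(f"outward force on 1 kg: {1.0 * a:.1e} N")  # F = ma, per kilogram
```

The total outward force would then scale with whatever mass figure one adopts for the universe, which the text does not specify at this point.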

The mainstream approach is to take GR as a model for the universe, which assumes gravity is not a QFT radiation pressure force.

But if you take the observed expansion as primitive, then you get a mechanism for local GR as the consequence, without the anomalies of the mainstream model which require a cosmological constant (CC) and inflation.

Outward expansion in spacetime by Newton's 3rd law results in inward gauge boson pressure, which causes the contraction term in GR as well as gravity itself.

GR is best viewed simply as Penrose describes it:

(1) the tensor field formulation of Newton's law, R_uv = 4Pi(G/c^2)T_uv, and

(2) the contraction term which leads to all departures from Newton's law (apart from CC).

Putting the contraction term into the Newtonian R_uv = 4Pi(G/c^2)T_uv gives the Einstein field equation without the CC:

R_uv - ½Rg_uv = 8Pi(G/c^2)T_uv

Feynman explains very clearly that the contraction term can be considered physical, e.g., the Earth's radius is contracted by the amount ~(1/3)MG/c^2 = 1.5 mm.

This is like radiation pressure squeezing the earth on the subatomic level (not just the macroscopic surface of the planet), and this contraction in space also causes a related gravitational reduction in time, or gravity time-dilation.
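Feynman's 1.5 mm figure is easy to check numerically; here is a minimal sketch using standard values for G, c and the Earth's mass, applied to the ~(1/3)MG/c^2 contraction formula quoted above:

```python
# Check the quoted Earth-radius contraction, ~(1/3)MG/c^2
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_earth = 5.972e24  # Earth mass, kg

contraction = (1.0 / 3.0) * M_earth * G / c**2
print(f"Earth radius contraction: {contraction * 1000:.2f} mm")  # ~1.48 mm
```

This reproduces the ~1.5 mm figure attributed to Feynman in the text.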

The vacuum has both massive particles and radiation. The pressure from radiation exchange between mass causing Higgs field particles causes electromagnetic forces with the inverse-square law by a quantum field theory version of LeSage’s mechanism, because the radiation cannot ‘diffract’ into ‘shadows’ behind shielding particles. However, the massive particles in the vacuum are more like a gas and scatter pressure randomly in all directions, so they do ‘fill in shadows’ within a short distance, resulting in the extremely short range of the nuclear forces.

The relationship between the strength of gravity and electromagnetism comes when you analyse how the potential (voltage) adds up between capacitor plates with a vacuum dielectric, when they are aligned at random throughout space instead of being in a nice series of a circuit. You also have to understand an error in the popular interpretation of the crucial ‘displacement current’ term in Maxwell’s equation for the curl of a magnetic field (the term added to Ampere’s law ‘for mathematical consistency’): it is not the whole story.

‘We have to study the structure of the electron, and if possible, the single electron, if we want to understand physics at short distances.’ – Professor Asim O. Barut, On the Status of Hidden Variable Theories in Quantum Mechanics, Aperion, 2, 1995, pp97-8. (Quoted by Dr Thomas S. Love.)

PARTICLE MASS PREDICTIONS. The gravity mechanism implies (see analysis further on) quantized unit masses. As proved further on, the 1/alpha or ~137 factor is the electromagnetic shielding of any particle core charge by the surrounding polarised vacuum. When a mass-giving black hole (gravitationally trapped) Z-boson (this is the Higgs particle) with 91 GeV energy is outside an electron core, both its own field (it is similar to a photon, with equal positive and negative electric field) and the electron core have 137 shielding factors, and there are also smaller geometric corrections for spin loop orientation, so the electron mass is: [Z-boson mass]/(3/2 x 2.Pi x 137 x 137) ~ 0.51 MeV. If, however, the electron core has more energy and can get so close to a trapped Z-boson that both are inside and share the same overlapping polarised vacuum veil, then the geometry changes so that the 137 shielding factor operates only once, predicting the muon mass: [Z-boson mass]/(2.Pi x 137) ~ 105.7 MeV. The muon is thus an automatic consequence of a higher energy state of the electron. As Dr Thomas Love of California State University points out, although the muon doesn’t decay directly into an electron by gamma ray emission, apart from its higher mass it is identical to an electron, and the muon can decay into an electron by emitting electron and muon neutrinos. The general equation for the mass of all particles apart from the electron is [electron mass].[137].n(N+1)/2 ~ 35n(N+1) MeV. (For the electron, the extra polarised shield occurs, so this should be divided by the 137 factor.) Here the symbol n is the number of core particles like quarks sharing a common, overlapping polarised electromagnetic shield, and N is the number of Higgs or trapped Z-bosons.
Lest you think this is all ad hoc coincidence (as occurred in criticism of Dalton’s early form of the periodic table), remember that unlike Dalton we have a mechanism, and below we make additional predictions and tests for all the other observable particles in the universe, comparing the results to experimental measurements:
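As a quick arithmetic sketch of the three mass formulas just stated, taking the text's 91 GeV and 137 figures at face value (this only checks the stated ratios, not the mechanism):

```python
import math

m_Z = 91_000.0   # Z-boson mass in MeV (the 91 GeV figure above)
shield = 137.0   # the ~1/alpha polarised-vacuum shielding factor

# Electron: [Z-boson mass]/(3/2 x 2.Pi x 137 x 137)
m_electron = m_Z / (1.5 * 2 * math.pi * shield**2)
print(f"electron: {m_electron:.3f} MeV")  # ~0.514 MeV (measured: 0.511)

# Muon: [Z-boson mass]/(2.Pi x 137)
m_muon = m_Z / (2 * math.pi * shield)
print(f"muon: {m_muon:.1f} MeV")          # ~105.7 MeV (measured: 105.66)

# General formula: [electron mass].[137].n(N+1)/2 ~ 35n(N+1) MeV
def mass_mev(n, N, m_e=0.511):
    return m_e * shield * n * (N + 1) / 2

print(f"n=1, N=2: {mass_mev(1, 2):.0f} MeV")  # 105 MeV, the muon-like state
```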

‘… I do feel strongly that this [string theory] is nonsense! … I think all this superstring stuff is crazy and is in the wrong direction. … I don’t like it that they’re not calculating anything. … why are the masses of the various particles such as quarks what they are? All these numbers … have no explanations in these string theories - absolutely none! …’ – Feynman in Davies & Brown, ‘Superstrings’ 1988, at pages 194-195,

These facts on gravity above are all existing accepted orthodoxy: the Feynman diagrams are widely accepted, as is the spacetime relationship (time after the big bang decreasing with increasing observed distance), and Newton’s laws of motion, geometry and applied physics are not controversial. However, have you seen this mechanism in any scientific journals? No? But you have seen string theory, which predicts nothing testable and a whole load of unobservables (superpartners, supersymmetry at energy far beyond observations, 6/7 extra dimensions, strings of Planck size without any evidence, etc.)? Why? Why won’t they tell you the facts? The existing ‘string theory’ gravity is ‘speculative gibberish’: untestable.

Administrators of arXiv.org still won’t publish this, preferring the embarrassment of it being dismissed as a mere ‘alternative’ to the mainstream (M-)theory of strings, which can vaguely ‘predict’ anything that is actually observable by being non-specific. ArXiv.org say: ‘You should know the person that you endorse or you should see the paper that the person intends to submit. We don’t expect you to read the paper in detail, or verify that the work is correct, but you should check that the paper is appropriate for the subject area. You should not endorse the author … if the work is entirely disconnected with current [string theory] work in the area.’ Hence innovation is suppressed. ArXiv relies entirely on suppression via guilt by association or lack of association, and, as just quoted, they don’t care whether the facts are there at all! Recent improvements here are due mainly to the influence of Woit’s good weblog. I’d also like to acknowledge encouragement from fellow Electronics World contributor A. G. Callegari, who has some interesting electromagnetic data.

Peter Woit, in http://arxiv.org/abs/hep-th/0206135, put forward a conjecture: "The quantum field theory of the standard model may be understood purely in terms of the representation theory of the automorphism group of some geometric structure."

Using Lie spinors and Clifford algebras he comes up with an illustrative model on page 51, which looks as if it will do the job, but then adds the guarded comment:

"The above comments are exceedingly speculative and very far from what one needs to construct a consistent theory. They are just meant to indicate how the most basic geometry of spinors and Clifford algebras in low dimensions is rich enough to encompass the standard model and seems to be naturally reflected in the electro-weak symmetry properties of Standard Model particles."

This guarded approach needs to be contrasted to the hype surrounding string theory.

String theorists call all alternatives crackpot. Alternatives to failed mainstream ideas are not automatically wrong. Those who are censored for being before their time, or for contradicting mainstream non-tested speculation, are hardly crackpot. As a case in point, see http://cdsweb.cern.ch/search.py?recid=688763&ln=en which was peer-reviewed and published but censored off arXiv according to the author (presumably for contradicting stringy speculation). It is convenient for Motl to dismiss this as crackpot by personally-abusive name-calling, giving no reason whatsoever. Even if he gave a 'reason', that would not mean anything, since these string theorists are downright ignorant. What Motl would have to do is not just call names, or even provide a straw-man type 'reason', but actually analyse and compare alternative theories objectively to mainstream string theory. This he won't do. It is curious that nobody remembers the problems Einstein had when practically the entire physics establishment of Germany in the 1930s was coerced by fascists into calling him a crackpot. I think Pauli’s categories of "right", "wrong", and "not even wrong" are more objective than calling suggestions "crackpot".

If you live in a society where unobserved gravitons and superpartners are believed to be "evidence" that string theory unifies standard model forces and "has the remarkable property of predicting gravity" {quoted from stringy M-theory originator Edward Witten, Physics Today, Apr 96}, then your tendency to ignore it is no help. You have to point out that it is simply vacuous.

String theory lacks a specific quantum field theory vacuum, yet as Lunsford says, that doesn’t stop string theory from making a lot of vacuous "predictions".

String theory allows 10^500 or so vacua, a whole "landscape" of them, and there is no realistic hope of determining which is the right one. So it is so vague it can’t say anything useful. The word "God" has about 10^6 different religious meanings, so string theory is (10^500)/(10^6) = 10^494 times more vague than religion.

Also note that even Dr Lubos Motl has expressed concerns with the ‘landscape’ aspect of ST, while Dr Peter Woit in his 2002 paper pointed out the problem that ST doesn’t actually sort out gravity:

‘It is a striking fact that there is absolutely no evidence whatsoever for this complex and unattractive conjectural theory. There is not even a serious proposal for what the dynamics of the fundamental ‘M-theory’ is supposed to be or any reason at all to believe that its dynamics would produce a vacuum state with the desired properties. The sole argument generally given to justify this picture of the world is that perturbative string theories have a massless spin two mode and thus could provide an explanation of gravity, if one ever managed to find an underlying theory for which perturbative string theory is the perturbative expansion.’ – Quantum Field Theory and Representation Theory: A Sketch (2002), http://arxiv.org/abs/hep-th/0206135

In addition, Sir Roger Penrose analysed the problems with string theory at a technical level, concluding: ‘in addition to the dimensionality issue, the string theory approach is (so far, in almost all respects) restricted to being merely a perturbation theory.’ - The Road to Reality, 2004, page 896.

So how did string theorists dupe the world?

'In the first section the history of string theory starting from its S-matrix bootstrap predecessor up to Susskind’s recent book is critically reviewed. The aim is to understand its amazing popularity which starkly contrasts its fleeting physical content. A partial answer can be obtained from the hegemonic ideological stance which some of its defenders use to present and defend it. The second section presents many arguments showing that the main tenet of string theory which culminated in the phrase that it represents "the only game in town" is untenable. It is based on a wrong view about QFT being a mature theory which (apart from some missing details) already reached its closure. ...

'A guy with the gambling sickness loses his shirt every night in a poker game. Somebody tells him that the game is crooked, rigged to send him to the poorhouse. And he says, haggardly, I know, I know. But it's the only game in town. - Kurt Vonnegut, The Only Game in Town [13]

'This is a quotation from a short story by Kurt Vonnegut which Peter Woit recently used in one of the chapters in his forthcoming book entitled Not Even Wrong : The Failure of String Theory & the Continuing Challenge to Unify the Laws of Physics (using a famous phrase by which Wolfgang Pauli characterized ideas which either had not even the quality of being wrong in an interesting way or simply lacked the scientific criterion of being falsifiable).' - Professor Bert Schroer, arXiv:physics/0603112, p1.

'I argue that string theory cannot be a serious candidate for the Theory of Everything, not because it lacks experimental support, but because of its algebraic shallowness. I describe two classes of algebraic structures which are deeper and more general than anything seen in string theory...' - T. A. Larsson, arXiv:math-ph/0103013, p1.

'The history of science is full of beautiful ideas that turned out to be wrong. The awe for the math should not blind us. In spite of the tremendous mental power of the people working in it, in spite of the string revolutions and the excitement and the hype, years go by and the theory isn’t delivering physics. All the key problems remain wide open. The connection with reality becomes more and more remote. All physical predictions derived from the theory have been contradicted by the experiments. I don’t think that the old claim that string theory is such a successful quantum theory of gravity holds anymore. Today, if too many theoreticians do strings, there is the very concrete risk that all this tremendous mental power, the intelligence of a generation, is wasted following a beautiful but empty fantasy. There are alternatives, and these must be taken seriously.' - Carlo Rovelli, arXiv:hep-th/0310077, p20.

‘… the innovator has for enemies all those who have done well under the old conditions, and lukewarm defenders in those who may do well under the new. This coolness arises partly from fear of the opponents, who have the laws on their side, and partly from the incredulity of men, who do not readily believe in new things until they have had a long experience of them. Thus it happens that whenever those who are hostile have the opportunity to attack they do it like partisans, whilst the others defend lukewarmly…’ - http://www.constitution.org/mac/prince06.htm

‘Crimestop means the faculty of stopping short, as though by instinct, at the threshold of any dangerous thought. It includes the power of not grasping analogies, of failing to perceive logical errors, of misunderstanding the simplest arguments if they are inimical to Ingsoc, and of being bored or repelled by any train of thought which is capable of leading in a heretical direction. Crimestop, in short, means protective stupidity.’- G. Orwell, 1984, Chancellor Press, London, 1984, p225.

‘(1). The idea is nonsense. (2). Somebody thought of it before you did. (3). We believed it all the time.’ - Professor R.A. Lyttleton's summary of inexcusable censorship (quoted by Sir Fred Hoyle in ‘Home is Where the Wind Blows’ Oxford University Press, 1997, p154). Example: recent

Einstein, in his 1936 paper Physics and Reality, argued that quantum mechanics is merely a statistical means of accounting for the average behaviour of a large number of particles. In a hydrogen atom, presumably, the three dimensional wave behaviour of the electron would be caused by the interaction of the electron with the particles and radiation of the quantum mechanical vacuum or Dirac sea, which would continuously be disturbing the small-scale motion of subatomic sized particles, by analogy to the way air molecules cause the jiggling or Brownian motion of very small dust particles. Hence there is chaos on small scales due to a causal physical mechanism, the quantum foam vacuum. Because of the Poincare chaos created by the electromagnetic and other fields involved in 3+ body interactions, probability and statistics rule the small scale. Collisions of particles in the vacuum by this mechanism result in the creation of other virtual particles for a brief time until further collisions annihilate the latter particles. Random collisions of vacuum particles and unstable nuclei trigger the Poisson statistics behind exponential radioactive decay, by introducing probability. All of these phenomena are real, causal events, but like the well-known Brownian motion chaos of dust particles in air, they are not deterministic.

Love has a vast literature survey and collection of vitally informative quotations from authorities, as well as new insights from his own work in quantum mechanics and field theory. He quotes, on page 8, from Asim O. Barut's paper, On the Status of Hidden Variable Theories in Quantum Mechanics, (Aperion, 2, 1995, p97): "We have to study the structure of the electron, and if possible, the single electron, if we want to understand physics at short distances."

String theory claims to study the electron by vibrating extra dimensional strings of Planck scale, but there is not a shred of evidence for this. I'd point out that the Planck scale is meaningless since the radius of a black hole electron mass (R = 2GM/c^2) is a lot smaller than the Planck size, so why choose to speculate strings are Planck size? (Planck was only fiddling around with dimensional analysis, and falsely believed he had found the smallest possible length scale, when in fact the black hole size of an electron is a lot, lot smaller!)
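To put numbers on that size comparison, here is a sketch with standard constants (the 'black hole electron' radius is the text's R = 2GM/c^2, which is not a mainstream interpretation):

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
hbar = 1.055e-34  # reduced Planck constant, J s
m_e = 9.109e-31   # electron mass, kg

# Schwarzschild radius for the electron mass: R = 2GM/c^2
R_electron = 2 * G * m_e / c**2
# Planck length: sqrt(hbar G / c^3)
L_planck = math.sqrt(hbar * G / c**3)

print(f"black hole radius for electron mass: {R_electron:.2e} m")  # ~1.35e-57 m
print(f"Planck length: {L_planck:.2e} m")                          # ~1.62e-35 m
print(f"ratio: {R_electron / L_planck:.1e}")                       # ~8.4e-23
```

So the text's claimed electron scale is indeed about 22 orders of magnitude below the Planck length.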

On page 9, Love points out that: "The problem is that quantum mechanics is mathematically inconsistent...", and compares the two versions of the Schroedinger equation on page 10. The time independent and time-dependent versions disagree, and this disagreement nullifies the principle of superposition and consequently the concept of wavefunction collapse being precipitated by the act of making a measurement. The failure of superposition discredits the usual interpretation of the EPR experiment as proving quantum entanglement. To be sure, making a measurement always interferes with the system being measured (by recoil from firing light photons or other probes at the object), but that is not justification for the metaphysical belief in wavefunction collapse.

Page 40: "There is clearly a relationship between the mass of an elementary particle and the interactions in which it participates."

On page 40 Love finds that "the present work implies that the curvature of the space-time is caused by the rotation of something..." We know the photon has spin, so can we create a spin foam vacuum from radiation (photons)? Smolin is interested in this.

Page 41: Muon as a heavy electron. Love says that "Barut argues that the muon cannot be an excited electron since we do not observe the decay muon -> electron + gamma ray." Love argues that in the equation muon -> electron + electron neutrino + muon neutrino, the neutrino pair "is essentially a photon." It does seem likely from experimental data on the properties of the electron and muon that the muon is an electron with extra energy which allows it to associate strongly with the Higgs field.

Traditionally the Higgs field is introduced into electroweak theory partly to give the neutral Z-boson (91 GeV) a limited range at low energy, compared to the infinite range of photons. Now let's look at the mainstream heuristic picture of the electron in the Dirac sea of QFT, which is OK as far as it goes, but doesn't go far enough:

Most of the charge is screened out by polarised charges in the vacuum around the electron core: '... we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum ... amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies).' - arxiv hep-th/0510040, p 71.

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

Koltick found a 7% increase in the strength of Coulomb's/Gauss' force field law when colliding electrons at an energy of 80 GeV or so. The coupling constant for electromagnetism is 1/137 at low energies but was found to be 1/128.5 at 80 GeV or so. This rise is due to the polarised vacuum being broken through. We have to understand Maxwell's equations in terms of the gauge boson exchange process for causing forces, and the polarised vacuum shielding process for unifying forces into a unified force at very high energy. The minimal SUSY Standard Model shows the electromagnetic force coupling increasing from alpha of 1/137 to alpha of 1/25 at 10^16 GeV, and the strong force falling from 1 to 1/25 at the same energy, hence unification. The reason why the unification superforce strength is not 137 times electromagnetism but only 137/25 or about 5.5 times electromagnetism, is heuristically explicable in terms of potential energy for the various force gauge bosons. If you have one force (electromagnetism) increase, more energy is carried by virtual photons at the expense of something else, say gluons. So the strong nuclear force will lose strength as the electromagnetic force gains strength. Thus simple conservation of energy will explain and allow predictions to be made on the correct variation of force strengths mediated by different gauge bosons. When you do this properly, you may learn that SUSY just isn't needed or is plain wrong, or else you will get a better grip on what is real and make some testable predictions as a result.
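Two arithmetic sketches of the numbers in this paragraph: the quoted ~7% rise is just the ratio of the two measured couplings, and the standard one-loop QED running formula is included purely for illustration (with an electron loop only, which famously gives just part of the measured shift, since all charged fermion loops contribute):

```python
import math

alpha_low = 1 / 137.036  # fine-structure constant at low energy
alpha_80 = 1 / 128.5     # value quoted from the ~80 GeV collision data

# The "7% increase" is simply the ratio of the two couplings:
print(f"increase: {100 * (alpha_80 / alpha_low - 1):.1f}%")  # ~6.6%

# One-loop QED running, electron loop only (undershoots the measured shift):
m_e = 0.511e-3  # electron mass, GeV
Q = 80.0        # probe energy, GeV
alpha_run = alpha_low / (1 - (alpha_low / (3 * math.pi)) * math.log(Q**2 / m_e**2))
print(f"electron-loop-only 1/alpha at 80 GeV: {1 / alpha_run:.1f}")  # ~134.5
```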

It seems that the traditional role of the Higgs field in giving mass to the 91 GeV Z-boson to limit its range (and to give mass to Standard Model elementary particles) may be back-to-front. If Z-bosons can be trapped by gravity into loops, like the model for the electron, they can numerically account for mass. Think of the electron as a bare core with 137e, surrounded by a shell of polarised vacuum which reduces the core charge to e. A Z-boson, while electrically neutral as a whole, is probably an oscillating electromagnetic field like a photon, being half positive and half negative electric field. So if as a loop it is aligned side-on it can be associated with a charge, providing mass. The point of this exercise is to account for empirical recently observed coincidences of masses:

Neutral Z-boson: 91 GeV

Muon mass: 91,000/ (twice Pi times 137 shielding factor) = 105.7 MeV
=> Muon is electron core associated with a Z-boson which has a polarised shield around its own core.

Electron mass: Muon mass/(1.5 x 137) = 0.511 MeV
=> Electron is like a muon, but there are two polarised shields weakening the association (one polarised shield around electron core and one around Z-boson core).

So the Z-boson, muon, and electron masses are physically related by just multiplying by 1/137 factors, depending on how many polarised shields are involved (i.e., on whether the cores of the electron and Z-boson are close enough for the polarised veils of the Dirac sea to overlap, or not). The 2Pi shielding factor above is explained as follows: the spin of a fermion is half integer, so it rotates 720 degrees (like a Mobius strip with a half turn), so the average exposed side-on loop field area is half what you would have if it had spin of 1. (The twist in a Mobius strip loop reduces the average area you see side-on; it is a simple physical explanation.) The Pi factor comes from the fact that when you look at any charged loop side-on, you are subject to a field intensity Pi times less than if you look at the field from the loop perpendicularly.

The 1.5 factor arises as follows. The mass of any individually observable elementary particle (quarks aren't separable, so I'm talking of leptons, mesons and baryons) is heuristically given by:

M = {electron mass}.{137 polarised dielectric correction factor; see below for proof that this is the shielding factor}.n(1/2 + N/2).

In this simple formula, the 137 correction factor is not needed for the electron mass, so for an electron, M = {electron mass}.n(1/2 + N/2) = {electron mass}.

Here n stands for the number of charged core particles like quarks (n = 1 for leptons, n = 2 for mesons, n = 3 for baryons), and N is the number of vacuum particles (Z-bosons) associated with the charge. I've given a similar argument for the causal mechanism of Schwinger's first corrective radiation term for the magnetic moment of the electron, 1 + alpha/(2.pi) on my page. The heuristic explanation for the (1/2 + N/2) factor could be the addition of spins.

The problem is that whatever the truth is, whether string theory or LQG, some kind of connection of these numbers with reality is needed. You have three leptons and three families of quarks. The quark masses are not "real" in the sense that you can never in principle observe a free quark (the energy needed to break a pair or triad of quarks apart is enough to form new pairs of quarks). So the real problem is explaining the observable facts relating to masses: the three lepton masses (electron, muon, tauon, respectively about 0.511, 105.66 and 1784.2 MeV, or about 1/137.0..., 1.5 and 25.5 units respectively if you take 1/137.0... unit as the electron mass), and a large amount of hadron data on meson (2 quarks each) and baryon (3 quarks each) masses. When you multiply the masses of the hadrons by alpha (1/137.0...) and divide by the electron mass, you get, at least for the long-lived hadrons (half-lives above 10^-23 second), pretty well quantized (near-integer) masses:

Of course the exceptions are the nucleons, neutrons and protons, which both have masses on this scale of around 13.4. It is a clue to why they are relatively stable compared to all the other hadrons, which all have half-lives of a tiny fraction of a second (after the neutron, the next most stable hadron found in nature is of course the pion, which has a half-life of 2.6 shakes, where 1 shake = 10^-8 second).

All these particle masses are produced by the semi-empirical formula {M ~ 35n(N + 1) MeV} above to within 2% error, which is strong statistical evidence for quantization (similar to, if not better than, Dalton's evidence for periodicity of the elements in the early nineteenth century; note that Dalton was called a crackpot by many):

As you can see from the "periodic table" based on masses above, there are a lot of blanks. Some if not all of these are doubtless filled by the shorter-lived particles.

What needs to be done next is to try to correlate the types of quarks with the apparent integer number of vacuum particles N they associate with, in each meson and baryon. I seem to recall from a course in nuclear physics that the numbers 8 and 50 are "magic numbers" in nuclear physics, and may explain the nucleons having N = 8 and the Tauon having N = 50. This is probably the "selection principle" needed to go with the formula to identify predictions of masses of relatively stable particles. (As you comment, there is no real difference between nuclear physics and particle physics.) I know Barut made some effort to empirically correlate lepton masses in his paper in PRL, v. 42 (1979), p. 1251, and Feynman was keener for people to find new ways to calculate data than to play with string theory:
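
The formula can be checked numerically. This sketch assumes the assignments suggested in the text: n = 1, N = 2 for the muon (which gives the 1.5 units quoted above), n = 3, N = 8 for the nucleons, and n = 1, N = 50 for the tauon. Note that M = m_e x 137 x n(1/2 + N/2) reduces to roughly 35n(N + 1) MeV, since 0.511 x 137.036/2 is about 35.0:

```python
m_e = 0.511            # electron mass, MeV
alpha_inv = 137.036

# M = m_e * 137 * n * (1/2 + N/2)  =>  M ~ 35 n (N + 1) MeV
def predicted_mass(n, N):
    return m_e * alpha_inv * n * (0.5 + N / 2.0)

cases = [
    ("muon",    1, 2,  105.66),   # lepton, n = 1
    ("nucleon", 3, 8,  939.0),    # baryon, n = 3; N = 8 from the magic-number guess
    ("tauon",   1, 50, 1784.2),   # lepton, n = 1; N = 50 from the text
]
for name, n, N, measured in cases:
    pred = predicted_mass(n, N)
    print(f"{name}: predicted {pred:.0f} MeV, measured {measured} MeV "
          f"({100 * (pred - measured) / measured:+.1f}%)")
```

All three come out within 1% of the measured values, consistent with the 2% claim above.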

‘… I do feel strongly that this [superstring theory stuff] is nonsense! … I think all this superstring stuff is crazy and is in the wrong direction. … I don’t like it that they’re not calculating anything. … why are the masses of the various particles such as quarks what they are? All these numbers … have no explanations in these string theories - absolutely none! …’ - R.P. Feynman, quoted in Davies & Brown, ‘Superstrings’ 1988, at pages 194-195 (quotation provided by Tony Smith). The semi-empirical formula is not entirely speculative, as the shielding factor 137 can be justified as you may have seen on my pages:

Heisenberg's uncertainty principle says pd = h/(2.Pi), where p is the uncertainty in momentum and d is the uncertainty in distance. This comes from his imaginary gamma-ray microscope, and is usually written as a minimum (instead of with "=" as above), since there will be other sources of uncertainty in the measurement process. For light-wave momentum p = mc, pd = (mc)(ct) = Et, where E is the uncertainty in energy (E = mc^2) and t is the uncertainty in time. Hence Et = h/(2.Pi), so t = h/(2.Pi.E), and since t = d/c, d = hc/(2.Pi.E). This result is used to show that an 80 GeV W or Z gauge boson will have a range of order 10^-17 m. So it's OK. Now, E = Fd implies d = hc/(2.Pi.E) = hc/(2.Pi.Fd), hence F = hc/(2.Pi.d^2). This force is 137.036 times higher than Coulomb's law for unit fundamental charges. Notice that in the last sentence I've suddenly gone from thinking of d as an uncertainty in distance to thinking of it as the actual distance between two charges; but the gauge boson has to travel that distance to cause the force anyway.
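
The two numerical claims in that paragraph, the gauge-boson range d = hc/(2.Pi.E) and the factor 137.036 over Coulomb's law, can be checked in a few lines. This is a sketch with standard SI constants; note the range for an 80 GeV boson actually comes out about 2.5 x 10^-18 m, a little under the round 10^-17 m figure quoted:

```python
import math

hbar = 1.0546e-34      # J s (h / 2 Pi)
c    = 2.998e8         # m/s
e    = 1.602e-19       # C
eps0 = 8.854e-12       # F/m

E = 80e9 * e                    # 80 GeV in joules
d = hbar * c / E                # range d = (h/2Pi)c/E, from Et = h/2Pi with t = d/c
F_heis = hbar * c / d**2        # F = hc/(2 Pi d^2), the text's E = Fd step
F_coul = e**2 / (4 * math.pi * eps0 * d**2)   # Coulomb force between unit charges

print(f"range d       = {d:.2e} m")
print(f"F_heis/F_coul = {F_heis / F_coul:.1f}")
```

The force ratio is hbar*c*4*Pi*eps0/e^2 = 1/alpha, about 137, independent of d, since both forces fall as 1/d^2.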

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303. Note: statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum is full of gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum.

Clearly what's physically happening is that the true force is 137.036 times Coulomb's law, so the real charge is 137.036e. All that the detailed calculations of the Standard Model are really modelling are the vacuum processes for different types of virtual particles and gauge bosons. The whole mainstream way of thinking about the Standard Model is related to energy. What is really happening is that at higher energies you knock particles together harder, so their protective shield of polarised vacuum particles gets partially breached, and you can experience a stronger force mediated by different particles! This is reduced by the correction factor 1/137.036 because most of the charge is screened out by polarised charges in the vacuum around the electron core.

The problem is that people are used to looking to abstruse theory due to the success of QFT in some areas, and looking at the data is out of fashion. If you look at the history of chemistry, the particle masses of atoms were known, and it took school teachers like Dalton and a Russian to work out periodicity, because the bigwigs were obsessed with vortex-atom maths, the 'string theory' of that age. Eventually the obscure school teachers won out over the mathematicians, because the vortex atom (or string theory equivalent) did nothing, while empirical analysis did. It was eventually explained theoretically!

It seems that there are two distinct mechanisms for forces to be propagated via quantum field theory. The vacuum propagates long ranges forces (electromagnetism, gravity) by radiation exchange as discussed in earlier papers kindly hosted by Walter Babin, while short-range forces (strong and weak nuclear interactions) are due to the pressure of the spin foam vacuum. The vacuum is below viewed by analogy to an ideal gas in which there is a flux of shadowed radiation and also dispersed particle-caused pressure.

The radiation has an infinite range and its intensity decreases from geometric divergence. The material pressure of the spin foam vacuum is like an ideal gas, with a small mean-free-path, and produces an attractive force with a very short range (like air pressure pushing a suction plunger against a surface, if the gap is too small to allow air to fill the gap). The probabilistic nature of quantum mechanics is then due to the random impacts from virtual particles in the vacuum on a small scale, which statistically average out on a large scale.

There is strong evidence showing Maxwell's light photon theory is not only drivel, but can be corrected by modern data from electromagnetism. First consider what electricity is. If you charge up an x metre long transmission line to v volts, energy enters at the speed of light. When you discharge it, you (contrary to what you may expect) get a light-speed pulse out of v/2 volts with a duration of 2x/c seconds, which of course implies a pulse 2x metres long. Nobody has ever proposed a mechanism whereby energy travelling at light speed can magically stop when a transmission line charges up, and magically restart when it is allowed to discharge.
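
The v/2, 2x/c discharge result quoted above is easy to put numbers on. A minimal sketch, assuming a vacuum-dielectric line discharged into a matched load so the signal speed is c:

```python
c = 3.0e8   # signal speed, m/s (vacuum-dielectric line assumed)

def discharge_pulse(length_m, charge_volts):
    """Discharging a line of given length charged to V gives a V/2 step
    lasting 2*length/c (the round-trip transit time of the line)."""
    return charge_volts / 2.0, 2.0 * length_m / c

v_out, t_out = discharge_pulse(30.0, 10.0)   # 30 m line charged to 10 V
print(f"output: {v_out} V for {t_out * 1e9:.0f} ns")
```

For example, a 30 m line charged to 10 V gives a 5 V pulse lasting 200 ns, twice the 100 ns single-transit time of the line.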

Static electrons are therefore to be viewed as trapped electromagnetic field energy. Because there is no variation in voltage in a static charged conductor, there is no electric drift current, and no resistance or net magnetic field from current, yet energy is still going at light speed.

Because we know a lot about the electron, namely its electric charge in interactions at different energies, its spin, and its magnetic dipole, we can use Heaviside's model of energy current to obtain a model for an electron: it's just Heaviside-Poynting energy current trapped in a loop by the only force that will do that, gravity. I discussed this in ten pages of articles in Electronics World, August 2002 and April 2003, which are both now cited on Google Scholar (despite the abuse from string theorists). This tells us the loop size is black-hole sized. This in turn allows a mechanism for LeSage gravity to be tested (although the calculations of the mechanism can also be done in another way that doesn't depend on an assumed black-hole-sized shield area for a fundamental particle). Maxwell had no idea that electricity is related to light in speed, or he would probably have grasped that electrons are spinning at light speed:

James Clerk Maxwell, Treatise on Electricity and Magnetism, 3rd ed., Article 574:- "... there is, as yet, no experimental evidence to shew whether the electric current... velocity is great or small as measured in feet per second."

James Clerk Maxwell, Treatise on Electricity and Magnetism, 3rd ed., Article 769:- "... we may define the ratio of the electric units to be a velocity... this velocity [of light, because light was the only thing Maxwell then knew of which had a similar speed, due to his admitted ignorance of the speed of electricity ! ] is about 300,000 kilometres per second."

So Maxwell was just guessing that he was modelling light, because he didn't guess what Heaviside knew later on (1875): that electricity is suspiciously similar to what Maxwell was trying to model as "light".

More important, a photon of over 2 electron rest masses in energy interacts with heavy nuclei (of high atomic number) by pair-production. Hence a 1.022 MeV gamma ray with spin 1 can be converted into a 0.511 MeV electron of spin 1/2 and a 0.511 MeV positron of spin 1/2. Since a classical light ray is a variation in electromagnetic field, usually drawn as being half negative electric field and half positive, the direct causal model of pair production is the literal splitting or fission of a gamma ray by the curvature of spacetime in the strong field near an atomic nucleus. The two fragments gain potential energy from the field and become trapped by gravity. The wavelength of a gamma ray of >1 MeV is very small. (It's a tragedy that pair-production was only discovered in 1932, well after the Bohring revolution, not in 1922 or before.)
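
The threshold arithmetic above can be made concrete. A sketch with standard constants; the wavelength here is the photon wavelength lambda = hc/E at the 1.022 MeV pair-production threshold:

```python
h   = 6.626e-34    # Planck constant, J s
c   = 2.998e8      # m/s
e   = 1.602e-19    # C (also J per eV)
m_e_MeV = 0.511    # electron rest-mass energy, MeV

E_threshold = 2 * m_e_MeV                        # MeV: one electron plus one positron
wavelength = h * c / (E_threshold * 1e6 * e)     # metres

print(f"threshold = {E_threshold} MeV, wavelength = {wavelength:.2e} m")
```

The roughly 1.2 x 10^-12 m wavelength is indeed very small, though still about a thousand times the femtometre scale of the nucleus whose field does the splitting.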

Additional key evidence linking these facts directly to the Standard Model is that particles in the Standard Model don't have mass. In other words, elementary particles are like photons, they have real energy but are mass-less. The mass in the Standard Model is supplied by a mechanism, the Higgs field. This model is compatible with the Standard Model. Furthermore, predictions of particle masses are possible, as discussed above.

Page 49: Love gives strong arguments that forces arise from the exchange of real particles. Clearly from my position all attractive forces in the universe are due to recoil from shielded pressure. Two nuclear particles stick together in the nucleus because they are close enough to partly shield each other from the vacuum particles. If they are far apart, the vacuum particles completely fill the gap between them, killing the short-range forces completely. Gravity and electromagnetism are different in that the vector bosons don't interact or scatter off one another, but just travel in straight lines. Hence they simply cannot disperse into LeSage "shadows" and cancel out, which is why they only fall by the inverse-square law, unlike material-carried short-range nuclear forces.

P. 51: Love quotes a letter from Einstein to Schrodinger written in May 1928; 'The Heisenberg-Bohr tranquilizing philosophy - or religion? - is so delicately contrived that, for the time being, it provided a gentle pillow for the true believer from which he cannot easily be aroused. So let him lie there.'

P. 52: "Bohr and his followers tried to cut off free enquiry and say they had discovered ultimate truth - at that point their efforts stopped being science and became a revealed religion with Bohr as its prophet." Very good. Note the origin of Bohr's paranoid religion is Maxwell's classical equations (which say that a centripetally accelerated charge in an atom radiates continuously, so the charge spirals into the nucleus, a problem which Bohr was unable to resolve when Rutherford wrote to him about it in 1915 or so). Being unable to answer such simple questions, Bohr simply resorted to inventing a religion to make the questions a heresy. (He also wanted his name in lights for all time.)

P. 55: excellent quotation from Hinton! But note that the vortex theory of the atom was never applied to electrons; it became heresy when the discovery of radioactivity "disproved" it.

Although the application of GR to the big bang via 'cosmological constant' fiddles has been a repeated failure of predictions for decades as new data arises, the basic observed Hubble law of big bang expansion, nuclear reaction rates, etc., are OK. So only part of GR is found wanting!

P. 72: "In order to quantize charge, Dirac had to postulate the existence of magnetic monopoles." Love points out that magnetic monopoles have never been found in nature. Heaviside, not Maxwell, first wrote the equation div.B = 0, which is logical and only permits magnetic dipoles! Hence it is more scientific to search for UFOs than magnetic monopoles.

P. 93: interesting that Love has the source for the origin of the crackpot claim that radioactive events have no cause as Gurney and Condon 1929. I will get hold of that reference to examine in detail whether a satire can be made of their argument. But I suppose Schrodinger did that in 1935, with his cat paradox?

P. 94: Particles and radiation in the vacuum create the random triggers for radioactive decays: some kind of radiation, or vacuum particles, triggers decays statistically. Love gives arguments for neutrinos and their antiparticles being involved in triggering radioactivity, but that would seem to me to account only for long half-lives, where there is a reasonable chance of an interaction with a neutrino, and not for short half-lives, where the neutrino/antineutrino flux in space is too small and other vacuum particles are more likely to trigger decays (vector bosons, or other particles in the quantum foam vacuum). The more stable a nuclide is, the less likely it is that an impact will trigger a decay, but due to chaotic collisions there is always some risk. I agree with Love that quantum tunnelling is not metaphysical (p. 95), but due to real vacuum interactions.

The problem is that to get a causal mechanism for radioactive decay triggering taken seriously, some statistical calculations and hopefully predictions are needed, and that before you do that you might want to understand the masses of elementary particles and how the exact mass affects the half life of the particle. Probably it is a resonance problem. I know the Standard Model does predict a lot of half lives, but I've only studied radioactivity in nuclear physics so far, not particle physics in depth.

P. 99: "It is interesting ... when a philosopher ... attacked quantum field theory, the response was immediate and vicious. But when major figures from within physics, like Dirac and Schwinger spoke, the critics were silent." Yes, and they were also polite to Einstein when he spoke, but called him an old fool behind his back.

P. 106: O'Hara quotation "Bandwagons have bad steering, poor brakes, and often no certificate of roadworthiness."

The vector boson radiation of QFT works by pushing things together. 'Caloric', the fluid theory of heat, eventually gave way to two separate mechanisms, kinetic theory and radiation. This was after Prevost in 1792 suggested that constant temperature is a dynamic system, with emission in equilibrium with the reception of energy. The electromagnetic field energy exchange process is not treated with a causal mechanism in current QFT, which is the cause of all the problems. All attractive forces are things shielding one another and being pushed together by the surrounding radiation pressing inward where not shadowed. Repulsion is due to the fact that in the mutual exchange of energy between two objects which are not moving apart, the vector bosons are not red-shifted, whereas those pressing in on the far sides are red-shifted by the big bang, as they come from immense distances. I've a causal mechanism which works for each fundamental force, although it is still sketchy in places.

P. 119: "In the Standard Model, the electron and the neutrino interact via the weak force by interchanging a Z. But think about the masses ... Z is about 91 GeV". I think this argument of Love's is very exciting because it justifies the model of masses above: the mass-causing Higgs field is composed of Z particles in the vacuum. It's real.

P. 121: Matter is trapped electromagnetic field energy: this is also justified by the empirical electromagnetic data I've been writing about for a decade in EW.

Spin-spin interaction (the Pauli exclusion force?) is clearly caused by some kind of magnetic anti-alignment or pairing. When you drop two magnets into a box, they naturally pair up not end to end, but side by side, with the north poles pointing in opposite directions. This is the most stable situation. The same happens to electrons in orbits: they are magnets, so they generally pair up with opposite orientation to their neighbour. Hence Pauli's law for paired electrons.

'When one looks back over the development of physics, one sees that it can be pictured as a rather steady development with many small steps and superimposed on that a number of big jumps.... These big jumps usually consist in overcoming a prejudice.'

RELATIONSHIP OF CAUSAL ELECTRIC FORCE FIELD FORCE MECHANISM TO GRAVITY MECHANISM AND MAGNETIC FORCE FIELD

It seems that the electromagnetic force-carrying radiation is also the cause
of gravity, via particles which cause the mass of charged elementary
particles.

The vacuum particles ("higgs particle") that give rise to all mass in the
Standard Model haven't been observed officially yet, and the official
prediction of the energy of the particle is very vague, similar to the Top
Quark mass, 172 GeV. However, my argument is that the mass of the uncharged
Z-boson, 91 GeV, determines the masses of all the other particles. It
works. The charged cores of quarks, electrons, etc., couple up (strongly or
weakly) with a discrete number of massive trapped Z-bosons which exist in
the vacuum. This mechanism also explains QED, such as the magnetic moment
of the electron 1 + alpha/(2Pi) magnetons.

Literally, the electromagnetic force-causing radiation (vector bosons)
interact with charged particle cores to produce EM forces, and with the
associated "higgs bosons" (gravitationally self-trapped Z-bosons) to produce
the correct inertial masses and gravity for each particle.

The lepton and hadron masses are quantized, and I've built a model,
discussed there and on my blog, which takes this model and uses it to
predict other things. I think this is what science is all about. The
mainstream (string theory, CC cosmology) is too far out, and unable to make
any useful predictions.

As for the continuum: the way to understand it is through correcting
Maxwell's classical theory of the vacuum. Quantum field theory accounts for
electrostatic (Coulomb) forces vaguely with a radiation-exchange mechanism.
In the LeSage mechanism, the radiation causing Coulomb's law causes all
forces by pushing. I worked out the mechanism by which electric forces
operate in the April 2003 EW article; attraction occurs by mutual shielding
as with gravity, but is stronger due to the sum of the charges in the
universe. If you have a series of parallel capacitor plates with different
charges, each separated by a vacuum dielectric, the total (net) voltage
needs to take into account the orientation of the plates.

The vector sum is the same as a statistical random walk (drunkard's walk):
the total is equal to the average voltage between a pair of plates,
multiplied by the square root of the total number (this allows for the
angular geometry dispersion, not distance, because the universe is
spherically symmetrical around us - thank God for keeping the calculation
very simple! - and there is as much dispersion outward in the random walk as
there is inward, so the effects of inverse square law dispersions and
concentrations with distance both exactly cancel out).

Gravity is the force that comes from a straight-line sum, which is the only
other option than the random walk. In a straight line, the sum of charges
is zero along any vector across the universe, if that line contains an
average equal number of positive and negative charges. However, it is
equally likely that the straight radial line drawn at random across the
universe contains an odd number of charges, in which case the average charge
is 2 units (2 units is equal to the difference between 1 negative charge and
1 positive charge). Therefore the straight line sum has two options only,
each with 50% probability: even number of charges and hence zero net result,
and odd number of charges which gives 2 unit charges as the net sum. The
mean for the two options is simply (0 + 2) /2 = 1 unit. Hence
electromagnetism is the square root of the number of charges in the
universe, times the weak option force (gravity).
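
The drunkard's-walk claim above, that the net effect of N randomly oriented charge contributions scales as the square root of N times the individual contribution, is just the standard random-walk result, and can be checked by simulation (a sketch; N and the trial count are arbitrary choices):

```python
import random, math

random.seed(0)
N = 2_500        # number of randomly oriented unit contributions per sum
trials = 1_000   # independent repeats of the whole sum

# RMS of the sum of N random +/-1 contributions should approach sqrt(N)
total_sq = 0.0
for _ in range(trials):
    s = sum(random.choice((-1, 1)) for _ in range(N))
    total_sq += s * s
rms = math.sqrt(total_sq / trials)

print(f"RMS sum = {rms:.1f}, sqrt(N) = {math.sqrt(N):.1f}")
```

The straight-line sum, by contrast, averages (0 + 2)/2 = 1 unit as argued above, which is why the text takes electromagnetism to be sqrt(N) times stronger than the weak-option force, gravity.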

Thus, electromagnetism and gravity are different ways that charges add up.
Electric attraction is as stated, simply a mutual blocking of EM "vector
boson" radiation by charges, like LeSage gravity. Electric repulsion is an
exchange of radiation. The charges recoil apart because the underlying
physics in an expanding universe (with "red-shifted" or at least reduced
energy radiation pressing in from the outside, due to receding matter in the
surrounding universe) means their exchange of radiation results in recoil
away from one another (imagine two people firing guns at each other, for a
simple analogy; they would recoil apart).

Magnetic force is apparently, as Maxwell suggested, due to the spins of the
vacuum particles, which line up.

There is no such thing in the world as a charge with a mass.

1. Mass

No charges have masses - the masses come from the vacuum (Higgs field or whatever explanation you prefer). This is a fact according to the well-tested Standard Model. The mass you measure for the electron varies with its velocity, implying radiation resistance. Special relativity is just an approximation; general relativity is entirely different and more accurate, and allows absolute motion (i.e., in general relativity the velocity of light depends on the absolute coordinate system, because it is bent by the spacetime fabric, but special relativity ignores this). Quantum field theory shows that the vacuum particles look different to the observer depending on the state of motion of the observer. This actually provides the mechanism for the contraction and mass increase seen in the Michelson-Morley experiment (contraction) and in particle accelerators (mass increase). In order to explain the actual variation in mass, you need a vacuum spacetime fabric theory. Mass arises due to the work needed to contract a charge in the direction of motion as you accelerate it. It's physically squashed by the radiation resistance of the vacuum, and that's where the energy resides that is needed to accelerate it. Its mass increases because it gains extra momentum from this added electromagnetic energy, which makes the charge couple more strongly to the vacuum (Higgs or whatever) field particles, which provide inertia and gravity, hence mass.
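
Whatever mechanism lies behind it, the velocity dependence of the measured mass referred to above follows the standard factor 1/sqrt(1 - v^2/c^2); a quick sketch of the numbers:

```python
import math

m_e = 9.109e-31   # electron rest mass, kg

def observed_mass(beta):
    """Relativistic mass m = m0 / sqrt(1 - v^2/c^2), with beta = v/c."""
    return m_e / math.sqrt(1.0 - beta**2)

for beta in (0.1, 0.9, 0.99, 0.999):
    print(f"v = {beta}c : m/m0 = {observed_mass(beta) / m_e:.2f}")
```

At accelerator speeds of 0.99c the measured mass is already about seven times the rest mass, which is the effect the vacuum-resistance argument above has to account for.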

2. Charge

Just as charges don't directly have mass (the mass arises from vacuum interactions), there is no such thing as an electric charge (as in Coulomb's law) by itself. Electric charge always exists with light-speed spin and with a dipole magnetic field. All electrons have spin and a magnetic moment. In addition, Coulomb's law is just an approximation. The electron core has an electric field strength about 137 times that implied by Coulomb's law. The nature of an electron is a transverse electromagnetic (Heaviside-Poynting) energy current trapped in a loop.

Science is not a belief system. The Maxwell div.E equation (or rather the Gauss electric field law, or Coulomb electric force "law") is wrong because electric charge increases in high-energy collisions. It is up by something like 7% in 90 GeV collisions between electrons. The reason for this is that the polarised charges of the vacuum shield over 99% of the core charge of the electron. Again, I've gone into this at
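
The roughly 7% figure quoted above is consistent with the commonly quoted running of the coupling from 1/alpha = 137.036 at low energy to about 1/128 near the Z mass (the 1/128 value is the usual textbook number, taken here as an assumption):

```python
alpha_inv_low = 137.036   # 1/alpha at low energy
alpha_inv_mZ  = 128.0     # ~1/alpha near the Z mass (commonly quoted value, assumed)

# Fractional increase in the effective coupling between the two scales
increase = alpha_inv_low / alpha_inv_mZ - 1.0
print(f"effective coupling up by {100 * increase:.0f}% at ~90 GeV")
```

That ratio is about 1.07, i.e. a 7% rise, matching the figure cited for 90 GeV electron collisions.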

sirius184@hotmail.com has written a paper, http://www.wbabin.net/science/tombe.pdf, describing an intricate magnetic field mechanism which is not very interesting in that it doesn't make any predictions, and moreover it is probably wrong, yet it does contain some interesting and important insights and mathematics.

It's clear from quantum electrodynamics that his basic point is correct:
the vacuum is full of charges. Normally people reject simple models of
rotating particles in the vacuum using some arm-waving principle like the
principle of superposition, whereby the spin state of any particle is
supposed to be indeterminate until it is measured. The measurement is
supposed to collapse the wavefunction. However, Dr Thomas Love of
California State University sent me a paper showing that the principle of
superposition is just a statement of mathematical ignorance because there
are two forms of Schroedinger's equation (time dependent and time
independent), and when a measurement is taken you are basically switching
mathematical models. So superposition is just a mathematical model problem
and is not inherent in the underlying physics. So the particles in the
vacuum can have a real spin and motion even when not directly observed.

So I agree that some kind of vacuum dynamics are crucial to understanding
the physics behind Maxwell's equations. I agree that in a charged capacitor
the vacuum charges between the plates will be affected by the electric
field.

It seems to me that when a capacitor charges up, the time-variation in the
electric current flowing along the capacitor plates causes the emission of
electromagnetic energy sideways (like radio waves emitted from an aerial in
which the current applied varies with time). Therefore, the energy of
'displacement current' can be considered electromagnetic radiation similar
in principle to radio.

Maxwell's model says the changing electric field in a capacitor plate causes
displacement current in the vacuum that induces a charge on the other
plate.

In fact, the changing electric field in one plate causes a changing current
in that plate, which implies charge acceleration, which in turn causes
electromagnetic energy transmission to the other plate.

So I think Maxwell's equations cover up several intermediate physical
mechanisms. Any polarisation of the vacuum may be a result of the energy
transmission, not a cause of it.

I am very interested in your suggestion that you get a pattern of rotating
charges in a magnetic field, and in the issues of gyroscopic inertia.

I'll read your paper carefully before replying in more detail. Gyroscopes
are good toys to play with. Because they resist changes to their plane of
spin, if you let one fall while holding the axis in a pivoted way so that it
is forced to tilt in order to fall, it will appear to lose weight
temporarily. In fact what happens is that gravitational potential energy is
used up doing work changing the plane of the gyroscope's spin. Part of the
gravitational potential energy that the gyroscope gains as it falls is being
used simply to change the plane of the spin. You need to do work to change
the plane of a spinning body, because the circular motion of the mass
implies a centripetal acceleration.

So some of the gravitational work energy (E = Fs = mgs) is used up in simply
changing the plane of the spin of the gyroscope, rather than causing the
whole thing to accelerate downward at 9.8 ms^-2. Gravity can often appear
to change because of energy conservation effects: light passing the sun is
deflected by twice the amount you'd expect from Newton's law (for slow
moving objects), because light can't speed up (unlike a slow moving object).
Because half the energy gained by a bullet passing the sun would be used
increasing the speed of the bullet and half would be used deflecting the
direction, since light cannot speed up, the entire gravitational potential
energy gained goes into deflection (hence twice the deflection implied by
Newton's law).
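
The factor-of-two deflection claim can be put in numbers. A sketch with standard solar values: the general relativity deflection of a grazing light ray is 4GM/(c^2 R), twice the Newtonian 2GM/(c^2 R):

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30     # solar mass, kg
c = 2.998e8      # speed of light, m/s
R = 6.957e8      # solar radius, m (ray grazing the limb)

newtonian = 2 * G * M / (c**2 * R)   # radians, slow-particle result applied to light
einstein  = 2 * newtonian            # GR: all the gained energy goes into deflection

to_arcsec = math.degrees(1.0) * 3600
print(f"Newtonian: {newtonian * to_arcsec:.2f} arcsec")
print(f"GR:        {einstein * to_arcsec:.2f} arcsec")
```

The GR value comes out at about 1.75 arcsec, the figure confirmed by the 1919 eclipse measurements, against roughly 0.88 arcsec for the Newtonian half.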

"You must remember though that Kirchhoff derived the EM wave equation using the exact same maths in 1857."

I think the maths is botched because it doesn't correspond to any physics.
Real light doesn't behave like Maxwell's light. You have to remember that
there's radiation exchange between all the charges all the time. If I have
two atoms separated by 1 metre, the charges are going to be exchanging
energy not just between nearby charges (within each atom) but with the
charges in the other atom. There is no mechanism to prevent this. The
vector bosons causing forces take all conceivable routes as Feynman showed
in the path integrals approach to quantum field theory, which is now
generally recognised as the easiest to deal with.

It seems that the electromagnetic force-carrying radiation is also the cause
of gravity, via particles which cause the mass of charged elementary
particles.

The vacuum particles ("higgs particle") that give rise to all mass in the
Standard Model haven't been observed officially yet, and the official
prediction of the energy of the particle is very vague, similar to the Top
Quark mass, 172 GeV. However, my argument is that the mass of the uncharged
Z-boson, 91 GeV, determines the masses of all the other particles. It
works. The charged cores of quarks, electrons, etc., couple up (strongly or
weakly) with a discrete number of massive trapped Z-bosons which exist in
the vacuum. This mechanism also explains QED, such as the magnetic moment
of the electron 1 + alpha/(2Pi) magnetons.

Literally, the electromagnetic force-causing radiation (vector bosons)
interact with charged particle cores to produce EM forces, and with the
associated "higgs bosons" (gravitationally self-trapped Z-bosons) to produce
the correct inertial masses and gravity for each particle.

The lepton and hadron masses are quantized, and I've built a model,
discussed there and on my blog, which takes this model and uses it to
predict other things. I think this is what science is all about. The
mainstream (string theory, CC cosmology) is too far out, and unable to make
any useful predictions.

As for the continuum: the way to understand it is through correcting
Maxwell's classical theory of the vacuum. Quantum field theory accounts for
electrostatic (Coulomb) forces vaguely with a radiation-exchange mechanism.
In the LeSage mechanism, the radiation causing Coulomb's law causes all
forces by pushing. I worked out the mechanism by which electric forces
operate in the April 2003 EW article; attraction occurs by mutual shielding
as with gravity, but is stronger due to the sum of the charges in the
universe. If you have a series of parallel capacitor plates with different
charges, each separated by a vacuum dielectric, the total (net) voltage needs
to take into account the orientation of the plates.

The vector sum is the same as a statistical random walk (drunkard's walk):
the total is equal to the average voltage between a pair of plates,
multiplied by the square root of the total number (this allows for the
angular geometry dispersion, not distance, because the universe is
spherically symmetrical around us - thank God for keeping the calculation
very simple! - and there is as much dispersion outward in the random walk as
there is inward, so the effects of inverse square law dispersions and
concentrations with distance both exactly cancel out).

Gravity is the force that comes from a straight-line sum, which is the only
other option than the random walk. In a straight line, the sum of charges
is zero along any vector across the universe, if that line contains an
average equal number of positive and negative charges. However, it is
equally likely that the straight radial line drawn at random across the
universe contains an odd number of charges, in which case the average charge
is 2 units (2 units is equal to the difference between 1 negative charge and
1 positive charge). Therefore the straight line sum has two options only,
each with 50% probability: even number of charges and hence zero net result,
and odd number of charges which gives 2 unit charges as the net sum. The
mean for the two options is simply (0 + 2) /2 = 1 unit. Hence
electromagnetism is the square root of the number of charges in the
universe, times the weak option force (gravity).
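Taking the two-outcome model above at face value, the arithmetic is trivial to restate; the value of N below is an illustrative assumption, not a figure from the text:

```python
import math

# The two equally likely straight-line outcomes claimed in the text:
# an even number of charges sums to 0, an odd number to 2 units.
mean_line_sum = (0 + 2) / 2          # = 1 unit (the weak, "gravity" sum)

# The random-walk (electromagnetic) sum is sqrt(N) times this weak sum.
N = 1e80  # illustrative count of charges in the universe (an assumption)
em_over_gravity = math.sqrt(N)

print(mean_line_sum)     # 1.0
print(em_over_gravity)   # 1e+40
```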

Thus, electromagnetism and gravity are different ways that charges add up.
Electric attraction is as stated, simply a mutual blocking of EM "vector
boson" radiation by charges, like LeSage gravity. Electric repulsion is an
exchange of radiation. The charges recoil apart because the underlying
physics in an expanding universe (with "red-shifted" or at least reduced
energy radiation pressing in from the outside, due to receding matter in the
surrounding universe) means their exchange of radiation results in recoil
away from one another (imagine two people firing guns at each other, for a
simple analogy; they would recoil apart).

Magnetic force is apparently, as Maxwell suggested, due to the spins of the
vacuum particles, which line up.

Consider the most important and practical problem.

1. A helicopter works by spinning blades which push down the medium around
it, creating an upward reaction force (lift).
2. From quantum field theory and general relativity, there is a vacuum field,
spacetime fabric or Dirac sea/ether.

Is it possible at the fundamental particle level to use magnetism to align
electrons or protons like tiny helicopter blades, and then use them in the
same way as helicopter blades, to push the spacetime fabric and create a
reaction? This would not be violating Newton's 3rd law, because the force
the machine experiences will result in an equal and opposite reaction on the
spacetime fabric (in the same way that a helicopter forces air downwards in
order to recoil upwards against gravity). If this is possible, if the
required magnetic fields were not too large it could probably be engineered
into a practical device once the physical mechanism was understood properly.
(However electromagnetic vacuum radiation does not diffuse in all directions
like the air downdraft from a helicopter. The downdraft from a helicopter
doesn't knock people down because it dissipates in the atmosphere
over a large area until it is trivial compared to the normal 14.7 psi air
pressure. If it were possible to create something using vacuum force
radiation in place of air with the helicopter principle, anyone standing
directly underneath it would - regardless of the machine's altitude - get
the full weight of the extra downward radiation just as if the
helicopter had landed on him. Such a flying device - if it were possible
and made - would leave a trail of havoc on the ground below with the same
diameter as the machine, just like a steam roller. So it would not be
practical, really. The minimum amount of energy needed would basically be
the same, because the gravitational work energy is unchanged.)

By changing the axis of rotation of a gyroscope you can temporarily create a
reaction force against the vacuum, but you pay for it later because the
inertial resistance becomes momentum as it begins to accelerate, and you
then have to put a lot of energy in to return it to the state it was in
before. If you rotate the spin axis of a spinning gyroscope through 360
degrees, the net force you experience is zero, but while you are doing this
you experience forces in all the directions.

So it is impossible to get a motion in space from ordinary gyroscopes alone,
but it might be possible in combination with magnetism, if that would help
get a directional push against the Dirac sea of the vacuum. This might be
useful for space vehicles, because the temperature in space is generally low
enough for superconductivity, allowing intense magnetic fields to be created
very cheaply.

"I suspect that we will both agree on the following points.
Correct me if I'm wrong.
(1) There is a dielectric medium pervading what the
establishment consider to be empty vacuum.
(2) Electromagnetic waves (TEM's) propagate in this medium at
the speed of light and transfer energy while doing so.
(3) Cables act as wave guides for TEM's.
Are we in agreement about these two points?"

My comments:

(Aside: The "dielectric medium" is accepted to be the quantum field theory
vacuum in modern physics. It is just convention not to call it ether or
"Dirac sea", and to call it vacuum instead. This is a bit silly, because it
exists in the air as well as in vacuum. It is not a fight with the
establishment to show that SR is false, because Einstein's GR of 1915
already says as much. Einstein admitted that SR is wrong in 1916 and 1920,
because the spacetime fabric of GR has absolute coordinates, you cannot
extend SR to accelerations, you must abandon it and accept general
covariance of the laws of nature instead - which is entirely different from
relativity. Obviously there are plenty of physicists/authors/teachers who
don't know GR and who defend SR, but the absolute accelerations in GR prove
them wrong.)

Light waves are caused by asymmetries in the normal continuous radiation
exchange between charges. Such asymmetries occur when you accelerate a
charge.

This is why light appears to take all possible routes (path integrals)
through space, etc.

The normal radiation exchange has no particular oscillatory frequency. When
you emit radio waves, you're creating a net periodic force variation in the
existing exchange of vacuum radiation between the charges in the transmitter
aerial and any receiver aerial. The same occurs whether you are emitting
radio waves by causing electrons to accelerate in an aerial, or causing
individual electrons in atoms to change energy levels in a jump.

http://electrogravity.blogspot.com/2006/01/solution-to-problem-with-maxwells.html. Science doesn't progress by admiring a package or rejecting it, but by taking it to pieces, understanding what is useful and correct and less useful and wrong…. Catt thinks that the correct way to approach science is to decide whether personalities are crackpots or geniuses, and to isolate genius and not to build upon it. This whole classification system is quite interesting but contains no science whatsoever. Catt is orthodox in wanting the political type kudos from innovation, praise, money, prizes, fame and all the rest of it. Science however seeks as its aim one thing alone: understanding.

Sadly, Catt thinks he can get away with anything because he is a genius for some computer design or some principle that he stumbled on 30 years ago. Similarly, Brian Josephson's brain went to off** when he won a Nobel Prize and became a mind-matter unificator, see the abstract and unscientific (non mathematical, non physical) "content" of Josephson’s paper:

Catt and Forrest are wrong to say that there are "geometric details" separating the conceptual capacitor from the conceptual transmission line.

There is no defined geometry for a "capacitor" besides two conductors of any shape separated by an insulator such as vacuum.

Catt, Davidson and Walton refuse to use down to earth language, which means nobody will ever know what they are talking about.

Capacitors don't have to be pie-shaped or circular discs, they can be any shape you can imagine. They can be two wires, plates, the plates can be insulated and then rolled up into a "swiss roll" to create the drum-type capacitors with high capacitance.

The physics is not altered in basic principle. Two wires are a capacitor. Connect them simultaneously to the two terminals of a battery, and they charge up as a capacitor. All the talk of the energy having to change direction by 90 degrees when entering the capacitor comes from the conventional symbol used in circuit design, but it doesn't make any difference if the angle is zero. Catt introduces red herrings with "pie shaped" or disc shaped capacitor plates and with the 90 degrees angle issue.

___||______ This is a capacitor (||) with the 90 degree direction change.

Another capacitor:

_________________

………..…..……..

__________________

The overlapped part of the two wire system above is a capacitor without the 90 degree direction change. (This one will annoy Ivor!)

I described and solved the key error in Catt's anomaly which deals with the capacitor=transmission line issue, in the March 2005 issue of Electronics World.

Catt's interest in science is limited to simple algebra, i.e., non-mathematical stuff, stuff without Heaviside's operational calculus, without general relativity and without any quantum mechanics let alone path integrals or other quantum field theory.

He is not interested in physics because, as seen above, he dismisses "ideas" as personal pet theories. He would have done the same if Galileo or whoever came along. Catt has no interest. Of course, I should not complain about Catt's false statement that I (not Catt) am confused; I should just put up with him making offensive false statements. Kepler had a theory that the planets are attracted to the sun by magnetism, with the earth's magnetism as "evidence" in addition to Kepler's own laws. So if Newton had been Kepler's assistant, instead of arriving on the scene much later, he could have been accused by Kepler of confusing Kepler's package.

(If anyone doesn't like references to Galileo and Kepler, as being too conceited, then think of someone more suitable like Aristarchus, who had the solar system with circular orbits. If he had had an associate of Kepler's skill, he would have been able to dismiss Kepler's ellipses as imperfect and a confusion of Aristarchus's package with nonsense.)

There is current flowing in a wire provided there is a voltage variation along the wire to cause the flow.

In the plates of the capacitor, the incoming energy causes a field rise along the plate of 0 to say 9 volts initially. (Once the capacitor is half charged up, the rise is only from 4.5 to 9 volts, so the variation in the step voltage is half.)

If the rise time of this 0 to 9 volts is 1 ns, then the distance along the capacitor plate over which the voltage varies from 0 to 9 volts is ct = 30 cm. Catt ignores this but you can see that the physical size of the step front is appreciable in comparison to the size of a capacitor plate (even if it is a fat swiss roll). So you can write the field E = 9/0.3 = 30 v/m along the plate. This causes an electron drift current. In addition, from the time variation aspect in the capacitor plate, the increase in electric field from 0 to 30 v/m over the time of 1 ns causes the current to increase from 0 to its peak value before dropping as the field drops from 30 v/m to 0 v/m when the back part of the logic step (with steady 9 volts, hence E = 0 v/m) arrives.
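The arithmetic of the step front can be reproduced directly (this just restates the numbers in the paragraph above: a 9 V step with a 1 ns rise time):

```python
c = 3.0e8         # speed of light, m/s
rise_time = 1e-9  # logic step rise time, s
step_volts = 9.0  # step height, V

front_length = c * rise_time         # spatial extent of the rising edge
E_field = step_volts / front_length  # field gradient along the plate

print(front_length)  # 0.3 m = 30 cm
print(E_field)       # 30 V/m (up to float rounding)
```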

What is important is to note that the varying electric current makes the capacitor plates behave like radio transmission aerials. The amount of power radiated transversely from a time-varying current (i.e., an accelerated electron) in watts from a non-relativistic (slow drifting) charge is simply P = (e^2)(a^2)/[6(Pi).(Permittivity)c^3] where e is electric charge, a is acceleration, and c is velocity of light. The radiation occurs perpendicular to the direction of the acceleration.
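The Larmor formula quoted above can be evaluated for an illustrative acceleration (the value of a below is a hypothetical figure, not one from the text):

```python
import math

e = 1.602e-19     # electron charge, C
eps0 = 8.854e-12  # vacuum permittivity, F/m
c = 2.998e8       # speed of light, m/s

def larmor_power(a):
    """Non-relativistic Larmor power P = e^2 a^2 / (6 pi eps0 c^3), in watts."""
    return e**2 * a**2 / (6 * math.pi * eps0 * c**3)

# Illustrative acceleration only (hypothetical figure):
print(larmor_power(1e18))  # a few times 1e-18 W for one electron
```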

This is what provably creates the current and induces the charge in the other plate:

"Displacement current" is radio. This is hard, proved fact. It disproves the entire approach of Maxwell, which was to falsely claim that dE/dt causes a current, when the actual mechanism is that the current variation di/dt (caused by dE/dt) accelerates charges, causing electromagnetic radiation across the vacuum.

Maxwell: the capacitor charges because dE/dt causes displacement current.

Fact: the capacitor charges because dE/dt causes di/dt, which causes electrons in the plate to accelerate and emit electromagnetic radiation transversely (to the other plate).

This does not disprove the existence of vacuum charges which may be polarised by the field. What it does prove is the mechanism for what causes the polarised charges in the vacuum: light speed radiation.

Maxwell's model of electromagnetic radiation, which consists of his equation for "displacement current" added to Faraday's law of induction, is long known to be at odds with quantum theory, so I'm not going to say any more about it.

The great danger in science is where you get hundreds of people speculating without facts, and then someone claims to have experimentally confirmed one of the speculations. Hertz claimed to have proved the details of Maxwell's model by discovering radio. Of course, Faraday had predicted radio without Maxwell's theory back in 1846, when Maxwell was just a small boy. See Faraday's paper "Thoughts on Ray Vibrations", 1846.

Take the +9 volt logic step entering and flooding a transmission line at light speed.

At the front end, the step rises from 0 volts to 9 volts. Thereafter, the voltage is 9 volts.

Hence, there is no electric current - at least there is no electric field mechanism for the electrons to drift along. Electrons aren't gaining any electric potential energy, so they can't accelerate up to any drift speed. Electric current may be caused, however, by the effect of the magnetic field in the opposite conductor of the transmission line.

Charge is not the primitive. Trapped light-speed Poynting-Heaviside energy constitutes charge. I proved this in the April 2003 EW. Don't believe that the superposition principle of quantum mechanics magically prevents real electron spin when you are not measuring the electron: the collapse of the wavefunction is a mathematical artifact from the distinction of the two versions of Schroedinger's equation: time-dependent and time-independent:

Dr Thomas Love of California State University last week sent me a preprint, "Towards an Einsteinian Quantum Theory", where he shows that the superposition principle is a fallacy, due to two versions of the Schroedinger equation: a system described by the time-dependent Schroedinger equation isn’t in an eigenstate between interactions.

"The quantum collapse occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics."

Electric charge is only detected via its electric field effect. The
quantization of charge into electron size units (and sub units for quarks,
which can never be observed by themselves because the energy to separate a
quark exceeds that needed to produce a new pair of quarks from the Dirac
sea/ether) has a mechanism.

It is curious that a gamma ray with 1.022 MeV has exactly the same electric
field energy as an electron plus a positron. Dirac's quantum field theory
mechanism for pair-production, which is the really direct experimental
physics evidence (discovered by Anderson in 1932, passing gamma rays through
lead) for E=mc2, is that the vacuum is full of virtual electrons and a gamma
ray, with at least 1.022 MeV of energy, knocks a virtual electron out of the
vacuum. The energy it is given makes it a real electron, while the "hole"
it leaves in the ether is a positive charge, a positron.
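The 1.022 MeV threshold is just twice the electron rest-mass energy, which is easy to verify:

```python
m_e_c2 = 0.511          # electron rest energy, MeV (CODATA value, rounded)
threshold = 2 * m_e_c2  # pair-production threshold: one electron + one positron
print(threshold)        # 1.022 MeV
```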

Dirac's equation is the backbone of quantum field theory, but his ether
process is just conceptual. Pair-production only occurs when a gamma ray
enters a strong field near a nucleus with high atomic number, like lead. This
of course is one reason why lead is used to shield gamma rays with energy
over 1 MeV, such as those from Co-60. (Gamma rays from Cs-137 are on
average only 0.66 MeV so are shielded by Compton scattering, which just
depends on electron abundance in the shield, not on the atomic number.
Hence for gamma rays below 1 MeV, shielding depends on getting as many
electrons between you and the source as possible, while for gamma rays above
1 MeV it is preferable to take advantage of pair production using the
nuclear properties of elements of high atomic number like lead. The pairs
of electrons and positrons are stopped very easily because they are charged,
unlike Compton scattered gamma rays.)

Dirac's sea is naive in the sense that the vacuum contains many forms of
radiation mediating different forces, and not merely virtual electrons. You
can more easily deal with pair-production by pointing out that a gamma ray
is a cycle of electromagnetic radiation consisting 50% of negative electric
field and 50% of positive.

A strong field deflects radiation and 1.022 MeV is the threshold required
for the photon to break up into two opposite "charges" (opposite electric
field portions). Radiation can be deflected into a curved path by gravity.
The black hole radius is 2GM/c^2, which is smaller than the Planck size for
an electron mass. Conservation of momentum of the radiation is preserved as
the light speed spin. Superposition/wavefunction collapse is a fallacy
introduced by the mathematical discontinuity between the time-dependent and
time-independent forms of Schroedinger's equation when taking a measurement
on a system.
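The claim that the black hole radius 2GM/c^2 for an electron mass is far below the Planck length can be checked with standard constants (a numerical sketch):

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
hbar = 1.055e-34  # reduced Planck constant, J s
m_e = 9.109e-31   # electron mass, kg

r_s = 2 * G * m_e / c**2               # black hole (Schwarzschild) radius
l_planck = math.sqrt(hbar * G / c**3)  # Planck length

print(r_s)       # ~1.4e-57 m
print(l_planck)  # ~1.6e-35 m: the Planck length is ~1e22 times larger
```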

In the Heaviside light speed energy current, electric field is equal in
magnitude to c times the magnetic field, E=cB where each term is a vector
orthogonal to the others. We already know from Feynman-Schwinger
renormalisation that the measured charge and mass of the electron are
smaller than the effective core values, which are shielded by the
polarisation of charges around the core in Dirac's vacuum. The correct
value of the magnetic moment of the electron arises from this model. You
cannot have charge without a magnetic dipole moment, because the electron is
a Heaviside light-speed negative electric field energy current trapped in a
small loop. The electric field from this is spherically symmetric but the
magnetic field lines form a dipole, which is the observed fact. Fundamental
charged particles have a magnetic moment in addition to "electric charge".

Lee Smolin in recent Perimeter Institute lectures, Introduction to Quantum Gravity, showed how to proceed from Penrose’s spin network vacuum to general relativity, by a sum over histories, with each history represented geometrically by a labelled diagram for an interaction. This gets from a quantum theory of gravity (a spin foam vacuum) to a background-independent version of general relativity, which dispenses with restricted/special relativity used as a basis for general relativity by string theorists (the alternative to the spin foam vacuum explored by Smolin and others). See http://christinedantas.blogspot.com/2006/02/hand-of-master-parts-1-and-2.html

Heuristic (trial and error) extension to existing quantum field theory using the spin foam vacuum

It seems that there are two distinct mechanisms for forces to be propagated via quantum field theory. The vacuum propagates long-range forces (electromagnetism, gravity) by radiation exchange, as discussed in earlier papers kindly hosted by Walter Babin, while short-range forces (strong and weak nuclear interactions) are due to the pressure of the spin foam vacuum. The vacuum is viewed below by analogy to an ideal gas, in which there is a flux of shadowed radiation and also dispersed particle-caused pressure.

The radiation has an infinite range and its intensity decreases with distance due to geometric divergence. The material pressure of the spin foam vacuum is like an ideal gas, with a small mean free path, and produces an attractive force with a very short range (like air pressure pushing a suction plunger against a surface, if the gap is too small to allow air to fill it). The probabilistic nature of quantum mechanics is then due to the random impacts of virtual particles in the vacuum on a small scale, which statistically average out on a large scale. This model predicts the strength of gravity from established facts, and the correct mechanism for force unification at high energy, which does not require supersymmetry:

Conservation of energy for all the force field mediators would imply that the fall in the strength of the strong force would be accompanied by the rise in the strength of the electroweak force (which increases as the bare charge is exposed when the polarised vacuum shield breaks down in high energy collisions), which implies that forces unify exactly without needing supersymmetry (SUSY). For the strength of the strong nuclear force at low energies (i.e., at room temperature):

Heisenberg's uncertainty says

pd = h/(2.Pi)

where p is uncertainty in momentum, d is uncertainty in distance.
This comes from his imaginary gamma ray microscope, and is usually written as a minimum (instead of with "=" as above), since there will be other sources of uncertainty in the measurement process.

This result is used to show that an 80 GeV W or Z gauge boson will have a range of the order of 10^-17 m, which is consistent with experiment.
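The quoted range follows from x = hc/(2.Pi.E), i.e. hbar*c/E; for an 80 GeV boson this sketch gives about 2.5e-18 m, around the order of magnitude quoted:

```python
hbar_c = 1.97327e-7  # hbar*c in eV*m (equivalently 197.327 MeV*fm)
E = 80e9             # W/Z boson rest energy, eV

x = hbar_c / E       # range x = hc/(2 pi E) = hbar*c/E
print(x)             # ~2.5e-18 m
```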

Now, E = Fd implies

d = hc/(2.Pi.E) = hc/(2.Pi.Fd)

Hence

F = hc/(2.Pi.d^2)

This force is 137.036 times higher than Coulomb's law for unit fundamental charges.
Notice that in the last sentence I've suddenly gone from thinking of d as an uncertainty in distance to thinking of it as the actual distance between two charges; but the gauge boson has to travel that distance to cause the force anyway.
Clearly what's physically happening is that the true force is 137.036 times Coulomb's law, so the real (bare core) charge is 137.036 times the observed charge. This is reduced by the correction factor 1/137.036 because most of the charge is screened out by polarised charges in the vacuum around the electron core:
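Writing hbar = h/(2.Pi), the force F = hc/(2.Pi.d^2) = hbar*c/d^2 can be compared numerically with Coulomb's law for unit charges; the ratio is the inverse fine-structure constant, independent of d:

```python
import math

hbar = 1.0546e-34  # reduced Planck constant, J s
c = 2.9979e8       # speed of light, m/s
e = 1.6022e-19     # electron charge, C
eps0 = 8.8542e-12  # vacuum permittivity, F/m

d = 1e-15  # distance cancels in the ratio; 1 fm chosen arbitrarily
F_qm = hbar * c / d**2                          # F = hc/(2 pi d^2)
F_coulomb = e**2 / (4 * math.pi * eps0 * d**2)  # Coulomb force, unit charges

print(F_qm / F_coulomb)  # ~137.0, the inverse fine-structure constant
```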

"... we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum ... amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies)." - arxiv hep-th/0510040, p 71.

The unified Standard Model force is F = hc/(2.Pi.d^2)

That's the superforce at very high energies, in nuclear physics. At lower energies it is shielded by the factor 137.036 for photon gauge bosons in electromagnetism, or by exp(-d/x) for vacuum attenuation by short-ranged nuclear particles, where x = hc/(2.Pi.E)

All that the detailed calculations of the Standard Model are really modelling are the vacuum processes for different types of virtual particles and gauge bosons. The whole mainstream way of thinking about the Standard Model is framed in terms of energy. What is really happening is that at higher energies you knock particles together harder, so their protective shield of polarised vacuum particles gets partially breached, and you experience a stronger force mediated by different particles.

Comparison with string theory

The mainstream M-theory of strings extrapolates the well-tested Standard Model into the force unification domain of 10^16 GeV and above, using unobserved extra dimensions and unobserved super-symmetric (SUSY) partners to the normal particles we detect. The Standard Model achieved a critical confirmation with the detection of the short-ranged neutral Z and charged W particles at CERN in 1983. This confirmed the basic structure of electroweak theory, in which electroweak forces have a symmetry and long range above 250 GeV which is broken by the Higgs field mechanism at lower energies, where only the photon (out of the electroweak force mediators photon, Z, W+ and W-) continues to have infinite range.

In 1995, string theorist Edward Witten used M-theory to unify 10 dimensional superstring theory (including SUSY) with 11 dimensional supergravity as a limit. In the April 1996 issue of Physics Today Witten wrote that ‘String theory has the remarkable property of predicting gravity’. Sir Roger Penrose questioned Witten’s claim on page 896 of Road to Reality, 2004: ‘in addition to the dimensionality issue, the string theory approach is (so far, in almost all respects) restricted to being merely a perturbation theory’.

The other uses of string theory are to provide a quantum gravity framework (it allows a spin-2, unobserved graviton-type field, albeit without any predictive dynamics), while SUSY allows unification of nuclear and electromagnetic forces at an energy of 10^16 GeV (way beyond any possible high energy experiment on Earth).

In summary, string theory is not a scientific predictive theory, let alone a tested theory. The spin foam vacuum extension of quantum field theory as currently discussed by Smolin and others is limited to the mathematical connection between the framework of a quantum field theory and general relativity. I think it could be developed into a predictive unified theory very easily, as the components in this and earlier papers are predictive of new phenomena and are also consistent with those theories of modern physics which have been tested successfully. There is no evidence that string theory is predictive of anything that could be objectively checked. Peter Woit of Columbia University has come up against difficulty in making the string theory mainstream listen to objective criticism of the scientific failures of string theory, see:

'The solution to the difficulty of negative energy states [in relativistic quantum mechanics] is due to Dirac [P. A. M. Dirac, Proc. Roy. Soc. (London), A126, p360, 1930]. One defines the vacuum to consist of no occupied positive energy states and all negative energy states completely filled. This means that each negative energy state contains two electrons. An electron therefore is a particle in a positive energy state with all negative energy states occupied. No transitions to these states can occur because of the Pauli principle. The interpretation of a single unoccupied negative energy state is then a particle with positive energy ... It will be apparent that a hole in the negative energy states is equivalent to a particle with the same mass as the electron ... The theory therefore predicts the existence of a particle, the positron, with the same mass and opposite charge as compared to an electron. It is well known that this particle was discovered in 1932 by Anderson [C. D. Anderson, Phys. Rev., 43, p491, 1933].

'Although the prediction of the positron is certainly a brilliant success of the Dirac theory, some rather formidable questions still arise. With a completely filled 'negative energy sea' the complete theory (hole theory) can no longer be a single-particle theory.

'The treatment of the problems of electrodynamics is seriously complicated by the requisite elaborate structure of the vacuum. The filled negative energy states need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite.

'In a similar way, it can be shown that an electron acquires infinite inertia (self-energy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].

'For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the 'crowded' vacuum is to change these to new constants e' and m', which must be identified with the observed charge and mass. ... If these contributions were cut off in any reasonable manner, m' - m and e' - e would be of order alpha ~ 1/137. No rigorous justification for such a cut-off has yet been proposed.

'All this means that the present theory of electrons and fields is not complete. ... The particles ... are treated as 'bare' particles. For problems involving electromagnetic field coupling this approximation will result in an error of order alpha. As an example ... the Dirac theory predicts a magnetic moment of mu = mu[zero] for the electron, whereas a more complete treatment [including Schwinger's coupling correction, i.e., the first Feynman diagram] of radiative effects gives mu = mu[zero].(1 + alpha/{twice Pi}), which agrees very well with the very accurate measured value of mu/mu[zero] = 1.001...'
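The Schwinger correction quoted above is easy to evaluate numerically:

```python
import math

alpha = 1 / 137.036            # fine-structure constant
g = 1 + alpha / (2 * math.pi)  # Schwinger's first-order correction
print(g)  # ~1.0011614, matching the measured mu/mu[zero] = 1.001...
```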

This kind of clear-cut physics is more appealing to me than string theory about extra dimensions and such like. There is some evidence that the masses of the known particles can be described by a two-step mechanism. First, virtual particles in the vacuum (most likely trapped neutral Z particles, 91 GeV mass) interact with one another by radiation to give rise to mass (a kind of Higgs field). Secondly, real charges can associate with a trapped Z particle either inside or outside the polarised veil of virtual charges around the real charge core:

The polarised charge around a trapped Z particle (it is neutral overall, but so is the photon, and the photon's electromagnetic cycle is half positive electric field and half negative in Maxwell's model of light, so a neutral particle still has electric fields in the close-in picture) gives a shielding factor of 137, with an additional factor of twice Pi for some geometric reason, possibly connected to spin/magnetic polarisation. If you spin a loop, then as seen edge-on the exposure it receives per unit area falls by a factor of Pi compared to a non-spinning cylinder, and we are dealing with exchange of gauge bosons as radiation to create forces between spinning particles. The electron loop has spin 1/2, so it rotates 720 degrees to cover a complete revolution, like a Mobius strip loop. Thus it has a reduction factor of twice Pi as seen edge-on, and the magnetic alignment which increases the magnetic moment of the electron means that the core electron and the virtual charge in the vacuum are aligned side-on.

Supersymmetry can be completely replaced by physical mechanism and energy conservation of the field bosons:

Supersymmetry is not needed at all, because the physical mechanism by which nuclear and electroweak forces unify at high energy automatically leads to perfect unification, due to conservation of energy: as you smash particles together harder, they break through the polarised veil around the cores, exposing a higher core charge, so the electromagnetic force increases. My calculation at http://electrogravity.blogspot.com/2006/02/heisenbergs-uncertainty-sayspd-h2.html suggests that the core charge is 137 times the observed (long range) charge of the electron. However, simple conservation of potential energy for the continuously-exchanged field of gauge bosons shows that this increase in electromagnetic field energy must be compensated for by a reduction in other fields as collision energy increases. This will reduce the core charge (and associated strong nuclear force) from 137 times the low-energy electric charge, compensating for the rising amount of energy carried by the electromagnetic field of the charge at long distances.

Hence, in sufficiently high energy collisions, the unified force will be intermediate in strength between the low-energy electromagnetic force and the low-energy strong nuclear force. The unified force will be attained where the energy is sufficient to completely break through the polarised shield around the charge cores, possibly at around 10^16 GeV as commonly suggested. A proper model of the physical mechanism would get rid of the Standard Model's unification problems (which are due to the incomplete approximations used to extrapolate to extremely high energy): http://electrogravity.blogspot.com/2006/02/heuristic-explanation-of-short-ranged_27.html

So I don't think there is any scientific problem with sorting out force unification without SUSY in the Standard Model, or of including gravity (http://feynman137.tripod.com/). The problem lies entirely with the mainstream preoccupation with string theory. Once the mainstream realises it was wrong, instead of admitting it was wrong, it will just use its preoccupation with string theory as the excuse for having censored alternative ideas.

The problem is whether Dr Peter Woit can define crackpottery to both include mainstream string theory and exclude some alternatives which look far-fetched or crazy but have a more realistic chance of being tied to facts and making predictions which can be tested. With string theory, Dr Woit finds scientific problems. I think the same standard should apply to alternatives, which should be judged on scientific criteria. The problem is that the mainstream stringers don't use scientific grounds to judge either their own work or alternatives. They say they are right because they are a majority, and alternatives are wrong because they are in a minority.

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ –

Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303. Note: statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum is full of gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum.

I've just updated a previous post here with some comments on the distinction between the two aspects of the strong nuclear force, that between quarks (where the physics is very subtle, with interactions between the virtual quark field and the gluon field around quarks leading to a modification of the strong nuclear force and the effect of asymptotic freedom of quarks within hadrons), and that between one nucleon and another.

Nucleons are neutrons and protons, each containing three quarks, and the strong nuclear force between nucleons behaves as if neutrons are practically identical to protons (the electric charge is an electromagnetic force effect). Between individual quarks, the strong force is mediated by gluons and is more complex due to screening effects of the colour charges of the quarks, but between nucleons it is mediated by pions, and is very simple, as my previous post shows.

Consider why the nuclear forces are short-ranged, unlike gravity and electromagnetism. The Heisenberg uncertainty principle in its time-energy form sets a limit on the amount of time a certain amount of energy (that of the force-mediating particles) can exist. Because of the finite speed of light, this time limit is equivalent to a distance limit, or range. This is why nuclear forces are short-ranged. Physically, the long-range forces (gravity and electromagnetism) are radiation-exchange effects which aren't individually attenuated with distance, but just undergo geometric spreading over wider areas due to divergence (giving rise to the inverse-square law).
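The time-energy limit described above gives the standard range estimate R ~ hbar*c/(m*c^2) for a force carried by a mediator of rest mass m; a quick sketch of the arithmetic, using the standard pion and W boson masses:

```python
# Range of a force mediated by a massive particle: R ~ hbar*c / (m*c^2)
HBAR_C_MEV_FM = 197.327  # hbar*c in MeV*fm (standard value)

def force_range_fm(mediator_mass_mev):
    """Approximate force range in femtometres for a mediator of given rest mass."""
    return HBAR_C_MEV_FM / mediator_mass_mev

print(force_range_fm(139.6))    # pion: ~1.4 fm (the nucleon-nucleon force range)
print(force_range_fm(80400.0))  # W boson: ~0.0025 fm (the weak force range)
```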

But the short-ranged nuclear forces are physically equivalent to a gas-type pressure of the vacuum. The 14.7 pounds/square inch air pressure doesn't push you against the walls, because air exists between you and the walls, and disperses kinetic energy as pressure isotropically (equally in all directions) due to the random scattering of air molecules. The range over which you could be pushed towards a wall by air pressure is around the average distance air molecules travel between random scattering impacts, which is the mean free path of an air molecule, about 0.1 micron (micron = micrometre).
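The ~0.1 micron mean free path quoted here can be checked against the standard kinetic-theory formula lambda = kT/(sqrt(2)*pi*d^2*p); the molecular diameter d below is an assumed typical value for air:

```python
import math

# Mean free path of a gas molecule: lambda = k*T / (sqrt(2) * pi * d^2 * p)
k = 1.380649e-23  # Boltzmann constant, J/K
T = 293.0         # room temperature, K
p = 101325.0      # atmospheric pressure, Pa
d = 3.7e-10       # effective molecular diameter of air, m (assumed value)

mfp = k * T / (math.sqrt(2) * math.pi * d**2 * p)
print(mfp)  # ~6.6e-8 m, consistent with the ~0.1 micron figure
```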

This is why to get 'attracted' to a wall using air pressure, you need a very smooth wall and a clean rubber suction cup: it is a short-ranged effect. The nuclear forces are similar to this in their basic mechanism, with a short range because of the collisions and interactions of the force-mediating particles, which are more like gas molecules than the radiations which give rise to gravity and electromagnetism. We know this for the electroweak theory, where at low energies the W and Z force mediators are screened by the foam vacuum of space, while the photon isn't.

Deceptions used to attack a predictive, testable physical understanding of quantum mechanics:

(1) Metaphysically vague entanglement of the wavefunctions of photons in Alain Aspect's EPR experiment, which merely demonstrates a correlation in the measured polarisations of photons emitted from the same source in opposite directions. This correlation is expected if Heisenberg's uncertainty principle does NOT apply to photon measurement. We know the uncertainty principle DOES apply to measuring electrons and other sub-light-speed particles, which have time to respond to the measurement by being deflected or changing state. Photons, however, must be absorbed and then re-emitted to change state or direction. Therefore, correlation of identical photon measurements is expected from the failure of the uncertainty principle to apply to the measurement process for photons. It is hence fraudulent to claim that the correlation is due to metaphysically vague entanglement of the wavefunctions of photons metres apart, travelling in opposite directions.

(2) Young's double-slit experiment: Young claimed that light somehow cancels out at the dark fringes on the screen. But energy is conserved, so light simply doesn't arrive at the dark fringes (if it did, what would happen to it, especially when you fire one photon at a time?). What really happens is that light interferes near the double slits, not at the screen, which is not the case for water-wave interference (water waves interfere at the screen, whereas light waves have a transverse feature which allows interference to occur even when a single photon passes through one of two slits, provided the second slit is nearby, i.e. within a wavelength or so).

(3) Restricted ('special') relativity:

I just don't believe you [Lubos Motl] don't understand that general covariance in GR is the important principle, that accelerations are not relative and that all motions at least begin and end with acceleration/deceleration.

The radiation (gauge bosons) and virtual particles in the vacuum exert pressure on moving objects, compressing them in the direction of motion. As FitzGerald deduced in 1889, this is not a mathematical effect, but a physical one. Mass increase occurs because of the snowplough effect of the Higgs bosons (mass ahead of you) when you move quickly: the Higgs bosons you are moving into can't instantly flow out of your path, so there is mass increase. If you were to approach c, the particles in the vacuum ahead of you would be unable to get out of your way at all, so your mass would tend towards infinity. This is simply a physical effect, not a mathematical mystery. Time dilation occurs because time is measured by motion, and if, as the Standard Model suggests, fundamental spinning particles are just trapped energy (mass being due to the external Higgs field), that energy is going at speed c, perhaps as a spinning loop or vibrating string. When you move that at near speed c, the internal vibration and/or spin speed must slow down, because c would otherwise be violated. Since electromagnetic radiation is a transverse wave, the internal motion at speed x is orthogonal to the direction of propagation at speed v, so x^2 + v^2 = c^2 by Pythagoras. Hence the dynamic measure of time (vibration or spin speed) for the particle is x/c = (1 - v^2/c^2)^1/2, which is the time-dilation formula.
As Eddington said, light speed is absolute but undetectable in the Michelson-Morley experiment, owing to the fact that the instrument contracts in the direction of motion, allowing the slower light beam to cross a smaller distance and thus catch up.

‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus…. The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’ – Professor A.S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), Space Time and Gravitation: An Outline of the General Relativity Theory, Cambridge University Press, Cambridge, 1921, pp. 20, 152.

Einstein said the same:

‘Recapitulating, we may say that according to the general theory of relativity, space is endowed with physical qualities... According to the general theory of relativity space without ether is unthinkable.’ – Albert Einstein, Leyden University lecture on ‘Ether and Relativity’, 1920. (Einstein, A., Sidelights on Relativity, Dover, New York, 1952, pp. 15-23.)

Maxwell failed to grasp that radiation (gauge bosons) was the mechanism for electric force fields, but he did usefully suggest that:

‘The ... action of magnetism on polarised light [discovered by Faraday not Maxwell] leads ... to the conclusion that in a medium ... is something belonging to the mathematical class as an angular velocity ... This ... cannot be that of any portion of the medium of sensible dimensions rotating as a whole. We must therefore conceive the rotation to be that of very small portions of the medium, each rotating on its own axis [spin] ... The displacements of the medium, during the propagation of light, will produce a disturbance of the vortices ... We shall therefore assume that the variation of vortices caused by the displacement of the medium is subject to the same conditions which Helmholtz, in his great memoir on Vortex-motion, has shewn to regulate the variation of the vortices [spin] of a perfect fluid.’ - Maxwell’s 1873 Treatise on Electricity and Magnetism, Articles 822-3

Compare this to the spin foam vacuum, and the fluid GR model:

‘… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that ‘flows’... A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Professor Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp. 89-90.

Einstein admitted SR was tragic:

‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant). …’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.

To understand what the vector boson radiation is (photons having spin 1 and stringy speculative "gravitons" having spin 2) we need to understand the electromagnetic unification of Maxwell. It is all perfect except the "displacement current" term which is added to Ampere's current to complete continuity of circuit current in a charging capacitor with a vacuum dielectric.

The continuum is composed of radiation! There are also trapped particles in the vacuum which are responsible for the quantized masses of fundamental particles, leptons and the pairs and triads of quarks in hadrons. The change in my approach is due to physical understanding of the displacement current term in Maxwell's equations. Since about 2000 I've been pushing this way, hoping Catt would help, but he is not interested in progress beyond Heaviside's model. See my recent blog post:

Mathematically Maxwell's trick works: you put the "displacement current" law together with Faraday's law of induction and the solution is Maxwell's light model, predicting the correct speed of light. However, this changes when you realise that displacement current is itself really electromagnetic radiation, and acts at 90 degrees to the direction light propagates in Maxwell's model. Maxwell's model is then self-contradictory, and so his unification of electricity and magnetism is not physical. Maxwell's unification is wrong, because the reality is that the "displacement current" effects result from electromagnetic radiation emitted transversely when the current varies with time (hence when charges accelerate) in response to the time-varying voltage. This completely alters the picture we have of what light is. Comparison:

True model to fully replace "displacement current": voltage varying with time accelerates charges in the conductor, which as a result emit radiation transversely. I gave logical arguments for this kind of thing (without the full details I have recently discovered) in my letter published in the March 2005 issue of Electronics World. Notice that Catt uses a completely false picture of electricity with discontinuities (vertically abrupt rises in voltage at the front of a logic pulse) which don't exist in the real world, so he does not deal with the facts and missed the mechanism. However, Catt is right to argue that the flaw in Maxwell's classical electromagnetism stems from Maxwell's ignorance of the way current must spread along the plates at light speed.

It seems that the electromagnetic force-carrying radiation is also the cause
of gravity, via particles which cause the mass of charged elementary
particles.

The vacuum particles ("higgs particle") that give rise to all mass in the
Standard Model haven't been observed officially yet, and the official
prediction of the energy of the particle is very vague, similar to the Top
Quark mass, 172 GeV. However, my argument is that the mass of the uncharged
Z-boson, 91 GeV, determines the masses of all the other particles. It
works. The charged cores of quarks, electrons, etc., couple up (strongly or
weakly) with a discrete number of massive trapped Z-bosons which exist in
the vacuum. This mechanism also explains QED, such as the magnetic moment
of the electron 1 + alpha/(2Pi) magnetons.
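The 1 + alpha/(2Pi) magnetic moment mentioned above is Schwinger's first-order QED correction; its numerical value is easy to check (this verifies the standard formula only, not the trapped Z-boson interpretation):

```python
import math

# Schwinger's first-order QED correction to the electron magnetic moment:
# mu / mu_Bohr = 1 + alpha/(2*pi)
alpha = 1 / 137.035999
moment = 1 + alpha / (2 * math.pi)
print(moment)  # ~1.0011614 Bohr magnetons, matching experiment to ~1 part per million
```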

Literally, the electromagnetic force-causing radiation (vector bosons)
interact with charged particle cores to produce EM forces, and with the
associated "higgs bosons" (gravitationally self-trapped Z-bosons) to produce
the correct inertial masses and gravity for each particle.

The lepton and hadron masses are quantized, and I've built a model,
discussed there and on my blog, which takes this model and uses it to
predict other things. I think this is what science is all about. The
mainstream (string theory, CC cosmology) is too far out, and unable to make
any useful predictions.

As for the continuum: the way to understand it is through correcting
Maxwell's classical theory of the vacuum. Quantum field theory accounts for
electrostatic (Coulomb) forces vaguely with a radiation-exchange mechanism.
In the LeSage mechanism, the radiation causing Coulomb's law causes all
forces by pushing. I worked out the mechanism by which electric forces
operate in the April 2003 EW article; attraction occurs by mutual shielding
as with gravity, but is stronger due to the sum of the charges in the
universe. If you have a series of parallel capacitor plates with different
charges, each separated by a vacuum dielectric, the total (net) voltage
needs to take into account the orientation of the plates.

The vector sum is the same as a statistical random walk (drunkard's walk):
the total is equal to the average voltage between a pair of plates,
multiplied by the square root of the total number (this allows for the
angular geometry dispersion, not distance, because the universe is
spherically symmetrical around us - thank God for keeping the calculation
very simple! - and there is as much dispersion outward in the random walk as
there is inward, so the effects of inverse square law dispersions and
concentrations with distance both exactly cancel out).
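The square-root scaling of the drunkard's walk used in this argument is a generic statistical fact, easily demonstrated by simulation (this sketch checks the statistics only, not the capacitor-plate model itself):

```python
import math
import random

def rms_net_sum(n_steps, trials=1000):
    """RMS net displacement of a 1-D random walk of unit steps."""
    total = 0.0
    for _ in range(trials):
        s = sum(random.choice((-1, 1)) for _ in range(n_steps))
        total += s * s
    return math.sqrt(total / trials)

random.seed(1)
n = 2500
rms = rms_net_sum(n)
print(rms)  # close to sqrt(n) = 50, i.e. net sum ~ average step * sqrt(N)
```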

Gravity is the force that comes from a straight-line sum, which is the only
other option than the random walk. In a straight line, the sum of charges
is zero along any vector across the universe, if that line contains an
average equal number of positive and negative charges. However, it is
equally likely that the straight radial line drawn at random across the
universe contains an odd number of charges, in which case the average charge
is 2 units (2 units is equal to the difference between 1 negative charge and
1 positive charge). Therefore the straight line sum has two options only,
each with 50% probability: even number of charges and hence zero net result,
and odd number of charges which gives 2 unit charges as the net sum. The
mean for the two options is simply (0 + 2) /2 = 1 unit. Hence
electromagnetism is the square root of the number of charges in the
universe, times the weak option force (gravity).

Thus, electromagnetism and gravity are different ways that charges add up.
Electric attraction is as stated, simply a mutual blocking of EM "vector
boson" radiation by charges, like LeSage gravity. Electric repulsion is an
exchange of radiation. The charges recoil apart because the underlying
physics in an expanding universe (with "red-shifted" or at least reduced
energy radiation pressing in from the outside, due to receding matter in the
surrounding universe) means their exchange of radiation results in recoil
away from one another (imagine two people firing guns at each other, for a
simple analogy; they would recoil apart).

Magnetic force is apparently, as Maxwell suggested, due to the spins of the
vacuum particles, which line up. We’ll examine the details further on.

(2) Consider objects moving past the sun, gaining gravitational potential energy and being deflected by gravity. The mean angle of the object's path to the radial line of the sun's gravitational force is 90 degrees, so for slow-moving objects, 50% of the energy goes into increasing the speed of the object, and 50% into deflecting the path. But because light cannot speed up, 100% of the gravitational potential energy gained by light on its approach to the sun goes into deflection, which is the mechanism for why light suffers twice the deflection suggested by Newton's law. Hence for light deflection: R_uv = 8.Pi(G/c^2)T_uv.
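The factor-of-two result described here is what gives the classic 1.75 arcsecond deflection of starlight grazing the Sun; a numerical check of the standard formula theta = 4GM/(c^2 R):

```python
import math

# Deflection of light grazing the Sun: theta = 4*G*M / (c^2 * R)
G = 6.674e-11    # gravitational constant, m^3/(kg*s^2)
M = 1.989e30     # solar mass, kg
c = 299792458.0  # speed of light, m/s
R = 6.957e8      # solar radius, m

theta_rad = 4 * G * M / (c**2 * R)
theta_arcsec = theta_rad * (180 / math.pi) * 3600
print(theta_arcsec)  # ~1.75 arcseconds, twice the Newtonian value of ~0.87
```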

(3) To unify the different equations in (1) and (2) above, you have to modify (2) as follows: R_uv - 0.5Rg_uv = 8.Pi(G/c^2)T_uv, where g_uv is the metric. This is the Einstein-Hilbert field equation.

GR is based entirely on empirical facts. Speculation only comes into it after 1915, via the "cosmological constant" and other "fixes". Think about the mechanism for the gravitation and the contraction which constitute pure GR: it is quantum field theory, radiation exchange.

Fundamental particles have spin which in an abstract way is related to vortices. Maxwell in fact argued that magnetism is due to the spin alignment of tiny vacuum field particles.

The problem is that the electron is nowadays supposed to be in an almost metaphysical superposition of spin states until measured, which indirectly (via the EPR-Bell-Aspect work) leads to the entanglement concept you mention. But Dr Thomas Love of California State University last week sent me a preprint, "Towards an Einsteinian Quantum Theory", in which he shows that the superposition principle is a fallacy arising from the use of two versions of the Schroedinger equation: a system described by the time-dependent Schroedinger equation isn't in an eigenstate between interactions.

"The quantum collapse occurs when we model the wave moving according to Schroedinger (time-dependent) and then, suddenly at the time of interaction we require it to be in an eigenstate and hence to also be a solution of Schroedinger (time-independent). The collapse of the wave function is due to a discontinuity in the equations used to model the physics, it is not inherent in the physics."

Dr Love helpfully quotes Einstein's admissions that the covariance of the general relativity theory violates the idea in special relativity that the velocity of light is constant:

'This was ... the basis of the law of the constancy of the velocity of light. But ... the general theory of relativity cannot retain this law. On the contrary, we arrived at the result according to this latter theory, the velocity of light must always depend on the coordinates when a gravitational field is present.' - Albert Einstein, Relativity, The Special and General Theory, Henry Holt and Co., 1920, p111.

So general relativity conflicts with, and supersedes, special relativity. General relativity says goodbye to the law of the invariant velocity of light, the fiddle on which special relativity was built:

'... the principle of the constancy of the velocity of light in vacuo must be modified, since we easily recognise that the path of a ray of light ... must in general be curvilinear...' - Albert Einstein, The Principle of Relativity, Dover, 1923, p114.

The error with special relativity (which is incompatible with general relativity, since general relativity allows the velocity of light to depend on the coordinate system, while special relativity does not) is therefore the assumption that the spacetime reference frame changes when contraction occurs. In fact, the matter just contracts, due to the vacuum (gauge boson) force mechanism of quantum field theory, so you need to treat the spacetime of the vacuum separately from that of the matter. Walter Babin's point is valid where he suggests that special relativity's insistence upon an invariant velocity of light and a variable coordinate system should be replaced, for covariance in general relativity, by a fixed coordinate system with the velocity of light as the variable (the velocity of light does vary according to general relativity, which is a more general theory than special relativity):

Your first argument, which suggests that a "constant speed of light + varying reference frame" is at best equivalent to more sensible "varying speed of light + fixed reference frame", intuitively appeals to me.

The contraction of a 1 kg ruler 1 metre long to 86.6 centimetres in the direction of motion when travelling at c/2 is a local contraction of the material making up the ruler. The energy that causes the contraction must be the energy injected. Inertia is the force needed to overcome the pressure of the spacetime fabric. The mass increase of that ruler to 1.15 kg is explained by the spacetime fabric, which is limited to a maximum speed of c and can't flow out of the way fast enough when you approach c, so the inertial resistance (and hence inertial mass) increases. The Standard Model of nuclear physics already says that mass is entirely caused by the vacuum "Higgs field". This already seems to violate the meaning commonly given to E=mc^2, since if m is due to the Higgs field surrounding matter possessing electromagnetic field energy E, then mass and energy are not actually identical at all; one is simply associated with the other, just as a man and a woman are associated by marriage!
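The 86.6 cm and 1.15 kg figures for the ruler follow from the Lorentz factor at v = c/2; checking the arithmetic:

```python
import math

def lorentz_gamma(v_over_c):
    """Lorentz factor: gamma = 1 / sqrt(1 - (v/c)^2)."""
    return 1.0 / math.sqrt(1.0 - v_over_c**2)

g = lorentz_gamma(0.5)  # at v = c/2
print(1.0 / g)  # contracted length of a 1 m ruler: ~0.866 m
print(g)        # mass of a 1 kg ruler: ~1.155 kg
```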

What is really going on is that objects physically contract in the direction of their motion when accelerated. You use 50 joules of energy to accelerate a 1 kg mass up to 10 m/s. Surely the energy you need to start something moving is physically the energy used to contract all the atoms in the direction of motion?

Length contraction is real, and it is the physical material of the Michelson-Morley instrument that contracts; absolute speed is real too, because (as FitzGerald showed) the Michelson-Morley result is explained by contraction in the direction of motion together with a Maxwellian absolute speed of light (Maxwell predicted an absolute speed of light in the Michelson-Morley experiment, which he suggested, although he died long before the experiment was done). Nowadays, history is so "revised" that some people falsely claim that Maxwell predicted relativity from his flawed classical mathematical model of a light wave!

Special relativity is a mathematical obfuscation used to get rid of the mechanical basis for the length-contraction formula for the Michelson-Morley experiment, which FitzGerald gave in 1889. Furthermore, Einstein claimed that Maxwell's equations suggested relativity, but Maxwell was an aether theorist and interpreted his equations the opposite way. Of course Maxwell didn't predict the FitzGerald contraction, because his aether model was wrong. Joseph Larmor published a mechanical aetherial prediction of the time-dilation formula in his 1901 book "Aether and Matter". Larmor is remembered in physics today only for his equation for the spiral of electrons in a magnetic field.

I was surprised a decade ago to find that Eddington dismissed special relativity in describing general relativity in his 1920 book. Eddington says special relativity is wrong because accelerative motion is absolute (as measured against approximately "fixed" stars, for example): you rotate a bucket of water and you can see the surface indent. We know special relativity is just an approximation and that general relativity is deeper, because it deals with accelerations which are always needed for motion (for starting and stopping, before and after uniform motion).

In general relativity, the spacetime fabric pressure causes gravity by some kind of radiation LeSage mechanism, and the same mechanism causes the contraction term.

Einstein said that spacetime is four-dimensional and curved.

The Earth is contracted by 1.5 mm due to the contraction term in general relativity, which in the usual treatment is obtained from energy conservation of the gravitational field. But you can physically calculate the general relativity contraction from the FitzGerald contraction of length by the factor (1 - v^2/c^2)^1/2 = [1 - 2GM/(xc^2)]^1/2. I obtain this starting with the Newtonian approximate empirical formula, which gives the square of escape velocity as v^2 = 2GM/x, and the logical fact that the energy of mass in a gravitational field at radius x from the centre of mass is equivalent to the energy of an object falling there from an infinite distance, which by symmetry is equal to the energy of a mass travelling with escape velocity v. By Einstein's principle of equivalence between inertial and gravitational mass, this gravitational acceleration field produces an effect identical to ordinary motion. Therefore, we can place the square of escape velocity (v^2 = 2GM/x) into the FitzGerald-Lorentz contraction (1 - v^2/c^2)^1/2, which gives the gravitational contraction [1 - 2GM/(xc^2)]^1/2 ~ 1 - GM/(xc^2), using the first two terms in the binomial expansion.

This is a physical mechanism for the essential innovation of general relativity, the contraction term in the Einstein-Hilbert field equation. Because the contraction due to motion is physically due to head-on pressure (like wind pressure on your windscreen at high speed) from the spacetime fabric, it occurs only in the direction of motion, say the x direction, leaving the size of the mass in the y and z directions unaffected.

For gravity, the mechanism of spacetime fabric pressure causes contraction in the radial directions, outward from the centre of mass. This means the amount of contraction is, as Feynman calculated, about (1/3)GM/c^2 = 1.5 mm for the Earth.
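The 1.5 mm figure can be reproduced directly; a minimal sketch, assuming standard values for G, c, and the Earth's mass:

```python
# Assumed standard constants (not from the text):
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_earth = 5.972e24   # kg

# Radial contraction (1/3)GM/c^2, converted to millimetres
contraction_mm = G * M_earth / (3 * c**2) * 1e3
print(contraction_mm)  # ~1.5 mm
```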

If you look at Feynman's account of this, which is one of the most physically real, he gets his equation confused in words: in the relevant volume of his Lectures (chapter 42, p. 6), he gives equation 42.3 correctly, with the excess radius equal to the predicted radius minus the measured radius, but then in the text on the same page says ‘… actual radius exceeded the predicted radius …’ Talking about ‘curvature’ when dealing with radii is not helpful and probably caused the confusion.

‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus…. The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’ – Professor A.S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), Space Time and Gravitation, Cambridge University Press, Cambridge, 1921, pp. 20, 152.

I think Eddington's comments above are right. The speed of light is absolute, but this is covered up by the physical contraction of the Michelson-Morley apparatus in the direction of motion, so the result was null. What I want to ask is whether special relativity is self-contradictory here, because special relativity asserts both contraction and an invariant speed of light, which taken together look incompatible with the Michelson-Morley result.

To be clear, FitzGerald's empirical theory is "physical contraction due to ether pressure + Michelson-Morley result => variable speed of light depending on motion".

So special relativity is ad hoc and is completely incompatible with FitzGerald's prior analysis. Since experimental data only verifies the resulting equations, Ockham's razor tells us to accept FitzGerald's simple analysis of the facts, and to neglect the speculation of special relativity. Furthermore, even Einstein agrees with this:

‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant). …’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.

We know that there is a background force-causing spacetime radiation fabric, both from quantum field theory and from the correction of Maxwell's extra term (allegedly a vacuum current, but actually electromagnetic radiation). Maxwell thought that "displacement current" is due to the variation in voltage or electric field, when it is really electromagnetic radiation emitted due to the variation in electric current on a charging capacitor plate, which behaves a bit like a radio aerial; see:

One question I do have, Walter, is what we are trying to get out of this. I think it is going to be a very hard job to oust special relativity, for numerous reasons. However, it is necessary to get quantum gravity resolved and to dispense with outspoken pro-special relativity string theorists.

The equations from special relativity, in so much as they can also be obtained by other arguments from observations (FitzGerald, Lorentz, Larmor, etc), are useful.

Personally, I think the public relations aspect is paramount. Probably it is an error to attack Einstein or to disprove special relativity without giving a complete mathematical replacement. I do know that quantum field theory says that the virtual particles of the vacuum look different to observers in different motion, violating special relativity's "Lorentzian invariance" unless that specifically applies to the real contraction of material moving within the spacetime fabric, and the slowing down of physical processes, plus the piling up of the Higgs field at the "bow" of a relativistic particle to cause mass increase. This is extremely heretical as I will show.

Certainly nobody in any position of influence in physics wants to lose that position of influence by being name-called a 'crackpot', as Professor Lubos Motl of Harvard has done to me and others today at

"I am at least trying to inhibit the kind of 'discussion' in the direction of ... Nigel Cook, and so many others... what these crackpots are saying..."

Notice that string theory is entirely speculative, but Professor Lubos Motl states that it is not crackpot, without providing any evidence for 10 dimensions, unobserved superpartners, or gravitons. It is entirely consistent that tax-payer funded people, who get money for speculation dressed up as science (the old name for such people in the medical arena is quack), align themselves with others. On Motl's blog, Michael Varney, a graduate research student who co-authored the paper published by Nature, Upper limits to submillimetre-range forces from extra space-time dimensions (which neither confirms nor denies the abject speculation of string theory), used that paper to assert his right to call other people crackpots. He cites as an authority the crank.net internet site run by Erik Max Francis, described impressively by Bonnie Rothman Morris in the New York Times of Dec. 21, 2000 as 'not a scientist, and has taken only a handful of classes at a community college'. Erik has a list of sites of people suppressed by the mainstream for many reasons, and labels them crackpot or crank. He does not include his own claim to have proved Kepler's laws from other laws based on Kepler's laws (a circular crackpot argument), but presents his crackpotism separately. The New York Times article, which generally supports bigotry (

http://www.greatdreams.com/nyt10198.htm) mentions that: 'Phil Plaitt, the Web master of Bad Astronomy started his site (www.badastronomy.com) ... [is] an astronomer and a friend of Mr. Francis.' This association with bigotry agrees with my experience of being suppressed from Plaitt's discussion forum two years ago by bigots, supported by the moderator (whoever that anonymous person was) who didn't accept any part of the big bang, despite the facts here: http://www.astro.ucla.edu/~wright/tiredlit.htm

We already know from the +/- 3 mK cosine variation in the 2.7 K microwave background that there is a motion of the galaxy at 400 km/s toward Andromeda. This is probably largely due to the gravity of Andromeda, but it does indicate a kind of absolute motion. Taking the 400 km/s as an order-of-magnitude figure for the motion of matter in the Milky Way over the 1.5 x 10^10 yr since the big bang, that indicates we've moved about 0.1% of the radius of the universe since the big bang. Hence, we are near the middle if we treat the big bang as a type of explosion.

You know the regular "expanding cake" model, which tries to mathematically fit cosmology to general relativity equations without gravity dynamics (quantum gravity), with everything receding from everything else; but the problem is that nobody has ever looked at the universe from different places, so they really don't know, they're just speculating. The fact that so many epicycles are required using that approach (evolving dark energy being the latest), and are resolved in the correct mechanism, shows the need to employ the dynamics of quantum gravity to obtain the general relativity result, as shown here. Spacetime is still created in the instant of the big bang.

Clearly, Walter, there is a mixture of outright bigotry and ignorance in the scientific community, in addition to the usual attitude that most genuine physicists know there are problems between general relativity and quantum field theory, but keep silent if they don't have any constructive ideas on resolving these problems. The dynamics predict general relativity and the gravitational constant G within 2%, as shown on my home page and in one of the papers you kindly host.
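The 0.1% figure is simple arithmetic: the distance moved divided by the radius ct reduces to v/c. A minimal check, using the round numbers quoted above:

```python
# Round figures as used in the text:
v = 4.0e5             # m/s, the galaxy's motion from the CMB dipole
c = 3.0e8             # m/s
t = 1.5e10 * 3.156e7  # age of universe, seconds

distance_moved = v * t
radius = c * t        # radius of the observable universe taken as ct
fraction = distance_moved / radius  # reduces to v/c
print(fraction)  # ~0.0013, i.e. ~0.1% as an order of magnitude
```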

Yours sincerely,

Nigel

General relativity has to somehow allow the universe's spacetime to expand in 3 dimensions around us (big bang) while also allowing gravitation to contract the 3 dimensions of spacetime in the Earth, causing the Earth's radius to shrink by 1.5 millimetres, and (because of spacetime) causing time on the Earth to slow down by 1.5 parts in 6,400,000,000 (i.e., 1.5 mm in the Earth's radius of 6,400 km). This is the contraction effect of general relativity, which contracts distances and slows time.

The errors of general relativity being force-fitted to the universe as a whole are obvious: the outward expansion of spacetime in the big bang causes the inward reaction on the spacetime fabric, which causes the contraction as well as gravity and other forces. Hence, general relativity is a local-scale resultant of the big bang, not the cause or the controlling model of the big bang. The conventional paradigm confuses cause for effect; general relativity is an effect of the universe, not the cause of it. To me this is obvious; to others it is heresy.

What is weird is that Catt clings on to horseshit from crackpots which is debunked (rather poorly) here:

http://www.astro.ucla.edu/~wright/tiredlit.htm

A better way to debunk all the horseshit anti-expansion stuff is to point out that the cosmic background radiation, measured for example by the COBE satellite in 1992, is BOTH (a) the most red-shifted radiation (it is red-shifted by a factor of 1000, from a temperature of 3000 K infra-red to 2.7 K microwaves), and (b) the most perfect blackbody (Planck) radiation spectrum ever observed. The only mechanism for a uniform red-shift by the same factor at all frequencies is recession, for the known distribution of masses in the universe. The claims that the perfectly sharp and uniformly shifted light from distant stars has been magically scattered by clouds of dust, without diffusing the spectrum or image, are horseshit, like claiming the moon landings are a hoax.

The real issue is that the recession speeds are observations which apply to fixed times past, as a certain fact, not to fixed distances. Hence the recession is a kind of acceleration (velocity/time) for the observable spacetime which we experience. This fact leads to outward force F = ma = 10^43 N, and by Newton's 3rd law an equal inward force, which predicts gravity via an improved, QFT-consistent LeSage mechanism.

The mechanism behind the deflection of light by the sun is that everything, including light, gains gravitational potential energy as it approaches a mass like the sun.

Because the light passes perpendicularly to the gravity field vector at closest approach (the average deflection position), the increased gravitational energy of a slow-moving body would be used equally in two ways: 50% of the energy would go into increasing the speed, and 50% into changing the direction (bending it towards the sun).

Light cannot increase in speed, so 100% of the gained energy must go into changing the direction. This is why the deflection of light by the sun is exactly twice that predicted for slow-moving particles by Newton's law. All GR is doing is accounting for energy.
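The factor-of-two claim can be checked against the standard grazing-ray deflection formulas, Newtonian 2GM/(bc^2) versus general-relativistic 4GM/(bc^2). The solar values below are standard figures, assumed rather than taken from the text:

```python
import math

# Assumed standard solar values (not from the text):
GM_sun = 1.327e20   # gravitational parameter of the sun, m^3/s^2
R_sun = 6.96e8      # solar radius = grazing impact parameter, m
c = 2.998e8         # speed of light, m/s

newton = 2 * GM_sun / (R_sun * c**2)    # slow-particle (Newtonian) deflection, radians
einstein = 4 * GM_sun / (R_sun * c**2)  # general-relativistic deflection, radians

arcsec = math.degrees(einstein) * 3600
print(einstein / newton, arcsec)  # factor of 2; ~1.75 arcseconds
```

The ~1.75 arcsecond result is the value Eddington's 1919 eclipse expedition set out to test.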

This empiricist model accurately predicts the value of G using cosmological data (Hubble constant and density of universe), eliminating most dark matter in the process. It gets rid of the need for inflation since the effective strength of gravity at 300,000 years was very small, so the ripples were small.

=> No inflation needed. All forces (nuclear, EM, gravity) are in constant ratio because all have inter-related QFT energy exchange mechanisms. Therefore the fine structure parameter 137 (ratio of strong force to EM) remains constant, and the ratio of gravity to EM remains constant.

The sun's radiating power and the nuclear reactions in the first three minutes are not affected at all by variations in the absolute strengths of all the fundamental forces, since they remain in the same ratio.

Thus, if you double gravity while the nuclear and EM force strengths are also doubled, the sun will not shine any differently than it does now. The extra compression due to an increase in gravity would be expected to increase the fusion rate, but the extra Coulomb repulsion between approaching protons (due to the rise in EM force) cancels out the gravitational compression.

So the ramshackle-looking empiricist model does not conflict at all with the nucleosynthesis of the BB, or with stellar evolution. It does conflict with the CC and inflation, but those are just epicycles in the mainstream model, not objective facts.

This is hyped up to get media attention: the CBR from 300,000 years after BB says nothing of the first few seconds, unless you believe their vague claims that the polarisation tells something about the way the early inflation occurred. That might be true, but it is very indirect.

I do agree with Sean on CV that n = 0.95 may be an important result from this analysis. I’d say it’s the only useful result. But the interpretation of the universe as 4% baryons, 22% dark matter and 74% dark energy is a nice fit to the existing LambdaCDM epicycle theory from 1998. The new results on this are not too different from previous empirical data, but this ‘nice consistency’ is a euphemism for ‘useless’.

WMAP has produced more accurate spectral data of the fluctuations, but that doesn’t prove the ad hoc cosmological interpretation which was force-fitted to the data in 1998. Of course the new data fit the same ad hoc model; unless there was a significant error in the earlier data, they would do. Ptolemy’s universe, once fiddled, continued to model things, with only occasional ‘tweaks’, for centuries. This doesn’t mean you should rejoice.

Dark matter, dark energy, and the tiny cosmological constant describing the dark energy remain massive epicycles in current cosmology. The Standard Model has not been extended to include dark matter and dark energy. It is not hard science; it’s a very indirect interpretation of the data. I’ve got a correct prediction, made without a cosmological constant and published in ‘96, years before the ad hoc Lambda-CDM model. Lunsford’s unification of EM and GR also dismisses the CC.

Secret milkshake, I agree! The problem religion posed in the past to science was insistence on the authority of scripture and accepted belief systems over experimental data. If religion comes around to looking at experimental data and trying to go from there, then it becomes more scientific than certain areas of theoretical physics. Does anyone know what Barrow has to say about string theory?

I learnt a lot of out-of-the-way ‘trivia’ from ‘The Anthropic Cosmological Principle’, particularly the end notes, e.g.:

‘… should one ascribe significance to empirical relations like m(electron)/m(muon) ~ 2{alpha}/3, m(electron)/m(pion) ~ {alpha}/2 … m(eta) - 2m(charged pion) = 2m(neutral pion), or the suggestion that perhaps elementary particle masses are related to the zeros of appropriate special functions?’

By looking at numerical data, you can eventually spot more ‘coincidences’ that enable empirical laws to be formulated. If alpha is the core charge shielding factor by the polarised vacuum of QFT, then it is possible to justify particle mass relationships; all observable particles apart from the electron have masses quantized as M = [electron mass] x n(N+1)/(2 x alpha) ~ 35n(N+1) MeV, where n is 1 for leptons, 2 for mesons and naturally 3 for baryons. N is also an integer, and takes values of the ‘magic numbers’ of nuclear physics for relatively stable particles: for the muon (the most stable particle after the neutron), N = 2; for nucleons, N = 8; for the tauon, N = 50.
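A sketch of this empirical mass formula, evaluated for the three examples given; the measured masses quoted in the comments are standard values, added only for comparison:

```python
m_e = 0.511            # electron mass, MeV (assumed standard value)
alpha = 1 / 137.036    # fine structure constant (assumed standard value)

def mass_mev(n, N):
    """The text's empirical rule: M = m_e * n(N+1)/(2*alpha) ~ 35n(N+1) MeV."""
    return m_e * n * (N + 1) / (2 * alpha)

muon = mass_mev(1, 2)     # lepton (n=1), N=2  -> ~105 MeV (measured: 105.7)
nucleon = mass_mev(3, 8)  # baryon (n=3), N=8  -> ~945 MeV (measured: ~939)
tauon = mass_mev(1, 50)   # lepton (n=1), N=50 -> ~1786 MeV (measured: ~1777)
print(muon, nucleon, tauon)
```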

Hence, there’s a selection principle allowing the masses of relatively stable particles to be deduced. Since the Higgs boson causes mass and may have a value like that of the Z boson, it’s interesting that [Z-boson mass]/(3/2 x 2Pi x 137 x 137) = 0.51 MeV (the electron mass), and [Z-boson mass]/(2Pi x 137) ~ 105.7 MeV (the muon mass). In the electron, the core must be quite distant from the particle giving the mass, so there are two separate vacuum polarisations between them, weakening the coupling to just alpha squared (and a geometrical factor). In the muon and all particles other than the electron, there is extra binding energy and so the core is closer to the mass-giving particle; hence only one vacuum polarisation separates them, so the coupling is alpha.
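The two Z-boson ratios quoted above are easy to verify numerically; the Z mass below is the standard 91.1876 GeV figure, an assumed input:

```python
import math

m_Z = 91187.6  # Z-boson mass in MeV (standard value, assumed input)

electron_est = m_Z / ((3 / 2) * 2 * math.pi * 137 * 137)  # ~0.51 MeV
muon_est = m_Z / (2 * math.pi * 137)                      # ~105.9 MeV
print(electron_est, muon_est)
```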

Remember that Schwinger’s coupling correction in QED increases Dirac’s magnetic moment of the electron to about 1 + alpha/(2Pi). When you think outside the box, sometimes coincidences have a reason.

The electromagnetic force-carrying radiation is also the cause of gravity, via the particles which cause the mass of charged elementary particles. The vacuum particles ("Higgs particles") that give rise to all mass in the Standard Model haven't been officially observed yet, and the official prediction of the energy of the particle is very vague, similar to the top quark mass, 172 GeV. However, my argument is that the mass of the uncharged Z-boson, 91 GeV, determines the masses of all the other particles. It works. The charged cores of quarks, electrons, etc., couple up (strongly or weakly) with a discrete number of massive trapped Z-bosons which exist in the vacuum. This mechanism also explains QED, such as the magnetic moment of the electron, 1 + alpha/(2Pi) magnetons.

Literally, the electromagnetic force-causing radiation (vector bosons) interact with charged particle cores to produce EM forces, and with the associated "higgs bosons" (gravitationally self-trapped Z-bosons) to produce the correct inertial masses and gravity for each particle.

The lepton and hadron masses are quantized, and I've built a model, discussed there and on my blog, which takes this model and uses it to predict other things. I think this is what science is all about. The mainstream (string theory, cosmological constant fiddled cosmology) is too far out, and unable to make any useful predictions.

As for the continuum: the way to understand it is through correcting Maxwell's classical theory of the vacuum. Quantum field theory heuristically accounts for electrostatic (Coulomb) forces with a radiation-exchange mechanism. In the LeSage mechanism, the radiation causing Coulomb's law causes all forces by pushing. I worked out the mechanism by which electric forces operate in the April 2003 EW article; attraction occurs by mutual shielding, as with gravity, but is stronger due to the sum of the charges in the universe. If you have a series of parallel capacitor plates with different charges, each separated by a vacuum dielectric, the total (net) voltage needs to take into account the orientation of the plates.

The vector sum is the same as a statistical random walk (drunkard's walk): the total is equal to the average voltage between a pair of plates, multiplied by the square root of the total number of charges. (This allows for the angular geometry dispersion, not distance, because the universe is spherically symmetrical around us - thank God for keeping the calculation very simple! - and there is as much dispersion outward in the random walk as there is inward, so the effects of inverse-square law dispersions and concentrations with distance both exactly cancel out.)

Gravity is the force that comes from a straight-line sum, which is the only other option besides the random walk. In a straight line, the sum of charges is zero along any vector across the universe if that line contains an equal average number of positive and negative charges. However, it is equally likely that a straight radial line drawn at random across the universe contains an odd number of charges, in which case the average charge is 2 units (2 units being the difference between 1 negative charge and 1 positive charge). Therefore, the straight-line sum has only two options, each with 50% probability: an even number of charges, giving zero net result, and an odd number of charges, giving 2 unit charges as the net sum. The mean of the two options is simply (0 + 2)/2 = 1 unit. Hence, electromagnetism is the square root of the number of charges in the universe times the weak-option force (gravity).
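The sqrt(N) behaviour of the random-walk sum, contrasted with the straight-line average of 1 unit, can be illustrated with a small Monte Carlo sketch; the charge count N and trial count below are arbitrary illustration values:

```python
import math
import random

random.seed(1)  # fixed seed so the sketch is repeatable
N = 2500        # unit charges per "walk" (arbitrary illustration value)
trials = 400

# RMS of the sum of N random +/-1 unit charges: expected to be ~sqrt(N)
rms = math.sqrt(
    sum(sum(random.choice((-1, 1)) for _ in range(N)) ** 2 for _ in range(trials))
    / trials
)
print(rms, math.sqrt(N))  # the rms comes out close to sqrt(N) = 50

# Straight-line sum: 0 (even count) or 2 (odd count), each 50% likely
straight_line_mean = (0 + 2) / 2
print(straight_line_mean)  # 1 unit, the "weak option" of the text
```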

Thus, electromagnetism and gravity are different ways that charges add up. Electric attraction is, as stated, simply a mutual blocking of EM "vector boson" radiation by charges, like LeSage gravity. Electric repulsion is an exchange of radiation. The charges recoil apart because the underlying physics in an expanding universe (with "red-shifted", or at least reduced-energy, radiation pressing in from the outside, due to receding matter in the surrounding universe) means their exchange of radiation results in recoil away from one another (imagine two people firing guns at each other, for a simple analogy; they would recoil apart).

Magnetic force is apparently, as Maxwell suggested, due to the spins of the vacuum particles, which line up.

Ivor Catt, who published in IEEE Trans. EC-16 and IEE Proc. 83 and 87 evidence proving that electric energy charges a capacitor at light speed and can't slow down afterward (hence electric energy has light speed), is wondering whether to throw a celebration on 26/28 May 2006 to mark the most ignored paradigm-shift in history. Catt is the discoverer of so-called Theory C (no electric current), which is only true in a charged capacitor (or other static charge). However, Catt fails to acknowledge that his own evidence for a light-speed (spin) electron is a massive advance. In the previous posts, I've quoted results from Drs. Thomas Love, Asim O. Baruk, and others showing that the principle of superposition (which is one argument for ignoring the reality of electron spin in quantum mechanics) is a mathematical falsehood resulting from a contradiction between the two versions of the Schroedinger equation (Dr Love's discovery), since you change equations when taking a measurement!

Hence, a causal model of spin, such as a loop of gravitationally self-trapped (i.e., black hole) Heaviside electric 'energy current' (the Heaviside vector, describing light speed electric energy in conductors - Heaviside worked on the Newcastle-Denmark Morse Code telegraph line in 1872), is the reality of the electron. You can get rid of the half-integer spin problem by having the transverse vector rotate half a turn during a revolution like the Moebius strip of geometry. It is possible for a person to be so skeptical that they won't listen to anything. Science has to give sensible reasons for dismissing evidence. An empirical model based on facts which predicts other things (gravitation, all forces, all particle masses) is scientific. String 'theory' isn't.

On the subject of drl versus the cosmological constant: Dr Lunsford outlines problems in the 5-D Kaluza-Klein abstract (mathematical) unification of Maxwell's equations and GR, and published his own approach in Int. J. Theor. Phys., v. 43 (2004), No. 1, pp. 161-177. This peer-reviewed paper was submitted to arXiv.org but was removed from arXiv.org by censorship, apparently because it investigated a 6-dimensional spacetime which is not consistent with Witten’s speculative 10/11-dimensional M-theory. It is however on the CERN document server at

‘Gravitation and Electrodynamics over SO(3,3)’ on CERN document server, EXT-2003-090: ‘an approach to field theory is developed in which matter appears by interpreting source-free (homogeneous) fields over a 6-dimensional space of signature (3,3), as interacting (inhomogeneous) fields in spacetime. The extra dimensions are given a physical meaning as ‘coordinatized matter’. The inhomogeneous energy-momentum relations for the interacting fields in spacetime are automatically generated by the simple homogeneous relations in 6-D. We then develop a Weyl geometry over SO(3,3) as base, under which gravity and electromagnetism are essentially unified via an irreducible 6-calibration invariant Lagrange density and corresponding variation principle. The Einstein-Maxwell equations are shown to represent a low-order approximation, and the cosmological constant must vanish in order that this limit exist.’

It is obvious that there are 3 expanding spacetime dimensions describing the evolution of the big bang, and 3 contractable dimensions describing matter. Total: 6 distinguishable dimensions to deal with.

Lunsford begins with an enlightening overview of attempts to unify electromagnetism and gravitation:

‘The old goal of understanding the long-range forces on a common basis remains a compelling one. The classical attacks on this problem fell into four classes:

‘All these attempts failed. In one way or another, each is reducible and thus any unification achieved is purely formal. The Kaluza theory requires an ad hoc hypothesis about the metric in 5-D, and the unification is non-dynamical. As Pauli showed, any generally covariant theory may be cast in Kaluza’s form. The Einstein-Mayer theory is based on an asymmetric metric, and as with the theories based on asymmetric connection, is essentially algebraically reducible without additional, purely formal hypotheses.

‘Weyl’s theory, however, is based upon the simplest generalization of Riemannian geometry, in which both length and direction are non-transferable. It fails in its original form due to the non-existence of a simple, irreducible calibration invariant Lagrange density in 4-D. One might say that the theory is dynamically reducible. Moreover, the possible scalar densities lead to 4th order equations for the metric, which, even supposing physical solutions could be found, would be differentially reducible. Nevertheless the basic geometric conception is sound, and given a suitable Lagrangian and variational principle, leads almost uniquely to an essential unification of gravitation and electrodynamics with the required source fields and conservation laws.’ Again, the general concepts involved are very interesting: ‘from the current perspective, the Einstein-Maxwell equations are to be regarded as a first-order approximation to the full calibration-invariant system.

‘One striking feature of these equations that distinguishes them from Einstein’s equations is the absent gravitational constant – in fact the ratio of scalars in front of the energy tensor plays that role. This explains the odd role of G in general relativity and its scaling behaviour. The ratio has conformal weight 1 and so G has a natural dimensionfulness that prevents it from being a proper coupling constant – so the theory explains why general relativity, even in the linear approximation and the quantum theory built on it, cannot be regularised.’

A causal model for GR must separate out the description of matter from the expanding spacetime universe. Hence you have three expanding spacetime dimensions, but matter itself is not expanding, and is in fact contracted by the gravitational field, the source for which is vector boson radiation in QFT.

The CC is used to cancel out the gravitational retardation of supernovae at long distances. You can get rid of the CC by taking the Hubble expansion as primitive, and gravity as a consequence of expansion in spacetime. Outward force F = ma = mc/(age of universe) => inward force (Newton's 3rd law). The inward force, according to the Standard Model possibilities of QFT, must be carried by vector boson radiation. So causal shielding (LeSage) gravity is a result of the expansion. Thus, quantum gravity and the CC problem are dumped in one go.

I personally don't like this result; it would be more pleasing not to have to do battle with the mainstream over the CC. But frankly I don't see how an ad hoc model composed of 96% dark matter and dark energy can be defended to the point of absurdity by suppressing workable alternatives which are more realistic.

The same has happened in QFT due to strings. When I was last at university, I sent Stanley Brown, editor of Physical Review Letters, my gravity idea, a really short, concise paper, and he rejected it for being an "alternative" to string theory! I don't believe he even bothered to check it. I'd probably have done the same thing if I was flooded by nonsense ideas from outsiders, but it is a sad excuse.

Lee Smolin, in starting with known facts of QFT and building GR from them, is an empiricist, in contrast to the complete speculation of the string theorists.

We know some form of LQG spin foam vacuum is right, because vector bosons (1) convey force, and (2) have spin.

For comparison, nobody has evidence for superpartners, extra dimensions, or any given stringy theory.

Danny Lunsford unites Maxwell's equations and GR using a plausible treatment of spacetime in which there are exactly twice as many dimensions as observed, the extra dimensions describing non-expanding matter while the normal spacetime dimensions describe the expanding spacetime. Because the expanding big bang spacetime is symmetrical around us, those three dimensions can be lumped together.

The problem is that the work by Smolin and Lunsford is difficult for the media to report, and is not encouraged by string theorists, who have too much power.

Re inflation: the observed CBR smoothness "problem" at 300,000 years (the very tiny size scale of the ripples across the sky) is only a problem for seeding galaxy formation in the mainstream paradigm for GR.

1. General relativity and elementary particles

‘… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that ‘flows’... A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Professor Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp. 89-90.

General relativity can be understood by the pressure of a perfect fluid spacetime, causing acceleration where masses block or reflect back the inward pressure. Objects at greater distances from us are further back in time.

Gravity effects travel with light speed, so there is a physical delay in the arrival of redshift information from stars. This ‘spacetime’ combination is distance = ct, where c is light speed and t is time past. The sun is seen 8.3 minutes in the past, the next star 4.3 years in the past, and so on. Because recession will continue while light is travelling to us, it is correct to see the recession velocities in spacetime as increasing with time past, an acceleration outward in spacetime of a = c/t = (light speed)/(age of universe) = 6 x 10^-10 m/s^2.

We can then calculate the outward force of the big bang in spacetime, using Newton’s 2nd law, F = ma. We know the mass of the universe approximately by multiplying together the mass of a typical star, the number of stars per galaxy (10^11), and the number of galaxies in the universe (10^11). This suggests that the outward force of the universe is of order 10^42 Newtons.
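A minimal sketch of this estimate, combining a = c/t with F = ma; the star mass is an assumed round figure (about one solar mass), and the counts are the rough values quoted above:

```python
# Rough figures as used in the text:
m_star = 2.0e30          # kg, a typical star (~one solar mass; assumed)
stars_per_galaxy = 1e11
galaxies = 1e11
M_universe = m_star * stars_per_galaxy * galaxies  # ~2e52 kg

c = 3.0e8                # m/s
t = 1.5e10 * 3.156e7     # age of universe, seconds
a = c / t                # outward acceleration in spacetime, ~6e-10 m/s^2

F = M_universe * a       # Newton's 2nd law
print(F)  # of order 10^42 to 10^43 N
```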

Another more accurate way of calculating the effective mass of the universe is to take the average local density of space for times close to the present, and to multiply that by the volume of the universe for a radius of ct where t is the age of the universe, and also by a correction factor to allow for the higher averaged density at times in the past.

By Newton’s 3rd law of motion, this outward force has an equal inward reaction, which in quantum field theory can be carried by light speed ‘gauge boson’ gravity causing radiation. Within the double cone shape illustrated above, the presence of mass M affects the spacetime pressure on the observer m. The inward arrows from outside the double cone all cancel each other out completely (as far as the observer m is concerned). But in the double cone region, the pressure from the left hand side is stopped (since M reflects it back), so the pressure from the right hand side (within the cone) pushes the observer m towards mass M. The resulting force of gravity can be calculated from geometry, because the net force is equal to the total inward force times the proportion of that total which is covered by the cone on the right hand side. That proportion is simply the surface area of the base of the cone, divided by the surface area of a sphere of the same radius (R = ct). The inverse-square law is automatically introduced by the geometry, which shows that the area of the base of the cone on the right hand side is equal to the area of the mass shield on the left hand side, multiplied by the ratio (R/r)^2, where r is the distance of mass M from observer m. Notice that there is no speculation involved; the facts are established!

Because of Drs Susskind and Witten, the media has let string theory go on without asking for definite testable predictions. I don’t think the layman public takes much notice of ‘theory’ it can’t understand. There are three types of not-yet-falsified theory:

3. Untestable/not falsifiable (over-hyped string theory’s vague landscape ‘predicting’ 10^500 vacua, 10/11 dimensions, vague suggestions of superpartners without predicting their energy to show if they can be potentially checked or not, ‘prediction’ of unobservable gravitons without any testable predictions of gravity).

Dynamics of gravity

Gravity is the force of Feynman diagram gauge bosons coming from distances/times in the past. In the Standard Model, the quantum field theory of electromagnetic and nuclear interactions which has made numerous well-checked predictions, forces arise by the exchange of gauge bosons. This is well known from the pictorial ‘Feynman diagrams’ of quantum field theory. Gravitation, as illustrated by this mechanism and proved below, is just this exchange process. Gauge bosons hit the mass and bounce back, like a reflection. This causes the contraction term of general relativity, a physical contraction of radius around a mass: (1/3)GM/c^2 = 1.5 mm for Earth.
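The 1.5 mm figure quoted for the Earth is easy to verify. This sketch assumes the standard values of G, the Earth's mass and the speed of light, none of which are stated in the text.

```python
# Check of the quoted contraction term (1/3)GM/c^2 for the Earth.
# Assumed standard values: G, Earth mass, light speed (CODATA, rounded).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # mass of the Earth, kg
c = 2.998e8          # light speed, m/s
contraction = G * M_earth / (3 * c**2)
print(f"contraction = {contraction*1000:.2f} mm")  # ~1.48 mm, i.e. the ~1.5 mm quoted
```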

Mass (which by the well-checked equivalence principle of general relativity is identical for inertial and gravitational forces), arises not from the fundamental core particles of matter themselves, but by a miring effect of the spacetime fabric, the ‘Higgs bosons’. Forces are exchanges of gauge bosons: the pressure causes the cosmic expansion. The big bang observable in spacetime has speed from 0 to c with times past of 0 toward 15 billion years, giving outward force of F = ma = m.(variation in speeds from 0 to c)/(variation in times from 0 to age of universe) ~ 7 x 10^43 Newtons. Newton’s 3rd law gives equal inward force, carried by gauge bosons, which are shielded by matter. The gauge bosons interact with uniform mass Higgs field particles, which do the shielding and have mass. Single free fundamental rest mass particles (electrons, positrons) can only associate with other particles by electromagnetism, which is largely shielded by the veil of polarised vacuum charges surrounding the fundamental particle core. Quarks only exist in pairs or triplets, so the fundamental particles are close enough that the intervening polarised vacuum shield effect is very weak, so they have stronger interactions.

Correcting the Hubble expansion parameter for spacetime: at present, recession speeds are divided by observed distances, H = v/R. This is ambiguous for ignoring time! The distance R is increasing all the time, so it is not time independent. To get a proper Hubble ‘constant’ you therefore need to replace distance with time t = R/c. This gives the recession constant as v/t, which equals v/t = v/(R/c) = vc/R = cH. So the correct spacetime formulation of the cosmological recession is v/t = cH = 6 x 10^-10 ms^-2. Outward acceleration! This means that the mass of the universe has a net outward force of F = ma = 7 x 10^43 N. (Assuming that F = ma is not bogus!) Newton’s 3rd law says there is an implosion inward of the same force, 7 x 10^43 N. (Assuming that Newton’s 3rd law is not bogus!) This predicts gravity as the shielding of this inward force of gauge boson radiation to within existing data! (Assuming that the inward force is carried by the gauge bosons which cause gravity.)

Causal approach to loop quantum gravity (spin foam vacuum): volume contains matter and spacetime fabric, which behaves as the perfect fluid analogy to general relativity. As particles move in the spacetime fabric, it has to flow out of the way somewhere. It goes into the void behind the moving particle. Hence, the spacetime fabric filling a similar volume goes in the opposite direction to moving matter, filling in the void behind. Two analogies: (1) ‘holes’ in semiconductor electronics go the other way to electrons, and (2) a 70 litre person walking south along a corridor is matched by 70 litres of air moving north. At the end, the person is at the opposite end of the corridor from where he started, and 70 litres of air has moved up to fill in the space he vacated. Thus, simple logic and facts give us a quantitative and predictive calculating tool: an equal volume of the fluid goes in the opposite direction with the same motion, which allows the inward vacuum spacetime fabric pressure from the big bang to be calculated. This allows gravity to be estimated the same way, with the same result as the other method. Actually, boson radiations spend part of their existence as matter-antimatter pairs. So the two calculations do not duplicate each other. If the fraction due to radiation (boson) pressure is f, that due to perfect fluid pressure is 1 - f. The total remains the same: (f) + (1 - f) = 1.

The net force is simply the proportion of the force from the projected cone (in the illustrations below), which is due to the asymmetry introduced by the effect of mass on the Higgs field (reflecting inward directed gauge bosons back). Outside the cone areas, the inward gauge boson force contributions are symmetrical from opposite directions around the observer, so those contributions all cancel out! This geometry predicts the strength of gravity very accurately…

There is strong evidence from electromagnetic theory that every fundamental particle has black-hole cross-sectional shield area for the fluid analogy of general relativity. (Discussed further on.)

The effective shielding radius of a black hole of mass M is equal to 2GM/c^2. A shield, like the planet earth, is composed of very small, sub-atomic particles. The very small shielding area per particle means that there will be an insignificant chance of the fundamental particles within the earth ‘overlapping’ one another by being directly behind each other.
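To see how small the per-particle shield is, the formula r = 2GM/c^2 can be evaluated for an electron. This is an illustrative sketch using assumed standard constants; the choice of the electron as the example particle is mine, not the text's.

```python
# The text's effective shielding radius r = 2GM/c^2, evaluated for an
# electron to show the tiny per-particle cross-section.
# Assumed standard constants (CODATA, rounded).
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # light speed, m/s
m_e = 9.109e-31    # electron mass, kg
r = 2 * G * m_e / c**2          # black-hole radius for an electron mass
area = math.pi * r**2           # cross-sectional shield area, pi * r^2
print(f"r    = {r:.2e} m")      # ~1.35e-57 m
print(f"area = {area:.2e} m^2")
```

With a shield area this small per particle, the probability of two particles in the Earth lying exactly behind one another is indeed negligible, which is the point the paragraph above makes.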

The total shield area is therefore directly proportional to the total mass: the total shield area is equal to the area of shielding by 1 fundamental particle, multiplied by the total number of particles. (Newton showed that a spherically symmetrical arrangement of masses, say in the earth, by the inverse-square gravity law is similar to the gravity from the same mass located at the centre, because the mass within a shell depends on its area and the square of its radius.) The earth’s mass in the standard model is due to particles associated with up and down quarks: the Higgs field.

The cross-sectional area of the shield projected to radius R is equal to the area of the fundamental particle (π multiplied by the square of the radius of the black hole of similar mass), multiplied by (R/r)^2, which is the inverse-square law for the geometry of the implosion. The total spherical area with radius R is simply 4π multiplied by the square of R. Inserting the simple Hubble law results c = RH and R/c = 1/H gives us F = (4/3)πρG^2M^2/(Hr)^2. We then set this equal to F = Ma and solve, getting G = (3/4)H^2/(πρ). When the effect of the higher density in the universe at the great distance R is included, this becomes G = (3/4)H^2/(πρe^3).

If there were any other reason for gravity with similar accuracy, the strength of gravity would then be twice what we measure, so this is a firm testable prediction/confirmation that can be checked even more delicately as more evidence becomes available from current astronomy research…

Feynman discusses the LeSage gravity idea in ‘Character of Physical Law’ 1965 BBC lectures, with a diagram showing that if there is a pressure in space, shielding masses will create a net push. ‘If your paper isn’t read, they are ignorant of it. It isn’t even a put-down, just a fact.’ – my comment on Motl’s blog. The next comment was from Peter Woit: ‘in terms of experimentally checkable predictions, no one has made any especially significant ones since the standard model came together in 1973 with asymptotic freedom.’ Woit has seen the censorship problem! Via the October 1996 Electronics World letters, this mechanism – which Dr Philip Campbell of Nature had said he was ‘not able’ to publish – correctly predicted that the universe would not be gravitationally decelerating. This was confirmed two years later experimentally by the discovery of Perlmutter, which Nature did publish, although it omitted to say that it had been predicted.

Air is flowing around you like a wave as you walk down a corridor (an equal volume goes in the other direction at the same speed, filling in the volume you are vacating as you move). It is not possible for the surrounding fluid to move in the same direction, or a void would form BEHIND and fluid pressure would continuously increase in FRONT until motion stopped. Therefore, an equal volume of the surrounding fluid moves in the opposite direction at the same speed, permitting uniform motion to occur! Similarly, as fundamental particles move in space, a similar amount of mass-energy in the fabric of space (spin foam vacuum field) is displaced as a wave around the particles in the opposite direction, filling in the void volume being continuously vacated behind them. For the mass of the big bang, the mass-energy of Higgs/virtual particle field particles in the moving fabric of space is similar to the mass of the universe. As the big bang mass goes outward, the fabric of space goes inward around each fundamental particle, filling in the vacated volume. (This inward moving fabric of space exerts pressure, causing the force of gravity.)

‘Popular accounts, and even astronomers, talk about expanding space. But how is it possible for space … to expand? … ‘Good question,’ says [Steven] Weinberg. ‘The answer is: space does not expand. Cosmologists sometimes talk about expanding space – but they should know better.’ [Martin] Rees agrees wholeheartedly. ‘Expanding space is a very unhelpful concept’.’ – New Scientist, 17 April 1993, pp. 32-3.

π = circumference divided by the diameter of a circle, approx. 3.14159265…

e = base of natural logarithms, approx. 2.718281828…

Mass continuity equation (for the galaxies in the space-time of the receding universe): dρ/dt + div.(ρv) = 0. Hence: dρ/dt = -div.(ρv). Now around us, dx = dy = dz = dr, where r is radius. Hence the divergence (div) term is: -div.(ρv) = -3d(ρv)/dx. For spherical symmetry the Hubble equation is v = Hr. Hence dρ/dt = -div.(ρv) = -div.(ρHr) = -3d(ρHr)/dr = -3ρH dr/dr = -3ρH. So dρ/dt = -3ρH. Rearranging: -3H dt = (1/ρ) dρ. Solving by integrating this gives: -3Ht = (ln ρ1) – (ln ρ). Using the base of natural logarithms (e) to get rid of the ln’s: e^(-3Ht) = density ratio. Because H = v/r = c/(radius of universe) = 1/(age of universe, t) = 1/t, we have: e^(-3Ht) = (density ratio of current time to earlier, higher effective density) = e^(-3(1/t)t) = e^-3 = 1/20. All we are doing here is focussing on spacetime in which density rises back in time, but the outward motion or divergence of matter due to the Hubble expansion offsets this at great distances. So the effective density doesn’t become infinity, only e^3 or 20 times the local density of the universe at the present time. The inward pressure of gauge bosons from greater distances initially rises because the density of the universe increases at earlier times, but then falls because of divergence, which causes energy reduction (like red-shift) of inward coming gauge bosons.
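The final numerical step of this derivation, with Ht set equal to 1, reduces to evaluating e^-3. A one-line check confirms the 1/20 figure:

```python
import math

# The text's density-ratio result: with H treated as 1/t, the factor
# exp(-3Ht) collapses to exp(-3), so the effective density at great
# distances is e^3 ~ 20 times the local present-day density.
ratio = math.exp(-3)
print(f"e^-3 = {ratio:.4f}")        # ~0.0498, i.e. about 1/20
print(f"e^3  = {math.exp(3):.1f}")  # ~20.1
```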

The proof [above] predicts gravity accurately, with G = ¾ H^2/(πρe^3). Electromagnetic force (discussed in the April 2003 Electronics World article, reprinted below with links to illustrations) in quantum field theory (QFT) is due to ‘virtual photons’ which cannot be seen except via the forces they produce. The mechanism is continuous radiation from spinning charges; the centripetal acceleration of a = v^2/r causes the energy emission, which is naturally in exchange equilibrium between all similar charges, like the exchange of quantum radiation at constant temperature. This exchange causes a ‘repulsion’ force between similar charges, due to recoiling apart as they exchange energy (two people firing guns at each other recoil apart). In addition, an ‘attraction’ force occurs between opposite charges that block energy exchange, and are pushed together by energy being received in other directions (shielding-type attraction). The attraction and repulsion forces are equal for similar net charges (as proved in the April 2003 Electronics World article reprinted below). The net inward radiation pressure that drives electromagnetism is similar to gravity, but the addition is different. The electric potential adds up with the number of charged particles, but only in a diffuse scattering type way like a drunkard’s walk, because straight-line additions are cancelled out by the random distribution of equal numbers of positive and negative charge. The addition only occurs between similar charges, and is cancelled out on any straight line through the universe. The correct summation is therefore statistically equal to the square root of the number of charges of either sign multiplied by the gravity force proved above.
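The gravity prediction G = ¾ H^2/(πρe^3) can be evaluated with the density and Hubble figures quoted later in this document (ρ = 4.7 x 10^-28 kg/m^3, H = 1.62 x 10^-18 s^-1 for 50 km/s/Mpc). This sketch simply plugs those numbers in:

```python
import math

# Numerical check of G = (3/4) H^2 / (pi * rho * e^3), using the
# density and Hubble parameter values quoted in the text.
H = 1.62e-18    # Hubble parameter, s^-1 (50 km/s/Mpc)
rho = 4.7e-28   # local density of the universe, kg/m^3
G = 0.75 * H**2 / (math.pi * rho * math.e**3)
print(f"G = {G:.2e} m^3 kg^-1 s^-2")  # ~6.6e-11, vs measured 6.674e-11
```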

Hence F(electromagnetism) = mMGN^(1/2)/r^2 = q1q2/(4πε r^2) (Coulomb’s law)

where G = ¾ H^2/(πρe^3) as proved above, and N is as a first approximation the mass of the universe (4πR^3ρ/3 = 4π(c/H)^3ρ/3) divided by the mass of a hydrogen atom. This assumes that the universe is hydrogen. In fact it is 90% hydrogen by atomic abundance as a whole, although less near stars (only 70% of the solar system is hydrogen, due to fusion of hydrogen into helium, etc.). Another problem with this way of calculating N is that we assume the fundamental charges to be electrons and protons, when in fact protons contain two up quarks (each +2/3) and one down quark (-1/3), so there are twice as many fundamental particles. However, the quarks remain close together inside a nucleon and behave for most electromagnetic purposes as a single fundamental charge. With these approximations, the formulae above yield a prediction of the strength factor ε in Coulomb’s law of:

ε = qe^2 e^3 [ρ/(12π me^2 mproton H c^3)]^(1/2) F/m, where e = 2.718… is the base of natural logarithms.

Testing this with the PRL and other data used above (ρ = 4.7 x 10^-28 kg/m^3 and H = 1.62 x 10^-18 s^-1 for 50 km s^-1 Mpc^-1) gives ε = 7.4 x 10^-12 F/m, which is only 17% low compared to the measured value of 8.85419 x 10^-12 F/m. This relatively small error reflects the hydrogen assumption and quark effect. Rearranging this formula to yield ρ, and rearranging also G = ¾ H^2/(πρe^3) to yield ρ, allows us to set both results for ρ equal and thus to isolate a prediction for H, which can then be substituted into G = ¾ H^2/(πρe^3) to give a prediction for ρ which is independent of H:
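The ε = 7.4 x 10^-12 F/m figure can be reproduced directly from the formula above. This sketch assumes the standard CODATA values for the electron charge, electron mass, proton mass and light speed, which the text does not list:

```python
import math

# Check of the permittivity prediction
#   epsilon = qe^2 * e^3 * sqrt(rho / (12*pi * me^2 * mp * H * c^3))
# with the density and Hubble values quoted in the text.
q_e = 1.602e-19   # electron charge, C (assumed standard value)
m_e = 9.109e-31   # electron mass, kg (assumed standard value)
m_p = 1.673e-27   # proton mass, kg (assumed standard value)
c = 2.998e8       # light speed, m/s
rho = 4.7e-28     # density of the universe, kg/m^3 (from the text)
H = 1.62e-18      # Hubble parameter, s^-1 (from the text)
eps = q_e**2 * math.e**3 * math.sqrt(rho / (12*math.pi * m_e**2 * m_p * H * c**3))
print(f"epsilon = {eps:.2e} F/m")  # ~7.4e-12, about 17% below 8.854e-12
```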

Again, these predictions of the Hubble constant and the density of the universe from the force mechanisms assume that the universe is made of hydrogen, and so are first approximations. However they clearly show the power of this mechanism-based predictive method.

The remainder of this site is totally unedited, and is simply copied from an older version. It contains useful ideas, extracts, etc., but in no way is designed to go directly into a paper or a book. It is a compilation of useful bits and pieces.

The capacitor QFT model in detail:

At every instant, you have a vector sum of electric fields possible across the universe.

The fields are physically propagated by gauge boson exchange. The gauge bosons must travel between all charges, they can't tell that an atom is "neutral" as a whole, they just travel between the charges.

Therefore even though the electric dipole created by the separation of the electron from the proton in a hydrogen atom at any instant is randomly orientated, the gauge bosons can also be considered to be doing a random walk between all the charges in the universe.

The random-walk vector sum for the charges of all the hydrogen atoms is the voltage for a single hydrogen atom (the mass of real charges in the universe is something like 90% hydrogen), multiplied by the square root of the number of atoms in the universe.

This allows for the angles of each atom being random. If you have a large row of charged capacitors randomly aligned in a series circuit, the average voltage resulting is obviously zero, because you have the same number of positive terminals facing one way as the other.

So there is a lot of inefficiency, but in a two or three dimensional set up, a drunk taking an equal number of steps in each direction does make progress. Taking 1 step per second, he goes an average net distance from the starting point of t^0.5 steps after t seconds.

For air molecules, the same occurs so instead of staying in the same average position after a lot of impacts, they do diffuse gradually away from their starting points.
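The t^0.5 scaling of the drunkard's walk is easy to demonstrate by simulation. This is an illustrative Monte Carlo sketch (my own construction, not from the text): unit steps in random directions in two dimensions, with the RMS displacement compared against sqrt(t).

```python
import random, math

# Monte Carlo check of the drunkard's-walk claim: after t unit steps
# in random directions, the RMS net displacement is ~ sqrt(t).
random.seed(1)
t, trials = 400, 2000
total_sq = 0.0
for _ in range(trials):
    x = y = 0.0
    for _ in range(t):
        theta = random.uniform(0, 2 * math.pi)  # random step direction
        x += math.cos(theta)
        y += math.sin(theta)
    total_sq += x * x + y * y
rms = math.sqrt(total_sq / trials)  # root-mean-square displacement
print(f"RMS after {t} steps: {rms:.1f}  (sqrt(t) = {math.sqrt(t):.1f})")
```

For 400 steps the RMS displacement comes out close to 20, i.e. sqrt(400), matching the t^0.5 rule used in the text's square-root-of-N charge summation.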

Anyway, for the electric charges comprising the hydrogen and other atoms of the universe, each atom is a randomly aligned charged capacitor at any instant of time.

This means that the gauge boson radiation being exchanged between charges to give electromagnetic forces in Yang-Mills theory will have the drunkard’s walk effect, and you get a net electromagnetic field of the charge of a single atom multiplied by the square root of the total number in the universe.

Now, if gravity is to be unified with electromagnetism (also basically a long range, inverse square law force, unlike the short ranged nuclear forces), and if gravity is due to a geometric shadowing effect (see my home page for the Yang-Mills LeSage quantum gravity mechanism with predictions), it will depend on only a straight line charge summation.

In an imaginary straight line across the universe (forget about gravity curving geodesics, since I’m talking about a non-physical line for the purpose of working out gravity mechanism, not a result from gravity), there will be on average almost as many capacitors (hydrogen atoms) with the electron-proton dipole facing one way as the other,

but not quite the same numbers!

You find that statistically, a straight line across the universe is 50% likely to have an odd number of atoms falling along it, and 50% likely to have an even number of atoms falling along it.

Clearly, if the number is even, then on average there is zero net voltage. But in all the 50% of cases where there is an ODD number of atoms falling along the line, you do have a net voltage. The situation in this case is that the average net voltage is 0.5 times the net voltage of a single atom. This causes gravity.

The exact weakness of gravity as compared to electromagnetism is now predicted.

Gravity is due to 0.5 x the voltage of 1 hydrogen atom (a "charged capacitor").

Electromagnetism is due to the random walk vector sum between all charges in the universe, which comes to the voltage of 1 hydrogen atom (a "charged capacitor"), multiplied by the square root of the number of atoms in the universe.

Thus, ratio of gravity strength to electromagnetism strength between an electron and a proton is equal to: 0.5V/(V.N^0.5) = 0.5/N^0.5.

V is the voltage of a hydrogen atom (charged capacitor in effect) and N is the number of atoms in the universe. This ratio is equal to 10^-40 or so, which is the correct figure within the experimental errors involved
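The closing arithmetic of this argument is a one-liner. Taking N ~ 10^80 (the figure used later in this document for the number of charges in the universe):

```python
import math

# The claimed gravity/electromagnetism coupling ratio 0.5/sqrt(N),
# with N ~ 1e80 charges as used in the text.
N = 1e80
ratio = 0.5 / math.sqrt(N)
print(f"gravity/EM ratio ~ {ratio:.1e}")  # ~5e-41, i.e. of order 10^-40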

OLDER MATERIAL FOLLOWS:

Heuristically, gauge bosons (virtual photons) transfer between charges to cause electromagnetic forces, and those gauge bosons don’t discriminate against charges in neutral groups like atoms and neutrons. The Feynman diagrams show no way for the gauge bosons/virtual photons to stop interactions. Light then arises when the normal exchange of gauge bosons is upset from its equilibrium. You can test this heuristic model in some ways. First, most gauge bosons are going to be exchanged in a random way between charges, which means the simple electric analogue is a series of randomly connected charged capacitors (positive and negative charges, with vacuum 377-ohm dielectric between the ‘plates’). Statistically, if you connect an even number of charged capacitors at random along a line across the universe, the sum will on average be zero. But if you have an odd number, you get an average of 1 capacitor unit. On average any line across the universe will be as likely to have an even as an odd number of charges, so the average charge sum will be the mean, (0 + 1)/2 = 1/2 capacitor. This is weak and always attractive, because there is no force at all in the sum = 0 case and attractive force (between oppositely charged capacitor plates) in the sum = 1 case. Because it is weak and always attractive, it's gravitation? The other way the charges can add is in a perfect summation where every charge in the universe appears in the series + - + -, etc. This looks improbable, but is statistically a drunkard's walk, and by the nature of path-integrals gauge bosons do take every possible route, so it WILL happen.
When capacitors are arranged like this, the potential adds like a statistical drunkard's walk because of the random orientation of ‘capacitors’, the diffusion weakening the summation from the total number to just the square root of that number because of the angular variations (two steps in opposite directions cancel out, as does the voltage from two charged capacitors facing one another). This vector sum of a drunkard's walk is the average step times the square root of the number of steps, so for ~10^80 charges, you get a resultant of ~10^40. The ratio of electromagnetism to gravity is then (~10^40)/(1/2). Notice that this model shows gravity is electromagnetism, caused by gauge bosons. It does away with gravitons. The distances between the charges are ignored. This is explained because on average half the gauge bosons will be going away from the observer, and half will be approaching the observer. The fall due to the spread over larger areas with divergence is offset by the concentration due to convergence.

ALL electrons are emitting, so all are receiving. Hence they don't slow, they just get juggled around and obey the chaotic Schrodinger wave formula instead of a classical Bohr orbit.

‘Arguments’ against the facts of emission without net energy loss also ‘disprove’ real heat theory. According to the false claim that radiation leads to net energy loss, because everything emits heat radiation (separately from force causing radiation), everything should quickly cool to absolute zero. This is wrong for the same reason above: if everything is emitting heat, you can have equilibrium, constant temperature.

The equation is identical to Coulomb's law except that it expresses the
force in terms of different measurables. This allows it to predict the
permittivity of free space, the electric constant in Coulomb's law. So it
is a correct, predictive scientific mechanism.

The concepts of "electric charge" and "electric field" are useful words but
are physically abstract, not directly observable: you measure "them"
indirectly by the forces they produce, and you assume that because the mass
of the electron is quantized and the charge/mass ratio only varies with the
velocity of the electron by Lorentz/Einstein's law, charge is fundamental.
Really, energy is fundamental and the amount of "electric charge" you see
depends on how much attenuation there is by the polarised vacuum, the
observed (attenuated) charge falling by 7% at 90 GeV collisions (Koltick,
PRL, 1997), and mass varies because it is due to the surrounding Higgs field. The forces
are actually caused by vector radiation exchange. This is established by
quantum field theory.

If you have a series of parallel capacitor plates with different
charges, each separated by a vacuum dielectric, the total (net)
voltage needs to take into account the orientation of the plates.

The vector sum is the same as a statistical random walk (drunkard's walk):
the total is equal to the average voltage between a pair of plates,
multiplied by the square root of the total number (this allows for the
angular geometry dispersion, not distance, because the universe is
spherically symmetrical around us - thank God for keeping the calculation
very simple! - and there is as much dispersion outward in the random walk as
there is inward, so the effects of inverse square law dispersions and
concentrations with distance both exactly cancel out).

Gravity is the force that comes from a straight-line sum, which is the only
other option than the random walk. In a straight line, the sum of charges
is zero along any vector across the universe, if that line contains an
average equal number of positive and negative charges. However, it is
equally likely that the straight radial line drawn at random across the
universe contains an odd number of charges, in which case the average charge
is 2 units (2 units is equal to the difference between 1 negative charge and
1 positive charge). Therefore the straight line sum has two options only,
each with 50% probability: even number of charges and hence zero net result,
and odd number of charges which gives 2 unit charges as the net sum. The
mean for the two options is simply (0 + 2) /2 = 1 unit. Hence
electromagnetism is the square root of the number of charges in the
universe, times the weak option force (gravity).

Thus, electromagnetism and gravity are different ways that charges add up.

On the small positive value of the CC see Phil Anderson’s comment on cosmic variance:

What value of the CC does this predict quantitatively? Answer: the expansion rate without gravitational retardation is just Hubble’s law, which predicts the observed result to within experimental error! Hence the equivalent CC is predicted ACCURATELY.

(Although Anderson’s argument is that no real CC actually exists, a pseudo-CC must be fabricated to fit observation if you FALSELY assume that there is a gravitational retardation of supernovae naively given by Einstein’s field equation).

Theoretical explanation: if gravity is due to momentum from gauge boson radiation exchanged from the mass in the expanding universe surrounding the observer, then in the observer’s frame of reference a distant receding supernova geometrically can’t be retarded much.

The emphasis on theoretical predictions is important. I've shown that the correct quantum gravity dynamics (which predict G accurately) give the effective or AVERAGE radius of the gravity-causing mass around us (ie, the average range of the receding mass which is causing the gravity we experience) is (1 - e^-1)ct ~ 0.632ct ~ 10,000 million light-years in distance, where t is age of universe. Hence a supernova which is that distance from us, approximately 10,000 million light years away, is not affected at all by gravitational retardation (deceleration), as far as we - as observers of its motion - are concerned. (Half of the gravity-causing mass of the universe - as far as our frame of reference is concerned - is within a radius of 0.632ct of us, and half is beyond that radius. Hence the net exchange radiation gravity at that distance is zero. This calculation already has the red-shift correction built into it, since it is used to determine the 0.632ct effective radius.) This model in a basic form was predicted in 1996, two years before supernovae data confirmed it. Alas, bigots suppressed it, although it was sold via the October 1996 issue of Electronics World magazine.
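The (1 - e^-1)ct figure quoted above is straightforward to verify. This sketch uses the 15-billion-year age of the universe assumed earlier in the document, and works directly in light-years so that ct equals the age in years:

```python
import math

# The text's effective gravity-causing radius (1 - 1/e) * c * t,
# expressed in light-years for t = 15 billion years (value used
# earlier in this document).
t_ly = 15e9                      # age in years, so radius ct = 15e9 light-years
r_eff = (1 - math.exp(-1)) * t_ly
print(f"effective radius ~ {r_eff:.2e} light-years")  # ~9.5e9, i.e. ~10,000 million ly
```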

This is a very important feature of the proper mechanism of gravity, which predicts Einstein's field equation (Newton's law written in tensor notation with spacetime and a contraction term to keep the mass-energy conservation accurate) as a physical effect of exchange-radiation caused gravitation!

The Standard Model tells us gravity and electromagnetic forces are caused by light speed exchange radiation. Particles exchange the radiation and recoil apart. This process is like radiation being reflected by the mass carriers in the vacuum with which charged particles (electrons, quarks, etc.) associate. The curvature of spacetime is caused physically by this process.

Radiation pressure causes gravity, contraction in general relativity, and other forces (see below) in addition to avoiding the dark matter problem. The Standard Model is the best-tested physical theory in history: forces are due to Feynman-diagram radiation exchange in spacetime. There are 3 expanding spacetime dimensions in the big bang universe which describe the universe on a large scale, and 3 contractable dimensions of matter which we see on a small scale.

Force strengths, nuclear particle masses and elimination of dark matter and energy by a mechanism of the Standard Model, using only established widely accepted, peer-reviewed facts published in Physical Review Letters.

High energy unification just implies unification of forces at small distances, because particles approach closer when collided at high energy. So really unification at extremely high energy is suggesting that even at low energy, forces unify at very small distances.

Given the empirical evidence that the strong force becomes weaker at higher energies (shorter distances) and the electroweak force becomes stronger (the electric charge between electrons is 7% stronger when they’re collided at 90 GeV), there is likely some kind of unified force near the bare core of a quark.

As you move away from the core, the intervening polarised vacuum shields the bare core electric charge by a factor that seemingly increases toward 137, and the strong force falls because it is mediated by massive gluons which are short ranged.

If you consider energy conservation of the vector bosons (photons, Z, W+, W- and gluons), you would logically expect quantitative force unification where you are near the bare charge core: the QFT prediction (which doesn’t predict unification unless you have SUSY) seems to neglect this in producing a prediction that electric charge increases as a weak function (some logarithm) of interaction energy.

The reason why energy conservation will produce unification is this: the increasing force of electroweak interactions with increased energy (or smaller distances from the particle core) implies more energy is in the gauge bosons delivering the momentum which produces the forces. This increased gauge boson energy is completely distinct from the kinetic energy (although a moving mass gains mass-energy by the Lorentz effect, it does not gain any charge by this route).

Where does the extra gauge boson energy that increases the effective charge come from when you get closer to a particle core? Answer: the fall in the strong force around a quark core as distance decreases implies a decrease in the amount of energy in short-ranged strong force gauge bosons (gluons). The fall in the energy available in gluons, by conservation of energy, is what is powering the increase in the energy of electroweak gauge bosons at short ranges.

You can't physically have a fraction of a polarised vacuum charge pair between you and the quark core! Hence the usual mathematical model, which doesn't allow for this discreteness, i.e., the fact that you need at least one pair of vacuum charges between you and the quark core for there to be any shielding at all, is the error in the mainstream QFT treatment. Together with ignoring the conservation of energy for gauge bosons, this creates the unification 'problem', which is then falsely 'fixed' by the speculative introduction of SUSY (supersymmetry, a new unobserved particle for every observed particle).

'The solution to the difficulty of negative energy states [in relativistic quantum mechanics] is due to Dirac [P. A. M. Dirac, Proc. Roy. Soc. (London), A126, p360, 1930]. One defines the vacuum to consist of no occupied positive energy states and all negative energy states completely filled. This means that each negative energy state contains two electrons. An electron therefore is a particle in a positive energy state with all negative energy states occupied. No transitions to these states can occur because of the Pauli principle. The interpretation of a single unoccupied negative energy state is then a particle with positive energy ... The theory therefore predicts the existence of a particle, the positron, with the same mass and opposite charge as compared to an electron. It is well known that this particle was discovered in 1932 by Anderson [C. D. Anderson, Phys. Rev., 43, p491, 1933].

'Although the prediction of the positron is certainly a brilliant success of the Dirac theory, some rather formidable questions still arise. With a completely filled 'negative energy sea' the complete theory (hole theory) can no longer be a single-particle theory.

'The treatment of the problems of electrodynamics is seriously complicated by the requisite elaborate structure of the vacuum. The filled negative energy states need produce no observable electric field. However, if an external field is present the shift in the negative energy states produces a polarisation of the vacuum and, according to the theory, this polarisation is infinite.

'In a similar way, it can be shown that an electron acquires infinite inertia (self-energy) by the coupling with the electromagnetic field which permits emission and absorption of virtual quanta. More recent developments show that these infinities, while undesirable, are removable in the sense that they do not contribute to observed results [J. Schwinger, Phys. Rev., 74, p1439, 1948, and 75, p651, 1949; S. Tomonaga, Prog. Theoret. Phys. (Kyoto), 1, p27, 1949].

'For example, it can be shown that starting with the parameters e and m for a bare Dirac particle, the effect of the 'crowded' vacuum is to change these to new constants e' and m', which must be identified with the observed charge and mass. ... If these contributions were cut off in any reasonable manner, m' - m and e' - e would be of order alpha ~ 1/137. No rigorous justification for such a cut-off has yet been proposed.

'All this means that the present theory of electrons and fields is not complete. ... The particles ... are treated as 'bare' particles. For problems involving electromagnetic field coupling this approximation will result in an error of order alpha. As an example ... the Dirac theory predicts a magnetic moment of mu = mu[zero] for the electron, whereas a more complete treatment [including Schwinger's coupling correction, i.e., the first Feynman diagram] of radiative effects gives mu = mu[zero].(1 + alpha/{twice Pi}), which agrees very well with the very accurate measured value of mu/mu[zero] = 1.001...'

VACUUM POLARISATION, AN EMPIRICALLY DEFENDABLE FACT

What is the mechanism for the empirically confirmed 7% increase in electric charge as energy increases to 90 GeV (PRL, v.78, 1997, no.3, p.424)?

There is something soothing in the classical analogy to a sea of charge polarising around particle cores and causing shielding.

While the electroweak forces increase with interaction energy (and thus increase closer to a particle core), data on the strong force shows it falls from alpha = 1 at low energy to ~0.15 at 90 GeV.

If you think about the conservation of gauge boson energy, the true (core) charge is independent of the interaction energy (although the mass rises with velocity), so the only physical mechanism by which the electroweak forces can increase as you approach the core is by gaining energy from the gluon field, whose alpha falls closer to the core.

If true, then this logic dispenses with SUSY, because perfect unification due to energy conservation will be reached at extremely high energy when the polarised vacuum is penetrated.

RECAP

Strong force coupling constant decreases from 1 at low energy to 0.15 at about 90 GeV, and over this same range the electromagnetic force coupling increases from 1/137 to 1/128.5 or so.

This is empirical data. The 7% rise in electric charge as interaction energy rises to 90 GeV (Levine et al., PRL, v.78, 1997, no.3, p.424) is the charge polarisation (core shield) being broken through as the particles approach very closely.

This experiment used electrons, not quarks, because of course you can't make free quarks. But quarks have electric charge as well as colour charge, so the general principle will apply: the electric charge of a quark will increase by about 7% as energy increases to 90 GeV, while the strong nuclear (colour) charge will fall by 85%.

So what is it that supplies the additional energy for the field mediators? It must be the decrease in the strength of the short-range strong force in the case of a quark.

Get close to a quark (inside the vacuum polarisation veil) and the electric charge increases toward the bare core value, while the colour charge diminishes (allowing asymptotic freedom over the size of a nucleon). The energy to drive the electromagnetic coupling increase must come from the gluon field, because there is simply nowhere else for it to come from. If so, then when the polarised vacuum around the core is fully penetrated, the strong force will be in equilibrium with the electroweak force and there will be unification without SUSY.
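The percentage changes quoted above follow from simple arithmetic on the coupling values already given in this section (1/137 rising to about 1/128.5 for electromagnetism, 1 falling to about 0.15 for the strong force). A minimal sketch of just that arithmetic, not a QFT calculation:

```python
# Arithmetic check of the coupling changes quoted above (values taken from
# the text, not computed from QFT): the electromagnetic coupling rises about
# 7% by 90 GeV, while the strong coupling falls about 85%.
alpha_em_low = 1 / 137.036   # electromagnetic coupling at low energy
alpha_em_91 = 1 / 128.5      # effective value at ~90 GeV
alpha_s_low = 1.0            # strong coupling at low energy
alpha_s_91 = 0.15            # strong coupling at ~90 GeV

em_rise = alpha_em_91 / alpha_em_low - 1     # fractional increase
strong_fall = 1 - alpha_s_91 / alpha_s_low   # fractional decrease

print(f"EM coupling rise to 90 GeV: {em_rise:.1%}")        # ~6.6%, i.e. about 7%
print(f"Strong coupling fall to 90 GeV: {strong_fall:.0%}")  # 85%
```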

Galactic rotation rates and other popular controversies

I've complained bitterly about Jeremy Webb before. He is editor of New Scientist, and regularly publishes articles which falsely make comments to the effect that 'nobody has predicted gravity, its mechanism is not understood by anyone (and being a scientist, I'm not armwaving here, I've personally interviewed all of the billions of people in the world) ...'

These comments appear in articles discussing the fundamental forces, string theory, etc. The latest was an editorial on page 5 of the 29 April 2006 issue, which is unsigned but may be by him. The first email he sent me, on a Monday or Tuesday evening in December 2002 (I can search it out if needed), complained that he had to write the editorial for the following morning. (His second email, a few months later, complained that he had just returned from holiday and was therefore not refreshed enough to send me a reply to my enquiry letter...)

Anyway, in the editorial he (or whoever he gets to do his work for him should he have been on holiday again, which may well be the case) writes:

'The most that can be said for a physical law is that it is a hypothesis that has been confirmed by experiment so many times that it becomes universally accepted. There is nothing natural about it, however: it is a wholly human construct. Yet we still baulk when somebody tries to revoke one.'

This is very poorly written. Firstly, mathematically based laws can be natural: Feynman argued that physical laws have a naturally beautiful simplicity, and people such as Wigner argued - less convincingly - that because Pi occurs in some geometric integrals relating to natural probability, the mathematics is natural, and that the universe is based on mathematics rather than being merely incompletely modelled by it (in some quantitative aspects, depending on whether you consider string theory to be pseudoscience or genius).

Secondly, 'a miss is as good as a mile': even if science were about falsifying well-established and widely accepted facts (activity deemed crackpot territory according to John Baez and many other mainstream scientists), failing to produce the required results, failing to deliver the goods, is hardly exciting. If someone tries to revoke a law and doesn't succeed, they don't get treated in the way Sir Karl Popper claimed they do. Popper claimed basically that 'science proceeds by falsification, not by proof', which is contrary to Archimedes' proofs of the laws of buoyancy and so on. Popper was seriously confused, because nobody has won a mainstream prize just for falsifying an established theory. Science simply is not done that way. Science proceeds constructively, by doing work. The New Scientist editorial continues:

'That is what is happening to the inverse-square law at the heart of Newton's law of gravitation. ... The trouble is that this relationship fails for stars at the outer reaches of galaxies, whose orbits suggest some extra pull towards the galactic centre. It was to explain this discrepancy that dark matter was conjured up [by Fritz Zwicky in 1933], but with dark matter still elusive, another potential solution is looking increasingly attractive: change the law.'

This changed law programme is called 'MOND: modified Newtonian dynamics'. It is now ten years since I first wrote up the gravity mechanism in a long paper. General relativity in the usual cosmological solution gives

a/R = (1/3)('cosmological constant', if any) - (4/3)Pi.G(rho + 3p)

where a is the acceleration of the universe, R is the radius, rho is the density and p is the pressure contribution to expansion: p = 0 for non-relativistic matter; p = rho.(c^2)/3 for relativistic matter (such as energetic particles travelling at velocities approaching c in the earliest times in the big bang). Negative pressure produces accelerated expansion.

The Hubble constant, H, more correctly termed the Hubble 'parameter' (the expansion rate evolves with time and only appears constant because we see further into the past as we look to greater distances), in this model is

H^2 = (8/3)Pi.G.rho + (1/3)('cosmological constant', if any) - k(c/X)^2

where k is the geometry of the expansion curve for the universe (k = -1, 0, or +1; WMAP data shows k ~ 0, in general relativity jargon a very 'flat' geometry of spacetime) and X is the radius of the curvature of spacetime, i.e., simply the radius of the circle that a photon of light would travel around the universe due to being trapped by gravity (the geodesic).

Because the cosmological constant term and the third term on the right hand side are generally negligible (especially if exponential inflation occurs at the earliest absolute time in the expansion of the universe), this gives the usual Friedmann prediction for the density, approximately:

Density, rho = (3/8)(H^2)/(Pi.G).

This is the predicted density for the WMAP observations of a flat spacetime. This formula over-estimates the observed density of discovered matter in the universe by an order of magnitude!

The gravity mechanism I came up with and which was first written up about 10 years ago today, and first published via the October 1996 letters pages of Electronics World, gives a different formula, which - unlike the mainstream equation above - makes the right predictions:

Density, rho = (3/4)(H^2)/(Pi.G.e^3).
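The order-of-magnitude discrepancy between the two formulas is easy to check numerically. A minimal sketch, assuming H = 70 km/s/Mpc (a value not stated in the text); the ratio of the two densities is exactly e^3/2, about 10:

```python
import math

# Compare the Friedmann density rho = (3/8)(H^2)/(Pi.G) with the modified
# formula rho = (3/4)(H^2)/(Pi.G.e^3) given in the text.
# Assumed Hubble parameter: 70 km/s/Mpc (an illustrative value, not from the text).
G = 6.674e-11                # gravitational constant, m^3 kg^-1 s^-2
H = 70e3 / 3.0857e22         # 70 km/s/Mpc converted to s^-1

rho_friedmann = (3 / 8) * H**2 / (math.pi * G)
rho_modified = (3 / 4) * H**2 / (math.pi * G * math.e**3)

print(f"Friedmann density: {rho_friedmann:.2e} kg/m^3")   # ~9.2e-27
print(f"Modified density:  {rho_modified:.2e} kg/m^3")
print(f"Ratio: {rho_friedmann / rho_modified:.2f}")        # e^3/2 ~ 10.04
```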

It also predicted in 1996 that the universe is not slowing down, a prediction confirmed by observation in 1998!

Normally in science you hear people saying that the one thing which is impressive is a good prediction! However, my work was simply suppressed by Nature and other journals, and the new 1998 observations were taken into cosmology by a mechanism-less fudge, an arbitrary adjustment of the equations to force the model to fit the new facts. This is called the Lambda-CDM (Lambda-Cold Dark Matter) model, and it disproves Kuhn's concept of scientific revolutions: faced with a simple mechanism, they prefer to ignore it and go for a crackpot approach based on faith in unseen, unobservable, unpredictable type(s) of 'dark matter', which is really contrary to science.

If they had a prediction of the properties of this matter, so that there was a possibility of learning something, then it would at least have a chance of being tested. As it is, it cannot be falsified, so by Popper's nonsense it is simply not science by their own definition (it is the mainstream, not me, that clings on to Popper's criterion while repeatedly ignoring my submissions and refusing to read and peer-review my proof of the mechanism and the correct equations; their hypocrisy is funny).

There is no mechanism to hound these pseudosciences either out of control of science, or into taking responsibility for investigating the facts. The situation is like that of the mountaineer who reaches a summit for the first time in history. He takes photos to prove it. He returns and stakes his claim in the records. The reactions are as follows:

1. 'Show your proof to someone else, I'm not interested in it.'
2. 'The editor won't print this sort of nonsense.'
3. 'Anybody can climb any mountain, so who cares?'
4. 'Feynman once said nobody knows how to climb that mountain, so there!'

No individual has the power to publish the facts. The closest I came was an email dialogue with Stanley Brown, editor of Physical Review Letters. On the one hand I'm disgusted by what has happened, but on the other hand it was to be expected. There is a danger that too much of the work is being done by me: in 1996 my paper was mainly an outline of the proofs, but today it has hardened into more rigorous mathematical proof. If it had been published in Nature, even as just a very brief letter in 1996, then the editor of Nature would have allowed other people to work on the subject by providing a forum for discussion. Some mainstream mathematical physicists could have worked on it and taken the credit for developing it into a viable theory. Instead, I'm doing all the work myself because that forum was denied. This is annoying because it is not the community spirit of a physics group which I enjoyed at university.

I've repeatedly tried to get others interested via internet Physics Forums and similar, where the reaction has been abusive and downright fascist. Hitting back with equal force, which the editor of Electronics World - Phil Reed - suggested to me as a tactic in 2004, on the internet just resulted in my proof being dishonestly called speculation and my being banned from responding. It is identical to the Iraqi 'War', a fighting of general insurgency which is like trying to cut soup with a knife - impossible. Just as in such political situations, there is prejudice which prevents reasoned settlement of differences. People led by Dr Lubos Motl and others have, by and large, made up their minds that string theory settles gravity, and the Lambda CDM model settles cosmology, and that unobserved 10/11 dimensional spacetime, unobserved superpartners for every visible partner, and unobserved dark energy and dark matter is 90% or more of the universe. To prove contrary facts is like starting a religious heresy: people are more concerned with punishing you for the heresy than listening and being reasonable.

If you have a particular identifiable opponent, then you know who to fight to get things moving. But since there is a general fascist-style bigotry to contend with, everyone is basically the enemy for one reason or more: the substance of your scientific work is ignored and personal abuse against you is directed from many quarters, from people who don't know you personally.

Because this can be plotted as a wavy curve of wave amplitude versus effective frequency, the standard dark matter model was often presented as a unique fit to the observations. However, in 2005 Constantinos Skordis used relativistic MOND to show that it produces a similar fit to the observations! (This is published on page 54 of the 29 April 2006 issue of New Scientist.) Doubtless other models with parameters that offset those of the mainstream will also produce a similarly good fit (especially after receiving the same amount of attention and funding that the mainstream has had all these years!).

The important thing here is that the most impressive claims for uniqueness of the arbitrary fudge Lambda CDM model are fraudulent. Just because the uniqueness claims are fraudulent, does not in itself prove that another given model is right because the other models are obviously then not unique in themselves either. But it should make respectable journal editors more prone to publish the facts!

Exact statement of Heisenberg's uncertainty principle

I've received some email requests for a clarification of the exact statement of Heisenberg's uncertainty principle. If the uncertainty in distance can occur in two different directions, then the uncertainty is only half of what it would be if it could occur in only one direction. If x is the uncertainty in distance and p is the uncertainty in momentum, then xp is at least h-bar provided that x is always positive. If the distance can be negative as well as positive, then the product is at least half of h-bar. The uncertainty principle takes on different forms depending on the situation under consideration.
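The minimum case mentioned here, a product of exactly half of h-bar, is attained by a Gaussian wave packet. A minimal numerical sketch in natural units (h-bar = 1; the packet width 0.7 is an arbitrary choice for illustration):

```python
import numpy as np

# For a Gaussian wave packet, the uncertainty product sigma_x * sigma_p
# attains the minimum value hbar/2 mentioned in the text.
hbar = 1.0      # natural units
sigma = 0.7     # arbitrary packet width
x = np.linspace(-20, 20, 20001)
dx = x[1] - x[0]

# Normalised Gaussian wavefunction psi(x); |psi|^2 has standard deviation sigma.
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

# <x^2> by direct integration; <p^2> = hbar^2 * integral of |psi'(x)|^2.
sigma_x = np.sqrt(np.sum(x**2 * psi**2) * dx)
dpsi = np.gradient(psi, dx)
sigma_p = hbar * np.sqrt(np.sum(dpsi**2) * dx)

print(sigma_x * sigma_p)   # ~0.5, i.e. hbar/2
```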

2. A theory that predicts something that is not already known is speculative rubbish

;-)

Wolfgang Pauli’s letter of Dec 4, 1930 to a meeting of beta radiation specialists in Tubingen:

‘Dear Radioactive Ladies and Gentlemen, I have hit upon a desperate remedy regarding ... the continuous beta-spectrum ... I admit that my way out may seem rather improbable a priori ... Nevertheless, if you don’t play you can’t win ... Therefore, Dear Radioactives, test and judge.’

Pauli's neutrino was introduced to maintain conservation of energy in observed beta spectra where there is an invisible energy loss. It made definite predictions. Contrast this to stringy SUSY today.

Although Pauli wanted renormalization in all field theories and used this to block non-renormalizable theories, others like Dirac opposed renormalization to the bitter end, even in QED where it was empirically successful! (See Chris Oakley’s home page.)

Unlike mainstream people (such as stringy theorists Professor Jacques Distler, Lubos Motl, and other loud 'crackpots'), I don't hold on to any concepts in physics as religious creed. When I state big bang, I'm referring only to those few pieces of evidence which are secure, not to all the speculative conjectures which are usually glued to it by the high priests.

For example, inflationary models are speculative conjectures, as is the current mainstream mathematical description, the Lambda CDM (Lambda-Cold Dark Matter) model:

That model is false as it fits the general relativity field equation to observations by adding unobservables (dark energy, dark matter) which would imply that 96 % of the universe is invisible; and this 96 % is nothing to do with spacetime fabric or aether, it is supposed to speed up expansion to fit observation and bring the matter up to critical density.

The 'acceleration' of the universe observed is the failure of the 1998 mainstream model of cosmology to predict the lack of gravitational retardation in the expansion of distant matter (observations were of distant supernovae, using automated CCD telescope searches).

Two years before the first experimental results proved it, back in October 1996, Electronics World published (via the letters column) my paper which showed that the big bang prediction was false, because there is no gravitational retardation of distant receding matter. This prediction comes very simply from the pushing mechanism for gravity in the receding universe context.

The current cosmological model is like Ptolemy's work. Ptolemy's epicycles are also the equivalent of 'string theory' (which doesn't exist as a theory, just as a lot of incompatible speculations with different numbers of 'branes' and different ways to account for the strength of gravity without dealing with it quantitatively): they were ad hoc and did not predict anything. The model simply had no mechanism, and 'predictions' don't include data put into it (astronomical observations). Mine does:

http://feynman137.tripod.com/ - this has a mechanism and many predictions. The most important prediction is in the paper published via the Oct. 1996 issue of Electronics World: that there is no slowing down. This was discovered two years later, but the good people at Nature, CQG, PRL, etc., suppressed both the original prediction and the fact that the experimental observations confirm it. Instead, in 1998 they changed cosmology to the lambda (cosmological constant) dark energy model, adding epicycles with no mechanism.

The 'arbitrary constants' of the Standard Model: notice that I predict these, so they are no longer arbitrary.

The world is full of bigotry and believes that religious assertions and political mainstream group-think are a substitute for science. Personal relationships are a kind of substitute for science: they think science is a tea party, or a gentleman's club. This is how the mainstream is run: 'you pat my back and I'll pat yours'. Cozy, no doubt. But it isn't science.

Caloric and phlogiston were replaced by two mechanisms and many laws of thermodynamics. The crucial steps were:

(1) discovery of oxygen (proof of correct facts)
(2) Prevost's key discovery of 1792 that constant temperature is possible even if everything is always emitting heat (exchanging energy at the same rate). This was equilibrium theory, permitting the kinetic theory and radiation theory (proof of correct facts)

How will string theory and the dark energy cosmology be replaced? By you pointing out that gravity is already in the Standard Model as proved on this page. But be careful: nobody will listen to you if you have nothing at stake, and if you have a career in science you will be fired. So try to sabotage the mainstream bigots subversively.

Gravity is a residual of the electromagnetic force. If I have two hydrogen atoms a mile apart, they are exchanging radiation, because the electron doesn't stop doing this just because there is a proton nearby, and vice versa. There is no mechanism for the charges in a neutral atom to stop exchanging radiation with other charges in the surrounding universe; the atom is neutral because the attractive and repulsive Coulomb forces are cancelled out by the two exchanges, not because the exchanges suddenly stop when an electron and a proton form an "uncharged" atom. This is fact: if you dispute it, you must supply a mechanism which stops the exchange of force-causing radiation when an electron is near a proton.

The addition of "charged capacitors" which are overall "neutral" (i.e., charged atoms) in space can take two different routes with severely different net voltages. A straight line across the universe encounters randomly orientated atoms, so if there is an even number of atoms the average net voltage will be zero, like a circuit with an equal number of charged capacitors pointed both ways in series. 50% of such lines contain even numbers of atoms, and 50% contain odd numbers. This is all simple statistics, not speculation; you learn it at kindergarten. The 50% of lines across the universe which have an odd number of randomly orientated atoms in series will have a voltage equal to that from a single charged atom.

The mean voltage is then [(odd) + (even)]/2 = [(1) + (0)]/2 = 1/2 of an atom's voltage, i.e., one electron or proton unit of charge. This force, because it always results from the odd atom (where there is always attraction), is always attractive.

Now the sum for the other network of charges in the universe is the random walk between all charges all over space (counting each charge once only), which statistically adds to the value of 1 charge multiplied by the square root of the total number. This can be either attractive or repulsive, as demonstrated below [scroll down to the paragraph beginning 'Heuristically, gauge boson (virtual photon) transfer between charges to cause electromagnetic forces, and those gauge bosons don't discriminate against charges in neutral groups like atoms and neutrons. ...'].

The ratio of the random sum to the straight line sum is the square root of the number of charges in the universe. So the relationship between gravity and electromagnetism is established.
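The square-root-of-N statistic invoked above is the standard random-walk result: the RMS sum of N randomly signed unit charges is sqrt(N). A minimal Monte Carlo sketch of just that statistic (it checks the random-walk scaling only, not the gravity argument built on it; N and the trial count are arbitrary choices):

```python
import numpy as np

# RMS net sum of N randomly signed unit charges grows as sqrt(N),
# the random-walk statistic invoked in the text. Monte Carlo check.
rng = np.random.default_rng(1)
N = 10_000       # charges per line (arbitrary illustrative value)
trials = 2_000   # number of random lines sampled

steps = rng.choice([-1, 1], size=(trials, N))  # random charge orientations
sums = steps.sum(axis=1)                       # net sum along each line
rms = np.sqrt((sums.astype(float) ** 2).mean())

print(rms, np.sqrt(N))   # RMS sum is close to sqrt(N) = 100
```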

Note the recent paper on arXiv by Dr Mario Rabinowitz, which discredits the notion that gravity is normal quantum field theory, http://arxiv.org/abs/physics/0601218: "A Theory of Quantum Gravity may not be possible because Quantum Mechanics violates the Equivalence Principle. Quantum mechanics clearly violates the weak equivalence principle (WEP). This implies that quantum mechanics also violates the strong equivalence principle (SEP), as shown in this paper. Therefore a theory of quantum gravity may not be possible unless it is not based upon the equivalence principle, or if quantum mechanics can change its mass dependence. Neither of these possibilities seem likely at the present time. Examination of QM in n-space, as well as relativistic QM equations, does not change this conclusion."

So the graviton concept is a fraud even in 11 dimensional supergravity which is the limit to M-theory linking strings to quantum gravitation! Spin-2 gravitons don't exist. All you have is two routes by which electromagnetism can operate, based on analysis of Catt's "everything is a capacitor" concept.

The weak route is always attractive, but is weaker by a factor of about 10^40 than the strong route of summation, which can be attractive or repulsive.

See the abstract-level unification of general relativity and electromagnetism by Danny Ross Lunsford at http://cdsweb.cern.ch/search.py?recid=688763&ln=en. Lunsford's unification is not explained in causal terms in that paper, but the implication is clear: there is no quantum gravity. The unification requires 3 extra dimensions, which Lunsford attributes to "coordinatized matter".

Physically, what is occurring is this: Einstein's general relativity fails to discriminate between spacetime scales which are expanding (time, etc.) and contractable dimensions describing matter (which are contracted by gravitational fields; for example, GR shows the Earth's radius is contracted by 1.5 mm by gravity, like an all-round pressure acting not on the surface of the Earth but directly on the subatomic matter throughout its volume).

By lumping all the expanding dimensions together as time in the metric, Einstein gets the right answers. But really the expansion in spacetime occurs in three dimensions (x, y, z, with in each case time being the dimension divided by velocity c), while the contraction due to fields occurs in three overlapping dimensions. These are not the same mathematical dimensions, because one set is expanding at light speed and the other is contractable!

So physically there are three expanding spacetime dimensions describing time and three contractable dimensions describing matter. Saying there is one expanding time dimension ignores spacetime, which shows that any distance can be expressed as a time past. So the correct symmetry is the orthogonal group SO(3,3), as Lunsford says: a total of 6 dimensions divided equally into two sets of three.

All the speculation about 10 dimensional superstrings and 11 dimensional supergravity is pure trash, with no mechanism, prediction or anything:

Catt's co-author Walton emailed me (cc to Catt) in 2001 that a TEM wave is not a good name for the Heaviside slab of electromagnetic energy, because nothing need have a periodic "wave": the energy can just flow at 10 volts in a slab without waving. So basically you are replacing the electron not just with a TEM wave but with a non-waving energy block. The de Broglie frequency of an electron is zero (i.e., it is not a wave at all) if its propagation velocity is zero. In order to reconcile the Heaviside energy current with an electron's known properties, the propagation of the electron (at less than c) occurs in a direction orthogonal to the energy current. By Pythagoras, the velocity sharing between propagation speed v and energy current speed x is then (v^2) + (x^2) = (c^2), so the energy current goes at light speed when v = 0, but in general x/c = [1 - (v^2)/(c^2)]^(1/2), which is the Lorentz contraction factor. Since all time is measured by velocities (pendulum, clock motor, electron oscillation, etc.), this is the time-dilation law, and by spacetime x = ct and x' = vt we get the length contraction in the propagation direction.
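The Pythagorean velocity-sharing relation v^2 + x^2 = c^2 described above gives the standard contraction factor directly. A minimal sketch, with speeds in units of c (the function name is mine, chosen for illustration):

```python
import math

# Velocity sharing from the argument above: v^2 + x^2 = c^2, so the internal
# energy-current speed is x = c * sqrt(1 - (v/c)^2), the Lorentz factor.
def contraction_factor(v, c=1.0):
    """Return x/c, the fraction of c left for the internal energy current."""
    return math.sqrt(1.0 - (v / c) ** 2)

print(contraction_factor(0.0))   # 1.0: all motion is internal, at full speed c
print(contraction_factor(0.6))   # 0.8: the classic 3-4-5 example
print(contraction_factor(0.8))   # 0.6
```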

> > The Standard Model, which predicts all decay rates of elementary particles (not nuclei) very accurately, is composed of the symmetry groups SU(3) x SU(2) x U(1).
> >
> > They are Yang-Mills theories. They describe spin, charge, etc., but NOT MASS. This is why Ivor Catt's "trapped TEM wave" model for the electron is COMPATIBLE with the Standard Model. The mass is added in by the vacuum field, not by the actual particles. (But ... throw Catt a lifeline and he automatically rejects it, so I've given up trying to explain anything to him. He just doesn't want to know.)
> >
> > In addition, only the electromagnetism unit U(1) is a renormalisable quantum field theory (so by fiddling it so that the force coupling strength from the exchange of photons gives Coulomb's law, it then predicts other things accurately, like the Lamb frequency shift and the magnetic moment measured for an electron).
> >
> > The SU(3) and SU(2) symmetries, for the strong and weak nuclear forces respectively, describe the force-carrying mediators as short-ranged (which is why they only participate in nuclear-sized interactions, and we only see electromagnetism and gravity at the macroscopic scale).
> >
> > The short range is caused by the force mediators having mass. For a proton, only 11 MeV of the 938 MeV mass is due to quarks. Hence the force mediators and the effect of the polarised vacuum multiply the mass by about a factor of 85. The actual quarks themselves have ZERO mass, the entire mass being due to the vacuum field which "mires" them and creates inertia.

Gerard 't Hooft and Martinus Veltman showed around 1970 that Yang-Mills gauge theories are renormalizable, putting QED, the quantum version of Maxwell's equations and the U(1) sector of the Standard Model, on a firm footing.

In electromagnetism, the spin-1 photon interacts by changing the quantum state of the matter emitting or receiving it, via inducing a rotation in a Lie group symmetry. The equivalent theories for weak and strong interactions are respectively isospin rotation symmetry SU(2) and color rotation symmetry SU(3).

Because the gauge bosons of SU(2) and SU(3) have limited range and are therefore massive, the field obviously carries most of the mass; so there the field is not just a small perturbation, as it is in U(1).

E.g., a proton has a rest mass of 938 MeV but the three real quarks in it contribute only 11 MeV, so the field contributes 98.8% of the mass. In QED, by contrast, the field contributes only about 0.116% of the magnetic moment of an electron.
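Both percentages in this paragraph follow from one-line arithmetic. A minimal check (938 MeV and 11 MeV are from the text; the value of alpha is assumed for Schwinger's alpha/(2.pi) correction quoted earlier):

```python
import math

# Field contribution to the proton mass (numbers from the text) and the
# Schwinger correction alpha/(2.pi) to the electron's magnetic moment.
proton_mass = 938.0   # MeV, total rest mass
quark_mass = 11.0     # MeV, contribution of the three quarks

field_fraction = (proton_mass - quark_mass) / proton_mass
print(f"Proton mass from field: {field_fraction:.1%}")   # 98.8%

alpha = 1 / 137.036   # fine structure constant (assumed value)
schwinger = alpha / (2 * math.pi)
print(f"Electron moment correction: {schwinger:.3%}")    # 0.116%
```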

I understand the detailed calculations involving renormalization; in the usual treatment of the problem there's an infinite shielding of a charge by vacuum polarisation at low energy, unless a limit or cutoff is imposed to make the charge equal the observed value. This process can be viewed as a ‘fiddle’ unless you can justify exactly why the vacuum polarisation is limited to the required value.

Hence Dirac’s reservations (and Feynman’s, too). On the other hand, just by one 'fiddle', it gives a large number of different, independent predictions like Lamb frequency shift, anomalous magnetic moment of electron etc.

The equation is simple (page 70 of http://arxiv.org/abs/hep-th/0510040) for modeling one corrective Feynman diagram interaction. I've read Peter say (I think) that the other couplings, which are progressively smaller (a convergent series of terms) for QED, instead become a divergent series for field theories with heavy mediators. The mass increase due to the field mass-energy is by a factor of 85 for the quark-gluon fields of a proton, compared to only a factor of 1.00116 for virtual charges interacting with an electron.

So there are many areas where the calculations of the Standard Model could be further studied, but string theory doesn't even begin to address them. Other examples: the masses and the electroweak symmetry breaking in the Standard Model are barely described by the existing speculative (largely non-predictive) Higgs mechanism.

Gravity, the ONE force that hasn't even entered the Standard Model, is being tackled by string theorists, who - like babies - always want to try to run before learning to walk. Strings can't predict any features of gravity that can be compared to experiment. Instead, string theory is hyped as being perfectly compatible with non-observed, speculative gravitons, superpartners, etc. It doesn't even scientifically 'predict' the unnecessary gravitons or superpartners, because it can't be formulated in a quantitative way. Dirac and Pauli had predictions that were scientific, not stringy.

Dirac made exact predictions about antimatter. He predicted the rest mass-energy of a positron and the magnetic moment, so that quantitative comparisons could be done. There are no quantitative predictions at potentially testable energies coming out of string theory.

Theories that ‘predict’ unification at 10^16 times the maximum energy you can achieve in an accelerator are not science.

I just love the fact that string theory is totally compatible with special relativity, the one theory which has never produced a unique prediction that hasn't already been made by Lorentz et al. based on physical local contraction of instruments moving in a fabric spacetime.

It really fits with the overall objective of string theory: the enforcement of genuine group-think by a group of bitter, mainstream losers.

Introduction to quantum field theory (the Standard Model) and General Relativity.

Mainstream ten or eleven dimensional ‘string theory’ (which makes no testable predictions) is being hailed as consistent with special relativity. Do mainstream mathematicians want to maintain contact with physical reality, or have they accidentally gone wrong due to ‘group-think’? What will it take to get anybody interested in the testable unified quantum field theory in this paper?

Peer-review is a sensible idea if you are working in a field where you have enough GENUINE peers that there is a chance of interest and constructive criticism. However, string theorists have proved to be controlling, biased, bigoted, group-think dominated politicians who are not simply ‘not interested in alternatives’ but take pride in sneering at things they don’t have time to read!

This inward force is a bit like air pressure, in the sense that you don’t ‘feel’ it. Air pressure is 10 metric tons per square metre, or 14.7 pounds per square inch. Since the surface area of the human body is about 2 square metres, the total air force on the body is 2 x 10 = 20 metric tons, or 9.8 x 20,000 = 196,000 Newtons. The nuclear 'implosion' bomb works the same way as the big bang: the TNT explosion creates equal inward and outward forces, so the plutonium in the middle is compressed until its surface area shrinks, reducing neutron loss and starting the chain reaction.
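
The arithmetic above is easy to check. A minimal sketch in Python, using the figures stated in the text (2 square metres of body area, 10 metric tons per square metre, g = 9.8 m/s^2):

```python
# Sanity check of the air-pressure arithmetic quoted above.
# Figures are those assumed in the text, not measured values.
pressure_tonnes_per_m2 = 10.0  # atmospheric pressure, metric tons per m^2
body_area_m2 = 2.0             # stated human body surface area
g = 9.8                        # gravitational acceleration, m/s^2

force_tonnes = pressure_tonnes_per_m2 * body_area_m2  # 20 metric tons
force_newtons = force_tonnes * 1000.0 * g             # mass (kg) times g

print(force_tonnes)   # 20.0
print(force_newtons)  # 196000.0
```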

So yes, unless the shell you refer to below has such great strength that it could magically resist the 6.0266 x 10^42 Newtons inward force, it would be accelerated inwards (collapse). The object in the middle would however be exchanging gauge bosons with the shell, which in turn would be exchanging them with the surrounding universe. You have to deal with it step by step.

Why should Heaviside energy, trapped by gravity (the only force which can bend light, and which is generated by energy as well as mass according to general relativity), not have a black hole shielding area? Your question seems to falsely assume that Planck dimensions, obtained by fiddling about with assumed 'fundamental constants' using dimensional analysis with no empirical support or proof, are somehow real science, when there is no evidence whatsoever to justify any of them. The world of science I live in is quite different: every statement needs evidence or proof, not the authority of Planck or someone else to substantiate the guess. Authority is not a safe guide. The black hole electron core is proved by Catt's observation that static electric charge is Heaviside electromagnetic energy and thus has a spin at light speed (see quote below). Planck dimensions are obtained from dimensional analysis, and are assumed to apply to strings by string theorists who don't have contact with the real world. Gravity is the shielding of gauge boson pressure. The shielding area of a fundamental particle is the area of a black hole of similar mass, which for electrons etc. is far smaller than the Planck area. So even for the planet earth, most of the gravity-causing gauge boson radiation is not stopped.

In quantum gravity, the big error in physics is that Edwin Hubble in 1929 divided the Doppler-shift determined recession speeds by the apparent distances to get his constant, v/R = H. In fact, the distances increase while the light and gravity effect are actually coming back to us. What he should have done is to represent it as a variation in speed with time past. The whole point about space-time is precisely that there is equivalence between seeing things at larger distances and seeing things further back in time. You cannot simply describe the Hubble effect as a variation in speed with distance, because time past is involved! Whereas H has units of s^-1 (1/age of universe), the directly observed Hubble ratio is v/t = HR/(1/H) = RH^2 (and therefore has units of ms^-2: an acceleration). In the big bang, the recession velocities from here outward vary from v = 0 towards v = c, and the corresponding times after the big bang vary from 15,000 million years (t = 1/H) towards zero time. Hence, the apparent acceleration as seen in space-time is a = v/t = RH^2, which in the limit R = ct = c/H equals Hc, about 6 x 10^-10 ms^-2.
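
As a numerical sketch of that apparent acceleration, assuming (as the text does) an age of the universe of 15,000 million years so that H = 1/t:

```python
# Apparent cosmological acceleration a = v/t -> cH in the limit R = ct = c/H.
# The 15 Gyr age is the text's assumed figure, not a measured input.
c = 3.0e8                     # speed of light, m/s
age_seconds = 15e9 * 3.156e7  # 15,000 million years in seconds
H = 1.0 / age_seconds         # Hubble parameter, s^-1
a = c * H                     # apparent acceleration, m/s^2
print(a)  # roughly 6e-10 m/s^2
```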

Although a small acceleration, a large mass of the universe is involved so the outward force (F = ma) is very large. The 3rd law of motion implies equal inward force like an implosion, which in LeSage gravity gives the right value for G, disproving the ‘critical density’ formula of general relativity by a factor of (1/2)e^3 = 10. This disproves most speculative ‘dark matter’. Since gravity is the inward push caused by the graviton/Higgs field flowing around the moving fundamental particles to fill in the void left in their wake, there will only be a gravitational ‘pull’ (push) where there is a surrounding expansion. Where there is no surrounding expansion there is no gravitational retardation to slow matter down. This is in agreement with observations that there is no slowing down (a fictitious acceleration is usually postulated to explain the lack of slowing down of supernovae).

The density correction factor (e^3 = 20), explained: for mass continuity of any expanding gas or explosion debris in hydrodynamics, dρ/dt = -∇.(ρv) = -3ρH. Inserting the Hubble expansion rate v = HR and solving, ρ = ρ(local) e^3 (the early visible universe has higher density). The reason for multiplying the local measured density of the universe up by a factor of about 20 (the number e^3, the cube of the base of natural logarithms) is that it is the denser, more distant universe which contains most of the mass that is producing most of the inward pressure. Because we see further back in time with increasing distance, we see a more compressed age of the universe. The gravitational push comes to us at light speed, with the same velocity as the visible light that shows the stars. Therefore we have to take account of the higher density at earlier times. What counts is what we see, the spacetime in which distance is directly linked to time past, not the simplistic picture of a universe at constant density, because we can never see or experience gravity from such a thing due to the finite speed of light. The mass continuity equation dρ/dt = -∇.(ρv) is simple hydrodynamics based on Green’s theorem and allows the Hubble law (v = HR) to be inserted and solved. An earlier method of calculation, in the notes of CERN preprint EXT-2004-007, is to set up a formula for the density at any particular time past, so as to calculate red-shifted contributions to inward spacetime fabric pressure from a series of shells surrounding the observer. This gives the same result, ρ = ρ(local) e^3.

I don’t have a model, just the facts which are based on Catt’s experiments. Catt puts Heaviside energy into a conductor, which charges up with energy at light speed, which has no mechanism to slow down. The nature of charge in a spinning fundamental particle is therefore likely to be energy. This associates with vacuum particles, Higgs bosons, which give rise to the mass. All this is Standard Model stuff. All I'm doing is pushing the causal side of the Standard Model to the point where it achieves success. Gauge bosons are radiated continuously from spinning charge, and carry momentum. The momentum of light has been measured, it is fact. It is being radiated by all charges everywhere, not just at great distances.

If the volume of the universe is (4/3)πR^3 and the expansion is R = ct, then density varies as t^-3, and for a star at distance r, the absolute time after the big bang will be t – r/c (where t is our local time after the big bang, about 15 Gyr), so the density of the universe at the absolute age corresponding to visible distance r, divided by the density locally at 15 Gyr, is [(t – r/c)/t]^-3 = (1 – r/(ct))^-3, which is the factor needed to multiply up the nearby density to give that at earlier times corresponding to large visible distances. This formula gives infinite density at the finite radius of the universe, whereas an infinite density only exists in a singularity; this requires some dismissal of special relativity, either by saying that the universe underwent a faster-than-c expansion at early times (Guth’s special relativity violating inflationary universe), or else by saying that the red-shifted radiation coming to us is actually travelling very slowly (this is more heretical than Guth’s conjecture). Setting this equal to the density factor e^3, we see that 1 - r/(ct) = 1/e. Hence r = 0.632ct. This means that the effective distance at which the gravity mechanism source lies is at 63.2% of the radius of the universe, R = ct. At that distance, the density of the universe is 20 times the local density where we are, at a time of 15,000,000,000 years after the big bang. Therefore, the effective average distance of the gravity source is 9,500,000,000 light years away, or 5,500,000,000 years after the big bang.
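
The claimed numbers can be checked directly: at the fraction r/(ct) = 1 - 1/e = 0.632, the density multiplier (1 - r/(ct))^-3 equals e^3, about 20. A minimal check:

```python
import math

# Check: the density multiplier (1 - r/(ct))^-3 equals e^3 at r = (1 - 1/e)ct.
fraction = 1.0 - 1.0 / math.e     # r/(ct), about 0.632
factor = (1.0 - fraction) ** -3   # density multiplier at that distance
print(fraction)                   # about 0.632
print(factor)                     # about 20.09, i.e. e^3
```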

Light has momentum and exerts pressure, delivering energy. The pressure towards us due to the gauge bosons (force-causing radiation of quantum field theory), produces the contraction effect of general relativity and also gravity by pushing us from all directions equally, except where reduced by the shielding of the planet earth below us. Hence, the overriding push is that coming downwards from the stars above us, which is greater than the shielded effect coming up through the earth. This is the mechanism of the acceleration due to gravity. We are seeing the past with distance in the big bang! Gravity consists of gauge boson radiation, coming from the past just like light itself. The big bang causes outward acceleration in observable spacetime (variation in speed from 0 toward c per variation of times past from 0 toward 15,000,000,000 years), hence force by Newton’s empirical 2nd law, F = ma. The 3rd empirical law of Newton says there’s equal inward force, carried by gauge bosons that get shielded by mass, proving gravity to within 1.65%.

The proof shows that the local density (i.e., density at 15,000,000,000 years after origin) of the universe is:

ρ(local) = 3H^2/(4πe^3 G). The mechanism also shows that because gravity is an inward push in reaction to the surrounding expansion, there is asymmetry at great distances and thus no gravitational retardation of the expansion (predicted via the October 1996 issue of Electronics World, before experimental confirmation by Perlmutter using automated CCD observations of distant supernovae). Because there is no slowing down due to the mechanism, the application of general relativity to cosmology is modified slightly, and the radius of the universe is R = ct = c/H, where H is the Hubble constant. The observable acceleration in spacetime is a = dv/dt = c/t = Hc.

Hence, the outward force of the big bang is F = Ma = [(4/3)πR^3 ρ(local)].[Hc] = c^4/(e^3 G) = 6.0266 x 10^42 Newtons. Notice the permitted high accuracy, since the force is simply F = c^4/(e^3 G), where c, e (a mathematical constant) and G are all well known. (The density and Hubble constant have cancelled out.) When you put this result for outward force into the geometry in the lower illustration above and allow for the effective outward force being e^3 times stronger than the actual force (on account of the higher density of the earlier universe, since we are seeing – and being affected by – radiation from the past, see calculations later on), you get F = Gm^2/r^2 Newtons, if the shielding area is taken as the black hole area (radius 2Gm/c^2). Why m^2? Because all mass is created by the same fundamental particles, the ‘Higgs bosons’ of the standard model, which are the building blocks of all mass, inertial and gravitational! This is evidence that mass is quantized, hence a theory of quantum gravitation.
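
Since only c, e and G enter, the quoted force figure can be reproduced in a couple of lines (constants are standard CODATA-style values):

```python
import math

# Outward force of the big bang per the text: F = c^4 / (e^3 G).
# Only c, the mathematical constant e, and G enter; density and H cancel.
c = 2.998e8    # speed of light, m/s
G = 6.674e-11  # Newton's gravitational constant, m^3 kg^-1 s^-2

F = c**4 / (math.e**3 * G)
print(F)  # about 6.03e42 Newtons
```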

The heuristic explanation of this 137 anomaly is just the shielding factor by the polarised vacuum:

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

Heisenberg's uncertainty principle says

pd = h/(2.Pi)

where p is the uncertainty in momentum and d is the uncertainty in distance.
This comes from his imaginary gamma ray microscope, and is usually written as a minimum (instead of with "=" as above), since there will be other sources of uncertainty in the measurement process.

This result is used to show that an 80 GeV W or Z gauge boson will have a range of about 10^-17 m. So it's OK.
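
A sketch of that range estimate, using x = hc/(2.Pi.E) with p = E/c (the 80 GeV figure is the one assumed in the text):

```python
import math

# Range estimate x = h*c / (2*pi*E) for an ~80 GeV gauge boson,
# from the uncertainty relation p*d = h/(2*pi) with p = E/c.
h = 6.626e-34        # Planck's constant, J s
c = 2.998e8          # speed of light, m/s
E = 80e9 * 1.602e-19 # 80 GeV converted to joules

x = h * c / (2 * math.pi * E)  # range in metres
print(x)  # about 2.5e-18 m, within an order of magnitude of the 10^-17 m quoted
```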

Now, for a light-speed gauge boson the momentum is p = E/c, so d = hc/(2.Pi.E); and since the work-energy relation E = Fd applies, this implies

d = hc/(2.Pi.E) = hc/(2.Pi.Fd)

Hence

F = hc/(2.Pi.d^2)

This force is 137.036 times higher than Coulomb's law for unit fundamental charges.
Notice that in the last sentence I've suddenly gone from thinking of d as an uncertainty in distance, to thinking of it as actual distance between two charges; but the gauge boson has to go that distance to cause the force anyway.
Clearly what's physically happening is that the true force is 137.036 times Coulomb's law, so the real (bare) core charge is 137.036 times the observed charge. This is reduced by the correction factor 1/137.036 because most of the charge is screened out by polarised charges in the vacuum around the electron core:
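
The 137.036 ratio between F = hc/(2.Pi.d^2) and Coulomb's law for two unit charges can be verified numerically; the distance d cancels, leaving 1/alpha. A minimal check with standard constants:

```python
import math

# Ratio of F = h*c/(2*pi*d^2) to Coulomb's law e^2/(4*pi*eps0*d^2)
# for two unit charges; d cancels, leaving 1/alpha = 137.036...
h = 6.62607015e-34       # Planck's constant, J s
c = 2.99792458e8         # speed of light, m/s
e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

d = 1e-15  # any distance; it cancels in the ratio
F_uncertainty = h * c / (2 * math.pi * d**2)
F_coulomb = e**2 / (4 * math.pi * eps0 * d**2)
print(F_uncertainty / F_coulomb)  # about 137.036
```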

"... we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum ... amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies)." - arxiv hep-th/0510040, p 71.

The unified Standard Model force is F = hc/(2.Pi.d^2)

That's the superforce at very high energies, in nuclear physics. At lower energies it is shielded by the factor 137.036 for photon gauge bosons in electromagnetism, or by exp(-d/x) for vacuum attenuation by short-ranged nuclear particles, where x = hc/(2.Pi.E)

This is dealt with at http://einstein157.tripod.com/ and the other sites. All the detailed calculations of the Standard Model really model the vacuum processes for different types of virtual particles and gauge bosons. The whole mainstream way of thinking about the Standard Model is related to energy. What is really happening is that at higher energies you knock particles together harder, so their protective shield of polarised vacuum particles gets partially breached, and you can experience a stronger force mediated by different particles!

Maxwell supposed that the variation in voltage (hence electric field
strength) in a capacitor plate causes an ethereal "displacement current".
Mathematically Maxwell's trick works, since you put the "displacement
current" law together with Faraday's law of induction and the solution is
Maxwell's light model, predicting the correct speed of light. However, this
changes when you realise that displacement current is itself really
electromagnetic radiation, and acts at 90 degrees to the direction light
propagates in Maxwell's model. Maxwell's model is entirely
self-contradictory, and so his unification of electricity and magnetism
falls apart.

Maxwell's unification is wrong, because the reality is that the
"displacement current" effects result from electromagnetic radiation emitted
transversely when the current varies with time (hence when charges
accelerate) in response to the time-varying voltage. This completely alters
the picture we have of what light is!

(2) True model to fully replace "displacement current": voltage varying with
time accelerates charges in the conductor, which as a result emit radiation
transversely.

I gave logical arguments for this kind of thing (without the full details I
have recently discovered) in my letter published in the March 2005 issue of
Electronics World. Notice that Catt uses a completely false picture of
electricity with discontinuities (vertically abrupt rises in voltage at the
front of a logic pulse) which don't exist in the real world, so he does not
bother to deal with the facts and missed the mechanism. However Catt is
right to argue that the flaw in Maxwell's classical electromagnetism
stems from the ignorance Maxwell had of the way current must spread along
the plates at light speed.

Physically, Coulomb's law and Gauss' law come from the SU(2)xU(1) portion of the Standard Model: the breakdown of electroweak theory occurs because the vacuum rapidly attenuates the gauge bosons of the weak force (W and Z) over short ranges at low energy, but merely shields the electromagnetic force gauge boson (photon) by a factor of 1/137 at low energy, and

"... we find that the electromagnetic coupling grows with energy. This can be explained heuristically by remembering that the effect of the polarization of the vacuum ... amounts to the creation of a plethora of electron-positron pairs around the location of the charge. These virtual pairs behave as dipoles that, as in a dielectric medium, tend to screen this charge, decreasing its value at long distances (i.e. lower energies)." - arxiv hep-th/0510040, p 71

You have to include the Standard Model to allow for what happens in particle accelerators when particles are fired together at high energy. The physical model above does give a correct interpretation of QFT and is also used in many good books (including Penrose's Road to Reality). However, as stated in [13], the vacuum particles look different to observers in different states of motion, violating the postulate of Special/Restricted relativity (which is wrong anyway for the twins paradox, i.e., for ignoring all accelerating motions and spacetime curvature). This is why it is a bit heretical. Nevertheless it is confirmed by Koltick's experiments in 1997, published in PRL.

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

Koltick found a 7% increase in the strength of the Coulomb/Gauss force field law when colliding electrons at an energy of around 80 GeV. The coupling constant for electromagnetism is 1/137 at low energies but was found to be 1/128.5 at around 80 GeV. This rise is due to the polarised vacuum being broken through. We have to understand Maxwell's equations in terms of the gauge boson exchange process for causing forces, and the polarised vacuum shielding process for unifying forces into a single unified force at very high energy.
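
The 7% figure follows directly from the two quoted coupling values, as a quick check shows:

```python
# The quoted rise in electromagnetic coupling: 1/137.036 at low energy
# versus 1/128.5 at ~80 GeV, i.e. a roughly 7% stronger force.
alpha_low = 1.0 / 137.036
alpha_high = 1.0 / 128.5

increase = alpha_high / alpha_low - 1.0
print(increase)  # about 0.066, i.e. roughly a 7% increase
```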

As proved, the physical nature of "displacement current" is gauge boson/radio wave energy exchange in the Catt anomaly [16]. Catt has no idea what the Standard Model or general relativity are about, but that is what his work can be used to understand, by getting to grips with what "displacement current" really is (radio) as distinct from the fantasy Maxwell developed, in which "displacement current" is not radio but is involved in radio together with Faraday's law, both acting at 90 degrees to the direction of propagation of radio. Maxwell's light is a complete fantasy that has been justified by a falsified history Maxwell and Hertz invented.

When Catt's TEM wave is corrected to include the fact that the step has a finite, not a zero, rise time, there is electromagnetic radiation emission sideways. Each conductor emits an inverted mirror image of the electromagnetic radiation pulse of the other, so the conductors swap energy. This is the true mechanism for the "displacement current" effect in Maxwell's equations. The electromagnetic radiation is not seen at a large distance because when the distance from the transmission line is large compared to the gap between the conductors, there is perfect cancellation by interference, so no energy is lost by radiation externally from the transmission line. Also, the electromagnetic radiation or "displacement current" is the mechanism of forces in electromagnetism. It shows that Maxwell's theory of light is misplaced, because Maxwell has light propagating in a direction at 90 degrees to "displacement current". Since light is "displacement current" it goes in the same direction, not at 90 degrees to it.

The minimal SUSY Standard Model shows electromagnetic force coupling increasing from alpha of 1/137 to alpha of 1/25 at 10^16 GeV, and the strong force falling from 1 to 1/25 at the same energy, hence unification.

The reason why the unification superforce strength is not 137 times electromagnetism but only 137/25 or about 5.5 times electromagnetism, is heuristically explicable in terms of potential energy for the various force gauge bosons.

If you have one force (electromagnetism) increase, more energy is carried by virtual photons at the expense of something else, say gluons. So the strong nuclear force will lose strength as the electromagnetic force gains strength. Thus simple conservation of energy will explain and allow predictions to be made on the correct variation of force strengths mediated by different gauge bosons. When you do this properly, you may learn that SUSY just isn't needed or is plain wrong, or else you will get a better grip on what is real and make some testable predictions as a result.

I frankly think there is something wrong with the depiction of the variation of weak force strength with energy shown in Figure 66 of Lisa Randall's "Warped Passages". The weak strength is extremely low (alpha of 10^-10) normally, say for beta decay of a neutron into a proton plus electron and antineutrino. This force coupling factor is given by Pi^2.h.M^4/(T.c^2.m^5), where h is Planck’s constant from Planck’s energy equation E = hf, M is the mass of the proton, T is the effective energy release ‘life’ of the radioactive decay (i.e. the familiar half-life multiplied by 1/ln 2 = 1.44), c is the velocity of light, and m is the mass of an electron.
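
As a sketch of how feeble this coupling is, one can evaluate Pi^2.h.M^4/(T.c^2.m^5) numerically. Here T is my illustrative assumption of the neutron half-life (~611 s) times 1.44, since the text does not give a value; standard constants give a dimensionless number of order 10^-9 to 10^-10, in the same extremely small ballpark as the alpha quoted above.

```python
import math

# Weak coupling factor from the text: pi^2 h M^4 / (T c^2 m^5).
# T is assumed here to be the neutron half-life (~611 s) times 1/ln 2;
# this choice is an illustrative assumption, not given in the text.
h = 6.626e-34    # Planck's constant, J s
M = 1.6726e-27   # proton mass, kg
m = 9.109e-31    # electron mass, kg
c = 2.998e8      # speed of light, m/s
T = 611.0 * 1.44 # effective decay life, s

coupling = math.pi**2 * h * M**4 / (T * c**2 * m**5)
print(coupling)  # of order 1e-9, an extremely weak coupling
```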
The diagram seems to indicate that at low energy, the weak force is stronger than electromagnetism, which seems to be in error. The conventional QFT treatments show that electroweak forces increase as a weak logarithmic function of energy.

There is a lot of obfuscation introduced by maths even at low levels of physics. Most QED calculations completely cover up the problems between SR and QED, that the virtual particles in the vacuum look different to observers in different motion, etc.

In Coulomb's law, the QED vector boson "photon" exchange force mechanism will be affected by motion, because photon exchanges along the direction of motion will be slowed down. Whether the FitzGerald-Lorentz contraction is physically due to this effect, or to a physical compression/squeeze from other force-carrying radiation of the vacuum, is unspeakable in plain English. The problem is dressed up in fancy maths, so people remain unaware that SR became obsolete with GR covariance in 1915. On the spin foam vacuum of LQG, the vacuum is full of all kinds of real and imaginary particles with various spins, virtual fermions, vector bosons, speculative Higgs field and superpartners.

First of all, take the simple question of how the vacuum allows photons to propagate any distance, but quickly attenuates W and Z bosons. Then you are back to the two equations for a transverse light wave photon: Faraday's law of electromagnetic induction, and Maxwell's vacuum displacement current in Ampere's law. Maxwell (after discarding two mechanical vacuums as wrong) wrote that the displacement current in the vacuum was down to tiny spinning "elements" of the vacuum (Maxwell, Treatise, Art. 822; based partly on the effect of magnetism on polarised light).

I cannot see how loop quantum gravity can be properly understood unless the vacuum spin network is physically understood with some semiclassical model. People always try to avoid any realistic discussion of spin by claiming that because electron spin is half a unit, the electron would have to spin around twice to look like one revolution. This isn't strange, because a Mobius strip with half a turn in the loop has the same property (because both sides are joined, a line drawn around it is twice the length of the circumference). Similarly, the role of the Schroedinger/Dirac wave equations is not completely weird, because sound waves are described by wave equations while being composed of particles. All you need is a lot of virtual particles in the vacuum interacting with the real particle, and it is jiggled around as if by Brownian motion.

It's really sad that virtually nobody is interested in pursuing this line of research, because everyone is brainwashed by string theory. I don't have the time or resources to do anything, and am not an expert in QFT. But I can see why nobody in the mainstream is looking in the right direction: it's simply because fact is stranger than fiction. They're all more at home in the 11th dimension than anywhere else...

Black holes are the spacetime fabric perfect fluid. The lack of viscosity is the lack of continuous drag. You just get a bulk flow of the spacetime fabric around the fundamental particle. The Standard Model says the mass has a physical mechanism: the surrounding Higgs field. When you move a fundamental particle in the Higgs field, and approach light speed, the Higgs field has less and less time to flow out of the way, so it mires the particle more, increasing its mass. You can't move a particle at light speed, because the Higgs field would have ZERO time to flow out of the way (since Higgs bosons are limited to light speed themselves), so inertial mass would be infinite. The increase in mass due to a surrounding fluid is known in hydrodynamics:

‘In this chapter it is proposed to study the very interesting dynamical problem furnished by the motion of one or more solids in a frictionless liquid. The development of this subject is due mainly to Thomson and Tait [Natural Philosophy, Art. 320] and to Kirchhoff [‘Ueber die Bewegung eines Rotationskörpers in einer Flüssigkeit’, Crelle, lxxi. 237 (1869); Mechanik, c. xix]. … it appeared that the whole effect of the fluid might be represented by an addition to the inertia of the solid. The same result will be found to hold in general, provided we use the term ‘inertia’ in a somewhat extended sense.’ – Sir Horace Lamb, Hydrodynamics, Cambridge University Press, 6th ed., 1932, p. 160. (Hence, the gauge boson radiation of the gravitational field causes inertia. This is also explored in the works of Drs Rueda and Haisch: see http://arxiv.org/abs/physics/9802031 http://arxiv.org/abs/gr-qc/0209016 , http://www.calphysics.org/articles/newscientist.html and http://www.eurekalert.org/pub_releases/2005-08/ns-ijv081005.php .)

The black holes of the spacetime fabric are the virtual fermions, etc., in the vacuum, which are different from the real electron, because the real electron is surrounded by a polarised layer of vacuum charges and Higgs field, which gives the mass.

The field which is responsible for associating the Higgs field particles with the mass can be inside or outside the polarised veil of dielectric, right? If the Higgs field particles are inside the polarised veil, the force between the fundamental particle and the mass-creating field particle is very strong, say 137 times Coulomb's law. On the other hand, if the mass-causing Higgs field particles are outside the polarised veil, the force is 137 times less than the strong force. This suggests how the 137 factor gets into the distribution of masses of leptons and hadrons.

Geometry of magnetic moment correction for electron: reason for number 2

Magnetic moment of electron= Dirac factor + 1st virtual particle coupling correction term = 1 + 1/(2.Pi.137.0...) = 1.00116 Bohr magnetons to 6 significant figures (more coupling terms are needed for greater accuracy). The 137.0... number is usually signified by 1/alpha, but it is clearer to use the number than to write 1 + alpha/(2.Pi).
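
The first-order figure quoted above is easy to reproduce numerically:

```python
import math

# First two terms of the electron's magnetic moment in Bohr magnetons:
# Dirac's core value 1, plus the first virtual-particle coupling
# correction alpha/(2*pi) = 1/(2*pi*137.036).
alpha = 1.0 / 137.036
moment = 1.0 + alpha / (2.0 * math.pi)
print(round(moment, 5))  # 1.00116
```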

The 1 is the magnetic contribution from the core of the electron. The second term, alpha/(2.Pi) or 1/(2.Pi.137), is the contribution from a virtual electron which is associated with the real electron core via shielded electric force. The charge of the core is 137e, the shielding due to the veil of polarised vacuum virtual charges around the core is 1/137, so the observed charge outside the veil is just e.

The core magnetism of 1 Bohr magneton predicted by Dirac's equation is too low. The true factor is nearer 1.00116, and the additional 0.116% is due to the vacuum virtual particles.

In other words, the vacuum reduces the electron's electric field, but increases its magnetic field! The reason for the increase in the magnetic field by the addition of alpha/(2.Pi) = 1/(2.Pi.137.0...) is simply that a virtual particle in the vacuum pairs up with the real particle via the electric field. The contribution of the second particle is smaller than 1 Bohr magneton by three factors, 2, Pi, and 137.0... Why? Well, heuristic reasoning suggests that the second particle is outside the polarised shield, and is thus subject to a shielding of 1/137.

The magnetic field from the real electron core which is transverse to the radial direction (i.e., think about the magnetic field lines over earth's equator, which run at 90 degrees to the radial direction) will be shielded, by the 137 factor. But the magnetic field that is parallel to the radial direction (i.e., the magnetic field lines emerging from earth's poles) are completely unshielded.

Whereas an electric field gets shielded where it is parallel to another electric field (the polarised vacuum field arrow points outward because virtual positrons are closer to the negative core than virtual electrons, so this outward arrow opposes the inward arrow of electric field towards the real electron core, causing attenuation), steady state magnetic fields only interact with steady state electric fields where specified by Ampere's law, which is half of one of Maxwell's four equations.

Ampere's law states that a curling magnetic field causes an electric current, just like an electric field does. Normally to get an electric current you need an electric potential difference between the two ends of a conductor, which causes electrons to drift. But a curling magnetic field around the conductor does exactly the same job. You might say no, because the two are different, but you'd be wrong. If you have an electric field variation, then the current will (by conventional theory) cause a curling magnetic field around the conductor.

At the end of the day, the two situations are identical. Moreover, conventional electric theory has some serious issues with it, since Maxwell's equations assume instantaneous action at a distance (such as a whole capacitor plate being charged up simultaneously), an assumption which has been experimentally and theoretically disproved, despite the suppression of this fact as 'heresy'.

Maxwell's equations have other issues as well; for example Coulomb's law, which is expressed in Maxwell's equations as the electric field from a charge (Gauss' law), is known to be wrong at high energies. Quantum field theory, and the experiments confirming it published by Koltick et al. in PRL in 1997, show that electric forces are about 7% higher at around 80 GeV than at low energies. This is because the polarised vacuum is like a sponge foam covering on an iron cannon ball. If you knock such foam-covered balls together very gently, you don't get a metallic clang or anything impressive. But if you fire them together very hard, the foam covering is breached by the force of the impact, and you experience the effects of the strong cores to a greater degree!
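The roughly 7% figure can be illustrated with the standard one-loop vacuum-polarisation ("running coupling") formula. This is a sketch, not a precise calculation: the effective quark masses below are my own illustrative choices (in careful work the quark-loop contribution is fixed from hadronic data, not from simple masses), and the energy scale is taken as ~91 GeV:

```python
import math

alpha = 1 / 137.036  # low-energy fine-structure constant

# (charge^2, colour factor, effective mass in GeV) for charged fermions
# lighter than the ~91 GeV scale; the quark masses are illustrative guesses.
fermions = [
    (1.0, 1, 0.000511),  # electron
    (1.0, 1, 0.1057),    # muon
    (1.0, 1, 1.777),     # tau
    (4/9, 3, 0.3),       # up (effective mass, assumed)
    (1/9, 3, 0.3),       # down (effective mass, assumed)
    (1/9, 3, 0.5),       # strange (assumed)
    (4/9, 3, 1.5),       # charm (assumed)
    (1/9, 3, 4.5),       # bottom (assumed)
]

Q = 91.0  # GeV, roughly the scale probed in the experiments cited

# Leading-log screening sum: each fermion loop reduces the shielding
# of the bare charge by (alpha/3pi) * q^2 * Nc * ln(Q^2/m^2).
s = sum(q2 * nc * math.log(Q**2 / m**2) for q2, nc, m in fermions)
alpha_eff = alpha / (1 - (alpha / (3 * math.pi)) * s)

print(alpha_eff / alpha)  # ≈ 1.07, i.e. roughly 7% stronger than at low energy
```

The point of the sketch is simply that the polarised-vacuum shield is progressively penetrated at higher energies, so the observed charge rises logarithmically, as the text describes.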

The polarised vacuum veil around the real electron core behaves a bit like the shield of foam rubber around a steel ball, protecting it from strong interactions if the impacts are low energy, but breaking down in very high-energy impacts.

The Schwinger correction term, 1/(2.Pi.137) contains 137 because of the shielding by the polarised vacuum veil.

The coupling is physically interpreted as a Pauli-exclusion principle type magnetic pairing of the real electron core with one virtual positron just outside the polarised veil. Because the spins are aligned to some extent in this process, the magnetic field which is of importance between the real electron core and the virtual electron is the transverse magnetic field, which is (unlike the polar magnetic field) shielded by the 137 factor like the electric field.

So that explains why the magnetic contribution from the virtual particle is 137 times weaker than that from the real electron core: the transverse magnetic field from the real electron core is reduced by the 137 factor, and that field is what causes the Pauli exclusion principle spin alignment. The two other reduction factors are 2 and Pi. These arise simply because each of the two particles is a spinning loop with its equator on the same plane as the other. The amount of field each particle sees of the other is 1/Pi of the total, because a loop has a circumference of Pi times its diameter, and only the diameter is seen edge-on, so only 1/Pi of the total is seen edge-on. Because the same factor applies to each of the two particles (the one real particle and the virtual particle), the correct reduction factor is twice this. Obviously, this is heuristic, and by itself doesn't prove anything. It is only when you add this explanation to the prediction of meson and baryon masses by the same 137 mechanism, and the derivation of force strengths, that it starts to become more convincing. Obviously it needs further work to see how much it says about further coupling corrections, but its advantage is that it is a discrete picture, so you don't have to artificially and arbitrarily impose cutoffs to get rid of infinities, like those of existing (continuous integral, not discrete) QFT.

One thing more I want to say, after the latest post (a few back, actually) here on deriving the strong nuclear force as 137 times Coulomb's law at low energies: the Standard Model does not indicate perfect force unification at high energy unless there is supersymmetry (SUSY), which requires superpartners which have never been observed, and whose energy is not predictable.

The minimal theory of supersymmetry predicts that the strong, weak and electromagnetic forces unify at 10^16 GeV. I've mentioned already that Koltick's experiments in 1997 were at 80 GeV, and that was pushing it. There is no way you can ever test a SUSY unification theory by firing particles together on this planet, since the planet isn't big enough to house or power such a massive accelerator. So you might as well be talking about UFOs as SUSY, because neither are observable scientifically in any conceivable future scenario of real science.

So let's forget SUSY and just think about the Standard Model as it stands. This shows that the strong, weak, and electromagnetic forces become almost (but not quite) unified at around 10^14 GeV, with an interaction strength of around alpha = 0.02, but that electromagnetism continues to rise at higher energy, becoming 0.033 at 10^20 GeV, for example. Basically, the Standard Model without SUSY predicts that electromagnetism continues to rise as a weak (logarithmic-type) function of energy, while the strong nuclear force falls. Potential energy conservation could well explain why the strong nuclear force must fall when the electromagnetic force rises. The fundamental force strength is not the same thing as the particle kinetic energy, remember. Normally you would expect the fundamental force strength to be completely distinct from the particle energy, but there are changes because the polarised vacuum veil around the core is progressively breached in higher-energy impacts.

The mechanism is that the 137 number is the ratio between the strong nuclear and the electromagnetic force strengths, a unification arising from the polarisation of the vacuum around a fundamental particle core. Therefore, the Coulomb force near the core of the electron is the same as the strong nuclear force (137 times the observed Coulomb force), but 99.27% of the core force is shielded by the veil of polarised vacuum surrounding the core. Therefore, if the mass-causing Higgs bosons of the vacuum are outside the polarised veil, they couple weakly, giving a mass 137 times smaller (electron mass), and if they are inside the veil of polarised vacuum, they couple 137 times more strongly, giving higher-mass particles like muons, quarks, etc. (depending on the discrete number of Higgs bosons coupling to the particle core). The formula for all directly observable particle masses (quarks are not directly observable, only as mesons and baryons) is (0.511 MeV).(137/2).n(N + 1) = 35n(N + 1) MeV.

This idea predicts that a particle core with n fundamental particles (n=1 for leptons, n = 2 for mesons, and obviously n=3 for baryons) coupling to N virtual vacuum particles (N is an integer) will have an associative inertial mass of Higgs bosons of:

(0.511 MeV).(137/2).n(N + 1) = 35n(N + 1) MeV,

where 0.511 MeV is the electron mass. Thus we get everything from this one mass plus the integers 1, 2, 3, etc., with a mechanism. We test this below against data for the mass of the muon and all ‘long-lived’ hadrons.
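As a quick numerical check of the formula as stated, against well-known observed masses (a sketch: the N values below are chosen as the nearest-integer fits, which is how the scheme is meant to be tested; the observed masses are standard reference values in MeV):

```python
unit = 0.511 * 137 / 2  # ≈ 35.0 MeV

def predicted_mass(n, N):
    """Mass formula 35*n*(N+1) MeV: n core particles, N vacuum couplings."""
    return unit * n * (N + 1)

# Nearest-integer fits (observed masses in MeV for comparison):
print(predicted_mass(1, 2))  # muon (n=1, lepton):    ≈ 105 vs observed 105.66
print(predicted_mass(2, 1))  # pion (n=2, meson):     ≈ 140 vs observed 139.57
print(predicted_mass(3, 8))  # nucleon (n=3, baryon): ≈ 945 vs observed ~939
```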

The problem is that people are used to looking to abstruse theory, due to the success of QFT in some areas, and looking at the data is out of fashion. If you look at the history of chemistry, the particle masses of atoms were known, and it took school teachers like Dalton and a Russian (Mendeleev) to work out periodicity, because the bigwigs were obsessed with vortex atom maths, the ‘string theory’ of that age. Eventually, the obscure school teachers won out over the mathematicians, because the vortex atom (the string theory equivalent of its day) did nothing, while empirical analysis did. It was eventually explained theoretically!

There was a crude empirical equation for lepton masses by A.O. Barut, PRL, v. 42 (1979), p. 1251. We can extend the basic idea to hadrons, using the formula (0.511 MeV).(137/2).n(N + 1) = 35n(N + 1) MeV given above: n fundamental particles in the core (n = 1 for leptons, n = 2 for mesons, n = 3 for baryons) coupling to N virtual vacuum particles (N an integer).

Accuracy tested against data for mass of muon and all ‘long-lived’ hadrons:

The mechanism is that the charge of the bare electron core is 137 times the Coulomb (polarisation-shielded) value, so vacuum interactions of bare cores of fundamental particles attract 137 times as much virtual mass from the vacuum, increasing the inertia similarly. It is absurd to suppose that these close fits, with only a few percent deviation, are random chance, and this can be shown by statistical testing using random numbers as the null hypothesis. So there is empirical evidence that this heuristic interpretation is on the right lines, whereas the ‘renormalisation’ is bogus:

The masses above for all the major long-lived hadrons are in units of (electron mass) x 137. A statistical chi-squared correlation test, with random numbers as the null hypothesis, gives positive statistical evidence that they are close to integers. Leptons and nucleons are the things most people focus on, and their masses are not integers in units of (electron mass) x 137: the muon is about 1.5 units on this scale, which is explained by the coupling of the core (mass 1) with a virtual particle, just as the electron's coupling increases its magnetic moment to 1 + 1/(2.Pi.137); the factor Pi is due to spin, and the 137 shielding factor doesn't apply to bare cores in proximity.

To recap, the big bang has an outward force of 6.0266 x 10^42 Newtons (by Newton’s 2nd law) that results in an equal inward force (by Newton’s 3rd law) which causes gravity as a shielded inward force, Higgs field or rather gauge boson pressure. This is based on standard heuristic quantum field theory (for the Feynman path integral approach), where forces are due not to empirical equations but to the exchange of gauge boson radiation. Where partially shielded by mass, the inward pressure causes gravity. Apples are pushed downwards towards the earth, a shield: ‘… the source of the gravitational field [gauge boson radiation] can be taken to be a perfect fluid…. A fluid is a continuum that ‘flows’... A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp. 89-90.

LeSage in 1748 argued that there is some kind of pressure in space, and that masses shield one another from the space pressure, thus being pushed together by the unshielded space pressure on the opposite side. Feynman discussed LeSage in his November 1964 lectures, The Character of Physical Law, and elsewhere explained that the major advance of general relativity, the contraction term, shortens the radius of every mass, like the effect of a pressure mechanism for gravity! He does not derive the equation, but we have done so above.

The magnetic force in electromagnetism results from the spin of vacuum particles, and this seems to be one thing about Maxwell’s spacetime fabric that was possibly not entirely wrong:

Maxwell’s 1873 Treatise on Electricity and Magnetism, Articles 822-3: ‘The ... action of magnetism on polarised light [discovered by Faraday not Maxwell] leads ... to the conclusion that in a medium ... is something belonging to the mathematical class as an angular velocity ... This ... cannot be that of any portion of the medium of sensible dimensions rotating as a whole. We must therefore conceive the rotation to be that of very small portions of the medium, each rotating on its own axis [spin] ... The displacements of the medium, during the propagation of light, will produce a disturbance of the vortices ... We shall therefore assume that the variation of vortices caused by the displacement of the medium is subject to the same conditions which Helmholtz, in his great memoir on Vortex-motion [of 1858; sadly Lord Kelvin in 1867 without a fig leaf of empirical evidence falsely applied this vortex theory to atoms in his paper ‘On Vortex Atoms’, Phil. Mag., v4, creating a mathematical cult of vortex atoms just like the mathematical cult of string theory now; it created a vast amount of prejudice against ‘mere’ experimental evidence of radioactivity and chemistry that Rutherford and Bohr fought], has shewn to regulate the variation of the vortices [spin] of a perfect fluid.’

‘… the source of the gravitational field can be taken to be a perfect fluid…. A fluid is a continuum that ‘flows’... A perfect fluid is defined as one in which all antislipping forces are zero, and the only force between neighboring fluid elements is pressure.’ – Professor Bernard Schutz, General Relativity, Cambridge University Press, 1986, pp. 89-90.

‘In this chapter it is proposed to study the very interesting dynamical problem furnished by the motion of one or more solids in a frictionless liquid. The development of this subject is due mainly to Thomson and Tait [Natural Philosophy, Art. 320] and to Kirchhoff [‘Ueber die Bewegung eines Rotationskörpers in einer Flüssigkeit’, Crelle, lxxi. 237 (1869); Mechanik, c. xix]. … it appeared that the whole effect of the fluid might be represented by an addition to the inertia of the solid. The same result will be found to hold in general, provided we use the term ‘inertia’ in a somewhat extended sense.’ – Sir Horace Lamb, Hydrodynamics, Cambridge University Press, 6th ed., 1932, p. 160. (Hence, the gauge boson radiation of the gravitational field causes inertia. This is also explored in the works of Drs Rueda and Haisch: see http://arxiv.org/abs/physics/9802031 , http://arxiv.org/abs/gr-qc/0209016 , http://www.calphysics.org/articles/newscientist.html and http://www.eurekalert.org/pub_releases/2005-08/ns-ijv081005.php .)

So the Feynman problem with virtual particles in the spacetime fabric retarding motion does indeed cause the FitzGerald-Lorentz contraction, just as they cause the radial gravitationally produced contraction of distances around any mass (equivalent to the effect of the pressure of space squeezing things and impeding accelerations). What Feynman thought may cause difficulties is really the mechanism of inertia!

Einstein’s greatest achievement, proof for the fabric of space in general relativity (known as the dielectric of the vacuum in electronics, and called the continuum by Einstein in his inaugural lecture at Leyden University in 1920), is a neglected concept. (While Einstein’s proof for the properties of the fabric of space is theoretical, Ivor Catt and others developed experimental evidence while working with sampling oscilloscopes and pulse generators on the electromagnetic interconnection of ICs.) Notice that air pressure is 10 metric tons per square metre, but people don’t ridicule the fact that they can’t feel the 14.7 pounds per square inch. People ridicule the idea that gravity is a pushing effect, claiming that if it were so then an umbrella would somehow stop it (despite the fact that x-rays and other radiation penetrate umbrellas, showing they are mainly void). These people ignore the fact that the same false argument would equally ‘disprove’ pulling gravity, since standing on the umbrella would equally stop ‘attraction’ …

‘Recapitulating, we may say that according to the general theory of relativity, space is endowed with physical qualities... According to the general theory of relativity space without ether is unthinkable.’ – Albert Einstein, Leyden University lecture on ‘Ether and Relativity’, 1920. (Einstein, A., Sidelights on Relativity, Dover, New York, 1952, pp. 15, 16, and 23.)

‘The Michelson-Morley experiment has thus failed to detect our motion through the aether, because the effect looked for – the delay of one of the light waves – is exactly compensated by an automatic contraction of the matter forming the apparatus…. The great stumbling-block for a philosophy which denies absolute space is the experimental detection of absolute rotation.’ – Professor A.S. Eddington (who confirmed Einstein’s general theory of relativity in 1919), MA, MSc, FRS, Space Time and Gravitation: An Outline of the General Relativity Theory, Cambridge University Press, Cambridge, 1921, pp. 20, 152.

‘Some distinguished physicists maintain that modern theories no longer require an aether… I think all they mean is that, since we never have to do with space and aether separately, we can make one word serve for both, and the word they prefer is ‘space’.’ – A.S. Eddington, ‘New Pathways in Science’, v2, p39, 1935.

‘Looking back at the development of physics, we see that the ether, soon after its birth, became the enfant terrible of the family of physical substances. … We shall say our space has the physical property of transmitting waves and so omit the use of a word we have decided to avoid. The omission of a word from our vocabulary is of course no remedy; the troubles are indeed much too profound to be solved in this way. Let us now write down the facts which have been sufficiently confirmed by experiment without bothering any more about the ‘e---r’ problem.’ – Albert Einstein and Leopold Infeld, Evolution of Physics, 1938, pp. 184-5; written quickly to get Jewish Infeld out of Nazi Germany and accepted as a worthy refugee in America.

So the contraction of the Michelson-Morley instrument made it fail to detect absolute motion. This is why special relativity needs replacement with a causal general relativity:

‘… with the new theory of electrodynamics [vacuum filled with virtual particles] we are rather forced to have an aether.’ – Paul A. M. Dirac, ‘Is There an Aether?,’ Nature, v168, 1951, p906. (If you have a kid playing with magnets, how do you explain the pull and push forces felt through space? As ‘magic’?) See also Dirac’s paper in Proc. Roy. Soc. v.A209, 1951, p.291.

‘It seems absurd to retain the name ‘vacuum’ for an entity so rich in physical properties, and the historical word ‘aether’ may fitly be retained.’ – Sir Edmund T. Whittaker, A History of the Theories of the Aether and Electricity, 2nd ed., v1, p. v, 1951.

‘It has been supposed that empty space has no physical properties but only geometrical properties. No such empty space without physical properties has ever been observed, and the assumption that it can exist is without justification. It is convenient to ignore the physical properties of space when discussing its geometrical properties, but this ought not to have resulted in the belief in the possibility of the existence of empty space having only geometrical properties... It has specific inductive capacity and magnetic permeability.’ - Professor H.A. Wilson, FRS, Modern Physics, Blackie & Son Ltd, London, 4th ed., 1959, p. 361.

‘Scientists have thick skins. They do not abandon a theory merely because facts contradict it. They normally either invent some rescue hypothesis to explain what they then call a mere anomaly or, if they cannot explain the anomaly, they ignore it, and direct their attention to other problems. Note that scientists talk about anomalies, recalcitrant instances, not refutations. History of science, of course, is full of accounts of how crucial experiments allegedly killed theories. But such accounts are fabricated long after the theory had been abandoned. ... What really count are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes. Now, how do scientific revolutions come about? If we have two rival research programmes, and one is progressing while the other is degenerating, scientists tend to join the progressive programme. This is the rationale of scientific revolutions. ... Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory. Kuhn is wrong in thinking that scientific revolutions are sudden, irrational changes in vision. The history of science refutes both Popper and Kuhn: on close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths: what normally happens is that progressive research programmes replace degenerating ones.’ – Imre Lakatos, Science and Pseudo-Science, pages 96-102 of Godfrey Vesey (editor), Philosophy in the Open, Open University Press, Milton Keynes, 1974.

‘All charges are surrounded by clouds of virtual photons, which spend part of their existence dissociated into fermion-antifermion pairs. The virtual fermions with charges opposite to the bare charge will be, on average, closer to the bare charge than those virtual particles of like sign. Thus, at large distances, we observe a reduced bare charge due to this screening effect.’ – I. Levine, D. Koltick, et al., Physical Review Letters, v.78, 1997, no.3, p.424.

The physical content of GR is the OPPOSITE of SR:

Notice that in SR, there is no mechanism for mass, but the Standard Model says the mass has a physical mechanism: the surrounding Higgs field. When you move a fundamental particle in the Higgs field, and approach light speed, the Higgs field has less and less time to flow out of the way, so it mires the particle more, increasing its mass. You can't move a particle at light speed, because the Higgs field would have ZERO time to flow out of the way (since Higgs bosons are limited to light speed themselves), so inertial mass would be infinite. The increase in mass due to a surrounding fluid is known in hydrodynamics:

‘In this chapter it is proposed to study the very interesting dynamical problem furnished by the motion of one or more solids in a frictionless liquid. The development of this subject is due mainly to Thomson and Tait [Natural Philosophy, Art. 320] and to Kirchhoff [‘Ueber die Bewegung eines Rotationskörpers in einer Flüssigkeit’, Crelle, lxxi. 237 (1869); Mechanik, c. xix]. … it appeared that the whole effect of the fluid might be represented by an addition to the inertia of the solid. The same result will be found to hold in general, provided we use the term ‘inertia’ in a somewhat extended sense.’ – Sir Horace Lamb, Hydrodynamics, Cambridge University Press, 6th ed., 1932, p. 160. (Hence, the gauge boson radiation of the gravitational field causes inertia. This is also explored in the works of Drs Rueda and Haisch: see http://arxiv.org/abs/physics/9802031 http://arxiv.org/abs/gr-qc/0209016 , http://www.calphysics.org/articles/newscientist.html and http://www.eurekalert.org/pub_releases/2005-08/ns-ijv081005.php .)

In his essay on general relativity in the book ‘It Must Be Beautiful’, Penrose writes: ‘… when there is matter present in the vicinity of the deviating geodesics, the volume reduction is proportional to the total mass that is surrounded by the geodesics. This volume reduction is an average of the geodesic deviation in all directions … Thus, we need an appropriate entity that measures such curvature averages. Indeed, there is such an entity, referred to as the Ricci tensor …’ Feynman discussed this simply as a reduction in radial distance around a mass of (1/3)GM/c^2 = 1.5 mm for the Earth. It’s such a shame that the physical basics of general relativity are not taught, and the whole thing gets abstruse. The curved space or 4-d spacetime description is needed to avoid Pi varying due to gravitational contraction of radial distances but not circumferences.

The velocity needed to escape from the gravitational field of a mass (ignoring atmospheric drag), beginning at distance x from the centre of mass, by Newton’s law will be v = (2GM/x)^{1/2}, so v^2 = 2GM/x. The situation is symmetrical; ignoring atmospheric drag, the speed that a ball falls back and hits you is equal to the speed with which you threw it upwards (the conservation of energy). Therefore, the energy of mass in a gravitational field at radius x from the centre of mass is equivalent to the energy of an object falling there from an infinite distance, which by symmetry is equal to the energy of a mass travelling with escape velocity v.

By Einstein’s principle of equivalence between inertial and gravitational mass, this gravitational acceleration field produces an identical effect to ordinary motion. Therefore, we can place the square of escape velocity (v^2 = 2GM/x) into the Fitzgerald-Lorentz contraction, giving g = (1 – v^2/c^2)^{1/2} = [1 – 2GM/(xc^2)]^{1/2}.
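For scale, at the Earth's surface this factor differs from unity by only about 7 parts in 10^10 (a sketch assuming standard values of G, M, r and c):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
r = 6.371e6     # radius of the Earth, m
c = 2.998e8     # speed of light, m/s

# Gravitational factor [1 - 2GM/(r c^2)]^{1/2} at the Earth's surface:
factor = (1 - 2 * G * M / (r * c**2)) ** 0.5
print(1 - factor)  # ≈ 7e-10, the fractional slowing of clocks at the surface
```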

However, there is an important difference between this gravitational transformation and the usual Fitzgerald-Lorentz transformation: with velocity, length is contracted in only one dimension (that of motion), whereas with spherically symmetric gravity, length is contracted equally in 3 dimensions (in other words, radially in 3 dimensions, not sideways between radial lines!). Using the binomial expansion to the first two terms of each factor:

(1 – v^2/c^2)^{1/2} ≈ 1 – (1/2)v^2/c^2, and [1 – 2GM/(xc^2)]^{1/2} ≈ 1 – GM/(xc^2), where for spherical symmetry (x = y = z = r) the contraction is spread over three perpendicular dimensions, not just one as in the FitzGerald-Lorentz contraction: x/x_0 + y/y_0 + z/z_0 = 3r/r_0. Hence the radial contraction of space around a mass is r/r_0 = 1 – GM/(3rc^2).

Therefore, clocks slow down not only when moving at high velocity, but also in gravitational fields, and distance contracts in all directions toward the centre of a static mass. The variation in mass with location in a gravitational field implied by this transformation is due to variations in gravitational potential energy. The contraction of space is by (1/3)GM/c^2.
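The quoted 1.5 mm figure for the Earth follows directly from (1/3)GM/c^2 (standard constants assumed, as before):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
c = 2.998e8     # speed of light, m/s

# Radial contraction (1/3) GM/c^2 for the Earth:
contraction = G * M / (3 * c**2)
print(contraction * 1000)  # ≈ 1.48 mm
```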

This is the 1.5-mm contraction of the earth’s radius that Feynman obtains, as if there is pressure in space. An equivalent pressure effect causes the Lorentz-FitzGerald contraction of objects in the direction of their motion in space, similar to the wind pressure when moving in air, but without viscosity. Feynman was unable to make further progress with LeSage gravity and gave up on it in 1965. However, we have a solution…

If the electron moves at speed v as a whole in a direction orthogonal (perpendicular) to the plane of the spin, then the c speed of spin will be reduced according to Pythagoras: v^2 + x^2 = c^2, where x is the new spin speed. For v = 0 this gives x = c. What is interesting is that this model gives rise to the Lorentz-FitzGerald transformation naturally, because x = c(1 – v^2/c^2)^{1/2}. Since all time is defined by motion, this (1 – v^2/c^2)^{1/2} factor of reduction of fundamental particle spin speed is therefore the time-dilation factor for the electron when moving at speed v.
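A minimal sketch of the Pythagorean spin-speed argument: for bulk speed v, the residual spin speed x = (c^2 – v^2)^{1/2} is algebraically identical to c times the FitzGerald-Lorentz factor:

```python
import math

c = 1.0  # work in units where c = 1

def spin_speed(v):
    """Residual spin speed from v^2 + x^2 = c^2 (Pythagoras)."""
    return math.sqrt(c**2 - v**2)

# At v = 0.6c the spin speed, and hence the clock rate, falls to 0.8c,
# the same as the FitzGerald-Lorentz factor (1 - v^2/c^2)^{1/2}:
v = 0.6
print(round(spin_speed(v), 6))                    # → 0.8
print(round(c * math.sqrt(1 - v**2 / c**2), 6))   # → 0.8
```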

Motl's quibbles about the metric of SR are just ignorance. The contraction is a physical effect as shown above, with length contraction in the direction of motion, mass increase, and time dilation all having physical causes. The equivalence principle and the contraction physics of spacetime "curvature" are the advances of GR. GR is a replacement of the false SR, which gives wrong answers for all real (curved) motions since it can't deal with acceleration: the twins paradox.

Strangely, the ‘critics’ are ignoring the consensus on where LQG is a useful approach, and just trying to ridicule it. In a recent post on his blog, for example, Motl states that special relativity should come from LQG. Surely Motl knows that GR deals better with the situation than SR, which is a restricted theory that is not even able to deal with the spacetime fabric (SR implicitly assumes NO spacetime fabric curvature, to avoid acceleration!).

When asked, Motl responds by saying Dirac’s equation in QFT is a unification of SR and QM. What Motl doesn’t grasp is that the ‘SR’ EQUATIONS are the same in GR as in SR, but the background is totally different:

‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant). …’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.

Newton’s laws of motion and gravity expressed in general relativity, together with Maxwell’s equations of electromagnetism, are the core of classical physics. Einstein’s general relativity was inspired by a failure of special relativity to deal with accelerations (the ‘twins paradox’) and gravity. Einstein’s equivalence principle argues that inertial and gravitational forces are equivalent.

In 1917, Einstein applied general relativity to the universe, attempting to model a static universe. He simply added a ‘cosmological constant’ term that made the strength of gravity fall faster than the inverse square law, becoming zero at the average inter-galactic distance and repulsive at greater distances. He claimed this would keep all galaxies at the same average distance from one another. Later it was proved (1) that Einstein’s model was unstable and would lead to galaxies clumping together into compressed lumps over time, and (2) that the universe is not static. The failure of the static-universe prediction was exposed by the discovery of strong evidence for a big bang. Despite this, speculative efforts were made by Kaluza and Klein to unify forces using extra dimensions, and later ‘strings’. This is refuted by Danny Ross Lunsford:

D.R. Lunsford has a paper, ‘Gravitation and Electrodynamics over SO(3,3)’, on the CERN document server, EXT-2003-090: ‘an approach to field theory is developed in which matter appears by interpreting source-free (homogeneous) fields over a 6-dimensional space of signature (3,3), as interacting (inhomogeneous) fields in spacetime. The extra dimensions are given a physical meaning as ‘coordinatized matter’. The inhomogeneous energy-momentum relations for the interacting fields in spacetime are automatically generated by the simple homogeneous relations in 6-D. We then develop a Weyl geometry over SO(3,3) as base, under which gravity and electromagnetism are essentially unified via an irreducible 6-calibration invariant Lagrange density and corresponding variation principle. The Einstein-Maxwell equations are shown to represent a low-order approximation, and the cosmological constant must vanish in order that this limit exist.’ Lunsford begins with an enlightening overview of attempts to unify electromagnetism and gravitation:

‘The old goal of understanding the long-range forces on a common basis remains a compelling one. The classical attacks on this problem fell into four classes:

‘1. Projective theories (Kaluza, Pauli, Klein)

‘2. Theories with asymmetric metric (Einstein-Mayer)

‘3. Theories with asymmetric connection (Eddington)

‘4. Alternative geometries (Weyl)

‘All these attempts failed. In one way or another, each is reducible and thus any unification achieved is purely formal. The Kaluza theory requires an ad hoc hypothesis about the metric in 5-D, and the unification is non-dynamical. As Pauli showed, any generally covariant theory may be cast in Kaluza’s form. The Einstein-Mayer theory is based on an asymmetric metric, and as with the theories based on asymmetric connection, is essentially algebraically reducible without additional, purely formal hypotheses.

‘Weyl’s theory, however, is based upon the simplest generalization of Riemannian geometry, in which both length and direction are non-transferable. It fails in its original form due to the non-existence of a simple, irreducible calibration invariant Lagrange density in 4-D. One might say that the theory is dynamically reducible. Moreover, the possible scalar densities lead to 4th order equations for the metric, which, even supposing physical solutions could be found, would be differentially reducible. Nevertheless the basic geometric conception is sound, and given a suitable Lagrangian and variational principle, leads almost uniquely to an essential unification of gravitation and electrodynamics with the required source fields and conservation laws.’ Again, the general concepts involved are very interesting: ‘from the current perspective, the Einstein-Maxwell equations are to be regarded as a first-order approximation to the full calibration-invariant system.

‘One striking feature of these equations that distinguishes them from Einstein’s equations is the absent gravitational constant – in fact the ratio of scalars in front of the energy tensor plays that role. This explains the odd role of G in general relativity and its scaling behaviour. The ratio has conformal weight 1 and so G has a natural dimensionfulness that prevents it from being a proper coupling constant – so the theory explains why general relativity, even in the linear approximation and the quantum theory built on it, cannot be regularised.’ [Lunsford goes on to suggest gravity is a residual of the other forces, which is one way to see it.]

Danny Ross Lunsford’s major paper, published in Int. J. Theor. Phys., v. 43 (2004), No. 1, pp. 161-177, was submitted to arXiv.org but was removed from arXiv.org, apparently by censorship, since it investigated a 6-dimensional spacetime, which again is not exactly worshipping Witten’s 10/11 dimensional M-theory. It is, however, available on the CERN document server as EXT-2003-090, quoted above.

‘… We see now that we are in trouble in 4-d. The first three [dimensions] will lead to 4th order differential equations in the metric. Even if these may be differentially reduced to match up with gravitation as we know it, we cannot be satisfied with such a process, and in all likelihood there is a large excess of unphysical solutions at hand. … Only first in six dimensions can we form simple rational invariants that lead to a sensible variational principle. The volume factor now has weight 3, so the possible scalars are weight -3, and we have the possibilities [equations]. In contrast to the situation in 4-d, all of these will lead to second order equations for the g, and all are irreducible - no arbitrary factors will appear in the variation principle. We pick the first one. The others are unsuitable … It is remarkable that without ever introducing electrons, we have recovered the essential elements of electrodynamics, justifying Einstein’s famous statement …’

In the letters pages of the October 1996 issue of Electronics World the basic mechanism was first released, with further notices placed in the June 1999 and January 2001 issues. Two articles, in the August 2002 and April 2003 issues, were followed by letters in various issues. In 2004, the result

rho = rho_local.e^3 was obtained using the mass continuity equation of hydrodynamics and the Hubble law, allowing for the higher density of the earlier-time big bang universe with increasing distance (divergence in spacetime, or redshift of gauge bosons, prevents the effective observable density from increasing to infinity with increasing distance/time past). In 2005, a radiation pressure-based calculation was added and many consequences were worked out. The first approach worked on is the ‘alternative proof’ below, the fluid spacetime fabric: the fabric of spacetime described by the Feynman path integrals can be usefully modelled by the ‘spin foam vacuum’ of ‘loop quantum gravity’.

The observed supernova dimming was predicted via the October 1996 Electronics World article, ahead of its discovery by Perlmutter, et al. The mechanism omitted from general relativity (above) does away with ‘dark energy’ by showing that gravity, generated by the mechanism from the expansion, does not slow down the recession. In addition, it proves that the ‘critical density’ obtained by general relativity ignoring the gravity mechanism above is too high by a factor of half the cube of the mathematical constant e, in other words a factor of about 10. The prediction was not published in PRL, Nature, CQG, etc., because of bigotry toward ‘alternatives’ to vacuous string theory.
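The ‘factor of 10’ quoted is just half the cube of e; a one-line arithmetic check in Python:

```python
import math

# The text's claim: the uncorrected 'critical density' is too high by
# a factor of e^3 / 2, described as "a factor of about 10".
factor = math.e ** 3 / 2
print(round(factor, 2))  # 10.04, i.e. roughly a factor of ten
```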

2. Quantum mechanics and electromagnetism

Equations of Maxwell’s ‘displacement current’ in a vacuum, Schroedinger’s time-dependent waves in space, and Dirac.

‘I think the important and extremely difficult task of our time is to try to build up a fresh idea of reality.’ – W. Pauli, letter to Fierz, 12 August 1948.

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 German publication, ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation of quantum mechanics.’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303. (Note statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum is full of gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum.)

‘... the view of the status of quantum mechanics which Bohr and Heisenberg defended - was, quite simply, that quantum mechanics was the last, the final, the never-to-be-surpassed revolution in physics ... physics has reached the end of the road.’ – Sir Karl Popper, Quantum Theory and the Schism in Physics, Rowman and Littlefield, NJ, 1982, p. 6.

‘To try to stop all attempts to pass beyond the present viewpoint of quantum physics could be very dangerous for the progress of science and would furthermore be contrary to the lessons we may learn from the history of science … Besides, quantum physics … seems to have arrived at a dead end. This situation suggests strongly that an effort to modify the framework of ideas in which quantum physics has voluntarily wrapped itself would be valuable …’ – Professor Louis de Broglie, Foreword to Dr David Bohm’s book, Causality and Chance in Modern Physics, Routledge and Kegan Paul, London, 2nd ed., 1984, p. xiv.

‘Niels Bohr brain-washed a whole generation of physicists into believing that the problem had been solved fifty years ago.’ – Murray Gell-Mann, in The Nature of the Physical Universe, Wiley, New York, 1979, p. 29.

STRING THEORY: Every age has scientists abusing personal pet speculations without evidence, relying on mathematics to dupe the public. Maxwell had the mechanical aether, Lord Kelvin the vortex atom, J.J. Thomson the plum pudding atom. Wolfgang Pauli called this kind of thing ‘not even wrong’.

Maxwell's displacement current states: displacement current = [permittivity of space].dE/dt. This is similar in a sense to the time-dependent quantum mechanical Schroedinger equation: H[psi] = (0.5ih/pi).(d[psi]/dt). Here H is the hamiltonian, [psi] is the wave function, i = (-1)^0.5, and pi = 3.14... The product H[psi] determines energy transfer. This Schroedinger equation thus says that energy transfer occurs in proportion to the rate of change of the wave function, just as the displacement current equation says that electric energy transfer occurs in proportion to the rate of change of electric field strength.
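The parallel can be made concrete with a numeric sketch of the displacement current term alone; the sinusoidal field, its amplitude and its frequency below are illustrative assumptions, not values from the text:

```python
import math

EPSILON_0 = 8.854e-12  # permittivity of free space, F/m

def displacement_current_density(e_amplitude, frequency, t):
    """Maxwell's term: J_d = [permittivity of space].dE/dt,
    for an assumed field E(t) = E0.sin(2.Pi.f.t)."""
    omega = 2 * math.pi * frequency
    de_dt = e_amplitude * omega * math.cos(omega * t)
    return EPSILON_0 * de_dt

# Illustrative: a 1 kV/m field oscillating at 1 MHz, sampled at t = 0,
# where dE/dt (and hence the energy transfer rate) is greatest.
j_d = displacement_current_density(1e3, 1e6, 0.0)
print(j_d)  # ~0.056 A/m^2
```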

Dirac's equation [25] is not as complex or mysterious as claimed; it is just a relativistic time-dependent Schroedinger equation, which states that the rate of change of the wave function with time is directly proportional to the energy delivered [26]. This is what Maxwell's displacement current term expresses physically: the flow of energy from one charging conductor to the other. The electromagnetic field strengths are directly related to the quantum mechanical wavefunction. If you charge one capacitor plate first, the other plate is then charged by induction, which means that some energy flows to the other plate, beginning to arrive after a delay time depending on its distance from the first plate. However, Catt ignores this because in all of Catt's capacitor/transmission line examples, both conductors are charged by the same amount (of opposite charge) simultaneously, so they exchange equal energy with one another. The point is, there is a direct connection between the "classical" electromagnetic energy exchange and the force mechanism of quantum field theory and quantum mechanics. You have to appreciate that Catt will ignore people if errors or new developments come along quietly, but will stand up and shout them down if they are pushed; so he won't let anything happen at all, by any means, unless it is within his own framework of (sometimes wrong) ideas.

Dr Arnold Lynch worked for BT on microwave transmission and interference, and wanted to know what happens to electromagnetic energy when it apparently "cancels out" by interference. Because energy is conserved, you can't cancel out the energy, although you can indeed cancel out the electromagnetic fields. This is of course the case with all matter, where opposite charges combine in atoms to give neutral atoms, and electrons magnetically pair up with opposite spins (Pauli exclusion principle) so that there is usually no net magnetism.

The contraction of materials only in the direction of their motion through the physical fabric of space, and their contraction due to the space pressure of gravity in the outward (radial) direction from the centre of a mass, indicate a physical nature of space consistent with the 377 ohm property of the vacuum in electronics. Feynman’s approach to quantum electrodynamics, showing that interference creates the illusion that light always travels along the shortest route, accords with this model of space. However, Feynman fails to examine radio wave transmission, which cannot be treated by quantum theory, as the waves are continuous and of macroscopic size, easily examined and experimented with. The emission of radio is due to the accelerations of electrons as the electric field gradient varies in the transmitter aerial. Because electrons are naturally spinning, even still electrons have centripetal acceleration and emit energy continuously. The natural exchange of such energy creates a continuous, non-periodic equilibrium that is only detectable as electromagnetic forces. Photon emission as described by Feynman is periodic emission of energy. Thus in a sheet of glass there are existing energy transfer processes passing energy around at light speed before light enters. The behaviour of light therefore depends on how it is affected by the existing energy flow inside the glass, which depends on its thickness. Feynman explains in his 1985 book QED that ‘When a photon comes down, it interacts with electrons throughout the glass, not just on the surface. 
The photon and electrons do some kind of dance, the net result of which is the same as if the photon hit only the surface.’ Feynman in the same book concedes that his path-integrals approach to quantum mechanics explains the chaos of the atomic electron as being simply a Bohm-type interference phenomenon: ‘when the space through which a photon moves becomes too small (such as the tiny holes in the screen) … we discover that … there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that … interference becomes very important.’ Thus Feynman suggests that a single hydrogen atom (one electron orbiting a proton, which can never be seen without an additional particle as part of the detection process) would behave classically, and it is the presence of a third particle (say in the measuring process) which interrupts the electron orbit by interference, creating the 3+ body chaos of the Schroedinger wave electron orbital.

QFT is heuristically explained with a classical model of a polarised virtual charge dielectric

Bootstrap model with a big TOE, booted of course, accelerated at high energy towards M-theory

LeSage gravity mechanism corrected

Georges Louis LeSage, between 1747 and 1782, explained gravity classically as a shadowing effect of space pressure by masses. The speculative, non-quantitative mechanism was published in French and is available online (G.L. LeSage, Lucrece Newtonien, Nouveaux Memoires De L’Academie Royal de Sciences et Belle Letters, 1782, pp. 404-31). Because gravity depends on the mass within the whole earth’s volume, LeSage predicted that the atomic structure was mostly void, a kind of nuclear atom, which was confirmed by Rutherford’s work in 1911.

Taking things simply, the virtual vacuum surrounding each charge core is polarised, which screens the core charge. This is geometrical. The virtual positron-electron pairs in the vacuum are polarised: the virtual positive charges are attracted closer to the negative core than the virtual electrons, which are repelled to greater distances. Hence the real negative core has a positive virtual shell just around it, with a negative virtual shell beyond it, which falls off to neutral at great distances. This virtual particle or heuristic (trial and error) explanation is used in the Feynman approach to quantum field theory, and was validated experimentally in 1997, by firing leptons together at high energy to penetrate the virtual shield and observe the greater charge nearer the bare core of an electron.

Some 99.27% of the inward-directed electric field from the electron core is cancelled by the outward-directed electric field due to the shells of virtual charges polarised in the vacuum by the electron core. Traditionally, the mathematics of quantum field theory has had to be ‘renormalised’ to stop the electron core from interacting with an infinite number of virtual charges. The renormalisation process forcibly limits the size of the integral for each coupling correction, which would otherwise be infinite. Heuristically, renormalisation is limiting each coupling correction (Feynman diagram) to one virtual charge at a time. Hence, for the first coupling correction (which predicts the electron’s magnetism correctly to 5 decimals or 6 significant figures), the electron core charge is weakened by the polarised charge (positron shell) and is 137 times weaker when associating with 1 virtual electron in the space around the positive shell. The paired magnetic field is 1 + 1/(2.Pi.137) = 1.00116 Bohr magnetons; the first term is the unshielded magnetism of the real electron core, and the second is the contribution from the paired virtual electron in the surrounding space, allowing for the transverse direction of the core magnetic field lines around the electron loop equator (the magnetic field lines are radial at the poles). My understanding now is that the transverse magnetic field surrounding the core of the electron is shielded by the 137 factor, and it is this shielded transverse field which couples with a virtual electron. The radial magnetic field lines emerging from the electron core poles are of course not attenuated, since they don’t cross electric field lines in the polarised vacuum, but merely run parallel to electric field lines. (This is a large step forward in heuristic physics from that of a couple of weeks back.)
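The quoted figure is easy to reproduce; this sketch uses the text’s round shielding factor of 137 (the measured value is nearer 137.036):

```python
import math

SHIELDING_FACTOR = 137  # the text's polarised-vacuum attenuation factor

# First coupling correction to the electron's magnetic moment,
# in Bohr magnetons: 1 + 1/(2.Pi.137).
moment = 1 + 1 / (2 * math.pi * SHIELDING_FACTOR)
print(round(moment, 5))  # 1.00116
```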

The pairing is the Pauli-exclusion process. Because an electron has a spin, it is a magnet. Every two adjacent electrons in the atom have opposite spin directions (up or down). There are two natural ways you can put two magnets together: end to end, or side to side. The side to side arrangement, with one North pole facing up and the other down, is most stable, so it occurs in the atom where the electrons are in chaotic orbits. The only way you can measure the spin of an electron is by using a magnetic field, which automatically aligns the electron, so the spin can only take two possible values (up or down), and the magnetism is either adding to or subtracting from the background field. You can flip the electron over by adding the energy needed for it to add to the magnetic field. None of this is mystical, any more than playing with magnets and finding they naturally align in certain (polar) ways only. The Pauli exclusion principle states that the four quantum numbers (including spin) are unique for every electron in the atom. Spin was the last quantum number to be accepted.

In order to heuristically explain the abstruse 1 + 1/(2.Pi.137) = 1.00116 first coupling correction for the electron’s magnetism in QED, we suggested on Motl’s blog that the electron core magnetism is not attenuated by the polarised vacuum of space, while the electric field is attenuated by a factor of 137. The 2.Pi factor comes from the way a virtual electron in the vacuum couples with the real core electron, both of which are spinning. (Magnetism is communicated via the spin of virtual particles in the vacuum, according to Maxwell’s electromagnetism.) The coupling is related to the mechanism of the Pauli exclusion principle. The coupling is weakened by the 137 factor because the polarisation of virtual charges creates an inner virtual positron shell around the real electron core, with an outer shell of virtual electrons. The polarised vacuum thus shields the core charge by a factor of 137.

The extra mass-energy of a muon means that it interacts not only with virtual electrons and positrons, but also with more energetic virtual particles in the vacuum. This very slightly affects the measured magnetic moment of the muon, since it introduces extra coupling corrections that don’t occur for an electron.

Could it be that the effect on the electron’s mass is greater for the same reason, but that the effect for mass is greater than for the magnetic field, because it doesn’t involve the 137-attenuation factor? Somehow you get the feeling that we are going towards a ‘bootstrap’ physics approach; the muon is about 207 times more massive than the electron because the greater mass causes it to interact more with the spacetime fabric, which adds mass! (‘I pulled myself upward by my own bootstraps.’) I’ll come back to this at the end of this paper, with a list of tested predictions of particle masses that it yields.

The gravity mechanism has been applied to electromagnetism, which has both attractive and repulsive forces, and to the attractive nuclear forces. These are all powered by the gravity mechanism in a simple way. Spinning charges in heuristic quantum field theory all radiate and exchange energy as virtual photons, which get red-shifted when travelling large distances in the universe, due to the big bang. As a result, the exchange of energy between nearby similar charges, where the expansion of the universe does not occur between the charges, is strong and they recoil apart (repulsion), like two people accelerating in opposite directions due to exchanging streams of lead bullets from machine-guns! (Thank God for machine guns and big bangs, or physics would seem daft.) As a virtual photon leaves any electron, the electron must recoil, like a rifle firing a bullet. According to the uncertainty principle, the range of the virtual photon is half its wavelength. Since the inverse-square law is simple geometric divergence (of photons over increasing areas) with no range limit (infinite range), the wavelength of the virtual photons in electromagnetism is infinite. Hence, they are a continuous energy flow, not oscillating. This is why you can’t hear steady electromagnetic forces on a radio: there is no oscillation to jiggle the electrons and induce a resonant current. (Planck’s formula E = hf implies that zero net energy is carried when f = 0, which is due to the Prevost exchange mechanism of 1792; this also applies to quantum energy exchange at constant temperatures, where cooling objects are in equilibrium, receiving as much as they radiate each second.) When we accelerate a charge, we then get a detectable photon with a definite frequency. The spin of a loop electron is a continuous, not a periodic, phenomenon, so it radiates energy with no frequency, just like a trapped electric TEM wave in a capacitor plate.

Electric attraction occurs between opposite charges, which stop virtual photons arriving from each other’s direction, and so are pushed together like gravity, but the force is multiplied up from gravity by a factor of about 10^40, due to the drunkard’s walk (statistical zig-zag path) of energy between similar charges in the universe. This ‘displacement current’ of electromagnetic energy can’t travel in a straight line, or it would statistically encounter similar numbers of equal and opposite charges, cancelling out the net electric field. Thus mathematical physics only permits a drunkard’s walk, in which the sum is gravity times the square root of the number of similar charges in the universe. A diagram here http://members.lycos.co.uk/nigelbryancook/Image11.jpg proves that the electric repulsion force is equal in magnitude to the attraction force for equal charges, but acts in opposite directions depending on whether the two charges are similar in sign or different:
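The square-root scaling invoked for the drunkard’s walk is the standard statistical result: the net sum of N random, equally-likely positive and negative unit contributions averages zero, but its root-mean-square value grows as N^(1/2). A minimal one-dimensional simulation (step count, number of walks and seed are illustrative choices, not from the text):

```python
import math
import random

def rms_displacement(steps, walks, seed=42):
    """Root-mean-square net displacement of many random +/-1 walks."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walks):
        displacement = sum(rng.choice((-1, 1)) for _ in range(steps))
        total += displacement ** 2
    return math.sqrt(total / walks)

# For N-step walks the RMS displacement approaches sqrt(N).
N = 10_000
rms = rms_displacement(N, walks=500)
print(rms / math.sqrt(N))  # close to 1
```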

Hence F(electromagnetism) = mMG.N^(1/2)/r^2 = q1.q2/(4.Pi.epsilon.r^2) (Coulomb’s law)

where G = (3/4)H^2/(Pi.rho.e^3) as proved above, and N is as a first approximation the mass of the universe (4.Pi.R^3.rho/3 = 4.Pi.(c/H)^3.rho/3) divided by the mass of a hydrogen atom. This assumes that the universe is hydrogen. In fact it is 90% hydrogen by atomic abundance as a whole, although less near stars (only 70% of the solar system is hydrogen, due to fusion of hydrogen into helium, etc.). Another problem with this way of calculating N is that we assume the fundamental charges to be electrons and protons, when in fact protons contain two up quarks (each +2/3) and one down quark (-1/3), so there are twice as many fundamental particles. However, the quarks remain close together inside a nucleon and behave for most electromagnetic purposes as a single fundamental charge. With these approximations, the formulae above yield a prediction of the strength factor epsilon (the permittivity) in Coulomb’s law of:

epsilon = qe^2.e^3.[rho/(12.Pi.me^2.mproton.H.c^3)]^(1/2) F/m, where e = 2.718…

Testing this with the PRL and other data used above (rho = 4.7 x 10^-28 kg/m^3 and H = 1.62 x 10^-18 s^-1 for 50 km.s^-1.Mpc^-1) gives epsilon = 7.4 x 10^-12 F/m, which is only 17% low compared to the measured value of 8.85419 x 10^-12 F/m. This relatively small error reflects the hydrogen assumption and the quark effect. Rearranging this formula to yield rho, and also rearranging G = (3/4)H^2/(Pi.rho.e^3) to yield rho, allows us to set both results for rho equal and thus to isolate a prediction for H, which can then be substituted into G = (3/4)H^2/(Pi.rho.e^3) to give a prediction for rho which is independent of H:
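The 7.4 x 10^-12 F/m figure can be reproduced directly from the formula; the constant values below are standard SI data, inserted here purely for the check:

```python
import math

# Standard SI constants
q_e = 1.602e-19        # electron charge, C
m_e = 9.109e-31        # electron mass, kg
m_proton = 1.6726e-27  # proton mass, kg
c = 2.998e8            # speed of light, m/s

# Values used in the text
rho = 4.7e-28  # density of the universe, kg/m^3
H = 1.62e-18   # Hubble parameter (50 km/s/Mpc), s^-1

# epsilon = qe^2 . e^3 . [rho/(12.Pi.me^2.mproton.H.c^3)]^(1/2)
epsilon = q_e ** 2 * math.e ** 3 * math.sqrt(
    rho / (12 * math.pi * m_e ** 2 * m_proton * H * c ** 3))
print(epsilon)  # ~7.4e-12 F/m, about 17% below the measured 8.854e-12
```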

Again, these predictions of the Hubble constant and the density of the universe from the force mechanisms assume that the universe is made of hydrogen, and so are first approximations. However they clearly show the power of this mechanism-based predictive method.

In the 1960s, while at Motorola, Catt (born 1935, B.Eng. 1959) charged up a 1 m length of coaxial cable to 10 volts and then discharged it, measuring with a Tektronix 661 sampling oscilloscope with 4S1 and 4S2 (100 picosecond) plug-ins, finding an output of a 2 m long, 5 v pulse. In any static charge, the energy is found to be moving at the speed of light for the adjacent insulator; when discharged, the 50% of the energy already moving towards the exit point leaves first, while the remaining 50% first goes in the opposite direction, reflects back off the far end, and then exits, creating a pulse of half the voltage and twice the duration needed for light to transit the length. Considering a capacitor reduced to simply two oppositely charged particles separated by a vacuum, e.g. an atom, we obtain the particle spin speed.
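The arithmetic of Catt’s discharge result is simple transmission-line bookkeeping, sketched below; the vacuum-dielectric propagation speed c is my simplifying assumption (a real coaxial dielectric is slower, which lengthens the pulse but leaves the halved voltage unchanged):

```python
C = 2.998e8  # propagation speed assumed: vacuum dielectric, m/s

def discharge_pulse(length_m, charge_voltage):
    """Output pulse from discharging a line charged to charge_voltage.

    Half the trapped energy current is already heading for the exit and
    leaves first; the other half must travel to the far end and reflect,
    so the output is half the voltage for twice the one-way transit time.
    """
    amplitude = charge_voltage / 2
    duration = 2 * length_m / C
    return amplitude, duration

# Catt's experiment: 1 m of cable charged to 10 volts.
amplitude, duration = discharge_pulse(1.0, 10.0)
print(amplitude)       # 5.0 volts
print(duration * 1e9)  # ~6.7 ns under the vacuum-dielectric assumption
```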

So the electromagnetic energy of charge is trapped at light speed in any ‘static’ charge situation. David Ash, BSc, and Peter Hewitt, MA, in their 1994 book reviewing electron spin ideas, The Vortex (Gateway, Bath, page 33), stated: ‘… E = mc2 shows that mass (m) is equivalent to energy (E). The vortex goes further: it shows the precise form of energy in matter. A particle of matter is a swirling ball of energy … Light is a different form of energy, but it is obvious from Einstein’s equation that matter and light share a common movement. In E = mc2, it is c, the speed of light, which related matter to energy. From this, we can draw a simple conclusion. It is obvious: the speed of movement in matter must be the speed of light.’ However, Ash and Hewitt don’t tackle the big issue: ‘It had been an audacious idea that particles as small as electrons could have spin and, indeed, quite a lot of it. … the ‘surface of the electron’ would have to move 137 times as fast as the speed of light. Nowadays such objections are simply ignored.’ – Professor Gerard ’t Hooft, In Search of the Ultimate Building Blocks, Cambridge University Press, 1997, p. 27. In addition, quantum mechanical spin, described by Lie algebra, is generally obscure, and different fundamental particles have different spins. Fermions have half-integer spin while bosons have integer spin. Neutrinos and antineutrinos do have a spin around their propagation axis, but the maths of spin for electrons and quarks is obscure. The twisted paper loop, the Mobius strip, illustrates how a particle can have different quantum mechanical spins in a causal way. If you half twist a strip of paper and then glue the ends, forming a loop, the result has only one surface: if you draw a continuous line on the looped paper, you find it covers both ‘sides’ of the paper! Hence, a Mobius strip must be rotated through two full turns to get back where it began! 
The same effect would occur in a spinning fundamental particle, where the trapped energy vector rotates while spinning.

Magnetism, in Maxwell’s mechanical theory of spinning virtual particles in space, may be explained akin to vortices, like whirlpools in water. If you have two whirlpools of similar spin (either both clockwise, or both anticlockwise), they attract. If the two whirlpools have opposite spins, they repel. In 1925, Samuel Goudsmit and George Uhlenbeck introduced the spin quantum number. But under Bohr’s and Heisenberg’s ‘Machian’ (‘non-observables like atoms and viruses are not real’) paranoid control, it was subsumed into Lie algebra as a mathematical trick, not a physical reality, despite Dirac’s endorsement of the ‘aether’ in predicting antimatter. Apart from the spin issue above, which we resolved by the rotation of the Heaviside-Poynting vector like a Mobius strip, there is also the issue that the equator of the classical spherical electron would revolve 137.03597 times faster than light. Taking Ivor Catt’s work, the electron is not a classical sphere at all, but a Heaviside-Poynting energy current trapped gravitationally into a loop, and it goes at light speed, which is the ‘spin’ speed.

If the electron moves at speed v as a whole in a direction orthogonal (perpendicular) to the plane of the spin, then the c speed of spin will be reduced according to Pythagoras: v^2 + x^2 = c^2, where x is the new spin speed. For v = 0 this gives x = c. What is interesting is that this model gives rise to the Lorentz-FitzGerald transformation naturally, because x = c(1 - v^2/c^2)^(1/2). Since all time is defined by motion, this (1 - v^2/c^2)^(1/2) factor of reduction of fundamental particle spin speed is therefore the time-dilation factor for the electron when moving at speed v. So there is no metaphysics in such ‘time travel’! Mass increase occurs due to the snowplough effect of the fabric of spacetime ahead of the particle, since it doesn’t have time to flow out of the way when the speed is great.
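The Pythagorean picture can be tabulated in a couple of lines, showing that x/c from v^2 + x^2 = c^2 is identical to the (1 - v^2/c^2)^(1/2) time-dilation factor:

```python
import math

def spin_speed_fraction(v_over_c):
    """x/c from v^2 + x^2 = c^2: the residual spin speed of a particle
    moving at speed v, equal to the Lorentz-FitzGerald factor
    (1 - v^2/c^2)^(1/2)."""
    return math.sqrt(1 - v_over_c ** 2)

print(spin_speed_fraction(0.0))   # 1.0: full spin speed c at rest
print(spin_speed_fraction(0.6))   # ~0.8: the classic 3-4-5 case
print(spin_speed_fraction(0.99))  # ~0.14: spin (and time) greatly slowed
```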

The light photon has a spin angular momentum of cmr, where the effective mass m is of course the energy equivalent, m = E/c^2 (from E = mc^2). Using Planck’s E = hf = hc/λ, where f is frequency and λ is wavelength (λ = 2πr), we find that the spin angular momentum is cmr = h/(2π), which is well verified experimentally. Since the unit of atomic angular momentum is h/(2π), we find the light boson has a spin of 1 unit, i.e. it is a spin-1 boson, obeying Bose-Einstein statistics. The electron, however, has only half this amount of spin, so it is like half a photon (the negative electric field oscillation of a 1.022 MeV gamma ray, to be precise). The electron is called a fermion as it obeys Fermi-Dirac statistics, which applies to half-integer spins. (The spins of two fermions can, of course, under some special conditions ‘add up’ to behave as a boson, hence the ‘Bose-Einstein condensate’ at very low temperatures.)
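The cancellation that gives cmr = h/(2π) for any wavelength can be checked directly (a minimal sketch; the function name is my own):

```python
import math

h = 6.62607015e-34  # Planck constant, J s
c = 2.99792458e8    # speed of light, m/s
hbar = h / (2 * math.pi)

def photon_spin_angular_momentum(wavelength):
    """cmr with m = E/c^2 = h/(wavelength*c) and r = wavelength/(2*pi),
    following the argument above; the wavelength cancels out."""
    m = h / (wavelength * c)        # effective mass, from E = mc^2 and E = hc/wavelength
    r = wavelength / (2 * math.pi)  # loop radius, from wavelength = 2*pi*r
    return c * m * r

# The result is h/(2*pi), i.e. one unit of hbar, for any wavelength:
assert math.isclose(photon_spin_angular_momentum(500e-9), hbar)
assert math.isclose(photon_spin_angular_momentum(0.1), hbar)
```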

The only widely known attempt to introduce some kind of causal fluid dynamics into quantum mechanics was by Professor David Bohm and Professor J. P. Vigier in their paper ‘Model of the Causal Interpretation of Quantum Theory in Terms of a Fluid with Irregular Fluctuation’ (Physical Review, v 96, 1954, p 208). This paper showed that the Schroedinger equation of quantum mechanics arises as a statistical description of the effects of Brownian motion impacts on a classically moving particle. However, the whole Bohm approach is wrong in detail, as is the attempt of de Broglie (his ‘non-linear wave mechanics’) to guess a classical potential that mimics quantum mechanics on the small scale and deterministic classical mechanics at the other size regime. The whole error here is due to the Poincaré chaos introduced by the three-body problem, which destroys determinism (but not causality) in classical, Newtonian physics:

‘… the ‘inexorable laws of physics’ … were never really there … Newton could not predict the behaviour of three balls … In retrospect we can see that the determinism of pre-quantum physics kept itself from ideological bankruptcy only by keeping the three balls of the pawnbroker apart.’ – Tim Poston and Ian Stewart, Analog, November 1981.

So it is not quantum physics that is the oddity, but actually classical physics. The normal teaching of Newtonian physics at low levels falsely claims that it allows the positions of the planets to be exactly calculated (determinism) when it does not. Newton’s laws do not contain any exact solution for more than two bodies, and there are more than two bodies in our solar system. So the problem to address is the error in classical, Newtonian physics, which explains why quantum mechanics is the way it is. Bohm’s approach was to try to obtain a classical model of quantum mechanics, which is the wrong approach, since classical physics is the fiddle. What you first have to admit is that Newton only dealt with two bodies, so his laws simply don’t apply to reality.

Henri Poincaré’s work shows that in any atom, you will have chaos whenever you observe it, even in the Newtonian mechanics framework. The simplest atom is hydrogen, with an electron going around a proton. As soon as you try to observe it, you must introduce another particle like a photon or electron, which gives rise to a 3-body situation! Therefore the chaotic, statistical behaviour of the situation gives rise to the statistical Schroedinger wave equation of the atom without any need to introduce explanations based on ‘hidden variables’. The only mediation is the force gauge boson, which is well known in quantum field theory, and is not exactly a ‘hidden variable’ of the sort Bohm looked for. Newton’s error was restricting his theory to the oversimplified case of only two bodies, when in fact this is a bit like Euclidean geometry, missing a vital ingredient. (Sometimes you do really have to deepen the foundations to build a taller structure.)

In 1890, Poincaré published a 270-page book, On the Problem of Three Bodies and the Equations of Dynamics. He showed that two bodies of similar mass have predictable, deterministic orbital motion because their orbits trace out closed, repeating loops in space. But he found that three bodies of similar mass in orbit trace out irregular, continuously changing unclosed loops and tangles throughout a volume of space, not merely in the flat plane they began in. The average radius of a chaotic orbit is equal to the classical (deterministic) radius, and the probability of finding the particle beyond the average radius diminishes, so giving the basis of the Schroedinger model, where the probability of finding the electron peaks at the classical radius and diminishes gradually elsewhere. Computer programs approximate chaotic motion roughly by breaking up a three-body problem, ABC, into the pairs AB, AC, and BC, and then cyclically calculating the motions of each pair of bodies for a brief period of time while ignoring the other body for that period. This is not exact, but it is a useful approximation for understanding how chaos occurs and what statistical variations are possible over a period of time. It disproves determinism! Because most of the physicists working in quantum mechanics have not studied the mathematical application of chaos to classical atomic electrodynamics, they have no idea that Newtonian physics is crackpot off the billiard table, and can’t describe the solar system in the way it claims. The ‘contradiction’ usually presented as existing between classical and quantum physics is therefore not a real contradiction, but is down to the falsehood that classical physics is supposed to be deterministic, when it is not.
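The pairwise-stepping scheme just described can be sketched in a few lines. This is my own toy construction, not code from any source cited here; the units, masses and starting conditions are arbitrary illustrations:

```python
# Toy pairwise approximation to the three-body problem ABC: the mutual
# attraction of each pair (AB, AC, BC) is applied in turn, ignoring the
# third body for that sub-step, then every body drifts along its velocity.
import math

G = 1.0  # gravitational constant in arbitrary illustrative units

def kick(a, b, dt):
    """Velocity change of bodies a and b from their mutual Newtonian
    attraction alone, with the third body ignored."""
    dx = b["pos"][0] - a["pos"][0]
    dy = b["pos"][1] - a["pos"][1]
    r3 = (dx * dx + dy * dy) ** 1.5
    a["vel"][0] += G * b["m"] * dx / r3 * dt
    a["vel"][1] += G * b["m"] * dy / r3 * dt
    b["vel"][0] -= G * a["m"] * dx / r3 * dt
    b["vel"][1] -= G * a["m"] * dy / r3 * dt

def step(bodies, dt):
    """One cycle: pair AB, then AC, then BC, then move all bodies."""
    for i, j in ((0, 1), (0, 2), (1, 2)):
        kick(bodies[i], bodies[j], dt)
    for body in bodies:
        body["pos"][0] += body["vel"][0] * dt
        body["pos"][1] += body["vel"][1] * dt

bodies = [
    {"m": 1.0, "pos": [0.0, 0.0], "vel": [0.0, -0.5]},
    {"m": 1.0, "pos": [1.0, 0.0], "vel": [0.0, 0.5]},
    {"m": 1.0, "pos": [0.5, 1.0], "vel": [0.3, 0.0]},
]
for _ in range(1000):
    step(bodies, 1e-3)
assert all(math.isfinite(v) for b in bodies for v in b["pos"])
```

Such a scheme is only an approximation to the full simultaneous interaction, which is exactly the point the text makes: the trajectories it produces depend sensitively on the step size and the starting conditions.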

Feynman explains in his 1985 book QED that ‘When a photon comes down, it interacts with electrons throughout the glass, not just on the surface. The photon and electrons do some kind of dance, the net result of which is the same as if the photon hit only the surface.’ Feynman in the same book concedes that his path-integrals approach to quantum mechanics explains the chaos of the atomic electron as being simply a Bohm-type interference phenomenon: ‘when the space through which a photon moves becomes too small (such as the tiny holes in the screen) … we discover that … there are interferences created by the two holes, and so on. The same situation exists with electrons: when seen on a large scale, they travel like particles, on definite paths. But on a small scale, such as inside an atom, the space is so small that … interference becomes very important.’ Thus Feynman suggests that a single hydrogen atom (one electron orbiting a proton, which can never be seen without an additional particle as part of the detection process) would behave classically, and it is the presence of a third particle (say in the measuring process) which interrupts the electron orbit by interference, creating the 3+ body chaos of the Schroedinger wave electron orbital.

Religious Creed or ‘Confession of Faith’ of a Mainstream Crackpot

I believe in one way forward, string theory.

In 11 dimensions, M-theory was made.

All equations bright and beautiful,

String theory on the Planck scale made them all.

All epicyclical cosmic matter of the darkness,

All ad hoc dark energies of the night,

Are entirely right.

Amen.

‘Teachers of history, philosophy, and sociology of science … are up in arms over an attack by two Imperial College physicists … who charge that the plight of … science stems from wrong-headed theories of knowledge. … Scholars who hold that facts are theory-laden, and that experiments do not give a clear fix on reality, are denounced. … Staff on Nature, which published a cut-down version of the paper after the authors’ lengthy attempts to find an outlet for their views, say they cannot recall such a response from readers. ‘It really touched a nerve,’ said one. There was unhappiness that Nature lent its reputation to the piece.’ – Jon Thurney, Times Higher Education Supplement, 8 Jan 88, p2. [This refers to the paper by T. Theocharis and M. Psimopoulos, ‘Where Science Has Gone Wrong’, Nature, v329, p595, 1987.]

Consider the wonderful exchange between science writer John Horgan (who I’m starting to admire for common sense views on modern physics, although I didn’t much like his attack a while back on a U.S. weapons scientist on the basis that the scientist had unorthodox ideas in other, unrelated areas).

John Horgan (Author of ‘The End of Science’), In Defence of Common Sense: ‘… I feel compelled to deplore one aspect of Einstein’s legacy: the widespread belief that science and common sense are incompatible … quantum mechanics and relativity shattered our common-sense notions about how the world works. [No, no, no: Einstein himself pointed out in a lecture at Leyden University in 1920 that according to general relativity, gravity is caused by ether, the continuum or spacetime fabric (and he dumped special relativity as just a mathematically ‘restricted’ approximation in 1916 when he developed general relativity); he also in 1935 published a paradox in the crazy ‘Copenhagen Interpretation’ of quantum mechanics!]… As a result, many scientists came to see common sense as an impediment to progress … Einstein’s intellectual heirs have long been obsessed [‘interested’ is a more polite term than ‘obsessed’, Horgan!] with finding a single ‘unified’ theory that can embrace quantum mechanics, which accounts for electromagnetism and the nuclear forces [quantum field theory], and general relativity, which describes gravity. … The strings … are too small to be discerned by any buildable instrument, and the parallel universes are too distant. Common sense thus persuades me that these avenues of speculation will turn out to be dead ends. [Right result, but inadequate reasoning!] … ultimately, scientific truth must be established on empirical grounds. [Spot on!]… Einstein … could never fully accept the bizarre implication of quantum mechanics that at small scales reality dissolves into a cloud of probabilities. 
[ No, no, no: Einstein supported the pilot-wave theory of de Broglie, in which particles cause waves in the surrounding ‘ether’ as they move, causing the wave-type diffraction and uncertainty effects, and Einstein also tried to get Dr David Bohm, the hidden variables theorist, to be his assistant at the Institute of Advanced Study in Princeton, but was blocked by the director Dr Robert Oppenheimer (who was better at blowing people up than being constructive, as Dr Tony Smith has pointed out on Not Even Wrong).]’

Dr Leonard Susskind (Felix Bloch Professor of Theoretical Physics, Stanford University), In Defence of Uncommon Sense: ‘John Horgan … has now come forth to tell us that the world’s leading physicists and cognitive scientists are wasting their time. … Every week I get several angry email messages … [I wonder why?] … as Horgan tells us, it’s a dangerous sea where one can easily lose ones way and go right off the deep end. [Easily??!!??] But great scientists are, by nature, explorers. To tell them to stay within the boundaries of common sense may be like telling Columbus that if he goes more than fifty miles from shore he’ll get hopelessly lost. [So Dr Susskind is like Columbus, which probably means that he will be convinced that he has found Western India, when really he is on a different continent, America.] Besides, good old common sense tells us [Susskind] that the Earth is flat. …’

My quotation of Dr Susskind above omits a lot of more sensible, yet irrelevant, comments. However, like other people who complain that ‘good old common sense tells us that the Earth is flat’, he is not helping physics by defending science fiction (string theory). If I look at pictures of the Earth taken from the Moon, common sense tells me the Earth is approximately spherical. Newton stated it is nearer an oblate spheroid, but others (with more accurate data) show it is ‘pear-shaped’ (which seems ‘common sense’ to those of us with a sense of humour). The fact that there is a horizon in every direction, and that you see a greater distance when you get higher up, suggests that there is some kind of sloping off of the ground in every direction. It was the fact that ships disappeared gradually over the visible horizon, but later returned, that suggested a difficulty in the flat-earth theory. Dr Susskind would have done better by saying that common sense tells us that the sun orbits the earth, but possibly he feared absolute motion, preferring the crazy idea that Copernicus didn’t work on ‘the solar system’ but had instead discredited absolute motion, paving the way for relativism. I suggest to Dr Susskind and Mr Horgan that they debate ‘causality’ instead of ‘common sense’, and do it in a real forum with plenty of custard pies available for observers to use against the loser. The basic problem is that Dr Susskind is so busy defending prejudice against far-out ideas that he forgets he and other string theorists are creating just a little indirect prejudice against more classical or testable new ideas which might be able to sort out problems in physics.

Dr Peter Woit is a Columbia University mathematician who runs the weblog ‘Not Even Wrong’ about string theory – the physicist Pauli deemed speculative belief systems which, like strings, predict nothing and cannot be tested or checked ‘not even wrong’. He has written a book that will sort out the nonsense in physics.

Update: Lee Smolin has now kindly acknowledged the possibility of using this type of argument (that the quantum field theory gauge boson exchange process predicts magnetic moments and the Lamb shift, so an attempt to unify the spacetime fabric with Feynman path integrals is an empirically defensible physical reality, unlike ‘string theory’ speculation). This applies for some kind of spin foam vacuum in loop quantum gravity, as mentioned on Peter Woit’s blog. Smolin is committed to the very difficult mathematical approach, but was decent enough to say:

Some kind of loop quantum gravity is going to be the right theory, since it is a spin foam vacuum. People at present are obsessed with the particles that string theory deals with, to the exclusion of the force mediating vacuum. Once prejudices are overcome, proper funding of LQG should produce results.

... Thanks also to Nigel for those supporting comments. Of course more support will lead to more results, but I would stress that I don’t care nearly as much that LQG gets more support as that young people are rewarded for taking the risk to develop new ideas and proposals. To go from a situation where a young person’s career was tied to string theory to one in which it was tied to LQG would not be good enough. Instead, what is needed overall is that support for young scientists is not tied to their loyalty to particular research programs set out by we older people decades ago, but rather is on the basis only of the quality of their own ideas and work as well as their intellectual independence. If young people were in a situation where they knew they were to be supported based on their ability to invent and develop new ideas, and were discounted for working on older ideas, then they would themselves choose the most promising ideas and directions. I suspect that science has slowed down these last three decades partly as a result of a reduced level of intellectual and creative independence available to young people.

Thanks,
Lee

Sadly then, Dr Lubos Motl, string ‘theorist’ and assistant professor at Harvard, tried to ridicule this approach with the false claim that Dirac’s quantum field theory disproves a spacetime fabric, since it is allegedly a unification of special relativity (which denies a spacetime fabric) and quantum mechanics. Motl tried to ridicule me with this, although I had already explained the reason to him!

"An important part of all totalitarian systems is an efficient propaganda machine. ... to protect the 'official opinion' as the only opinion that one is effectively allowed to have." - STRING THEORIST Dr Lubos Motl: http://motls.blogspot.com/2006/01/power-of-propaganda.html

Here is a summary of the reasons why Dirac’s unification involves only the maths of special relativity, not the principle of no-fabric. In fact Dirac was an electrical engineer before becoming a theoretical physicist, and later wrote:

‘… with the new theory of electrodynamics [vacuum filled with virtual particles] we are rather forced to have an aether.’ – Paul A. M. Dirac, ‘Is There an Aether?,’ Nature, v168, 1951, p906. (If you have a kid playing with magnets, how do you explain the pull and push forces felt through space? As ‘magic’?) See also Dirac’s paper in Proc. Roy. Soc. v.A209, 1951, p.291.

Thankfully, Peter Woit has retained so far a comment on the discussion post for loop quantum gravity which points out that Motl is wrong:

Lumos has a long list of publications about speculation on unobservables. So I guess he’s well qualified to make vacuous assertions. What I’d like to see debated is the fact that the spin foam vacuum is modelling physical processes KNOWN to exist, as even the string theorist authors of

http://arxiv.org/abs/hep-th/0601129 admit, p14:
‘… it is thus perhaps best to view spin foam models … as a novel way of defining a (regularised) path integral in quantum gravity. Even without a clear-cut link to the canonical spin network quantisation programme, it is conceivable that spin foam models can be constructed which possess a proper semi-classical limit in which the relation to classical gravitational physics becomes clear. For this reason, it has even been suggested that spin foam models may provide a possible ‘way out’ if the difficulties with the conventional Hamiltonian approach should really prove insurmountable.’

Strangely, the ‘critics’ are ignoring the consensus on where LQG is a useful approach, and just trying to ridicule it. In a recent post on his blog, for example, Motl states that special relativity should come from LQG. Surely Motl knows that GR deals better with the situation than SR, which is a restricted theory that is not even able to deal with the spacetime fabric (SR implicitly assumes NO spacetime fabric curvature, to avoid acceleration!).

When asked, Motl responds by saying Dirac’s equation in QFT is a unification of SR and QM. What Motl doesn’t grasp is that the ‘SR’ EQUATIONS are the same in GR as in SR, but the background is totally different:

‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. Along this road we arrive at an extension of the postulate of relativity… The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant). …’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916.

What a pity Motl can’t understand the distinction and its implications.

Ordinary QM makes no attempt to deal with spacetime or the spacetime fabric, but QFT does.

Dirac's equation is Schrodinger's time-dependent equation with a term for mass-energy added (the E=mc2 result comes from the Lorentz mass-increase formula, which predates Einstein's derivation of it from SR by a decade, so, contrary to Lubos Motl, Dirac is not tied to a no-ether SR).

It is interesting that Maxwell's special term, added to the "Maxwell equations" (Heaviside's equations) for Ampere's law, is ultimately describing the same thing as the time-dependent Schrodinger equation which is the basis of Dirac's equation too. The energy Schrodinger's equation describes is electromagnetic, as is that of Maxwell's equation.

They describe energy by the hamiltonian, while the field or the wave function varies with time. Maxwell's equation is stated as being for 'Displacement current' flowing from one conductor to another in a charging capacitor, across a vacuum dielectric. However, the real process is induction, electromagnetic energy flowing from one conductor to the other.

See http://electrogravity.blogspot.com/2006/01/solution-to-problem-with-maxwells.html for a basic situation Maxwell missed.

To get a detailed understanding of QM for spacetime, you need to stop thinking about abstract wavefunctions and field strengths, and rewrite the equations as energy exchange processes (for Feynman diagrams).

Because quantum field theory is more complete than QM, surely Feynman's sum over histories approach (path integrals), introduces spacetime properly into quantum mechanics? I'm sure the QFT equations will be impractical to use for QM, but the principle holds.

You claim that Dirac's theory unifies SR and QM, when in fact Dirac's equation (which is his theory) is an expansion of the time-dependent Schroedinger equation to include the mass-energy result which comes from electromagnetism (there are dozens of derivations of E=mc2, not merely SR). The time-dependent Schroedinger equation is similar to Maxwell's "displacement current", which actually doesn't describe real electric current but the energy flow in the vacuum when a capacitor or the like charges by induction.

Maxwell's theory of "displacement current" was a spin foam vacuum:

Maxwell’s 1873 Treatise on Electricity and Magnetism, Articles 822-3: ‘The ... action of magnetism on polarised light [discovered by Faraday not Maxwell] leads ... to the conclusion that in a medium ... is something belonging to the mathematical class as an angular velocity ... This ... cannot be that of any portion of the medium of sensible dimensions rotating as a whole. We must therefore conceive the rotation to be that of very small portions of the medium, each rotating on its own axis [spin] ... The displacements of the medium, during the propagation of light, will produce a disturbance of the vortices ... We shall therefore assume that the variation of vortices caused by the displacement of the medium is subject to the same conditions which Helmholtz, in his great memoir on Vortex-motion [of 1858; sadly Lord Kelvin in 1867 without a fig leaf of empirical evidence falsely applied this vortex theory to atoms in his paper ‘On Vortex Atoms’, Phil. Mag., v4, creating a mathematical cult of vortex atoms just like the mathematical cult of string theory now; it created a vast amount of prejudice against ‘mere’ experimental evidence of radioactivity and chemistry that Rutherford and Bohr fought], has shewn to regulate the variation of the vortices [spin] of a perfect fluid.’

Lorentz invariance is, as the name suggests, Lorentz invariance, not SR invariance.

Lorentz invariance is aetherial. Even if you grasp this and start calling the contraction a metaphysical effect unrelated to physical dynamics of the quantum vacuum, you don't get anywhere.

Feynman's innovation was introducing spacetime pictures, because you need to see what you are doing clearly when using mathematics. The increase in the magnetic moment of an electron that Feynman, Schwinger and Tomonaga came up with is 1 + 1/(2.Pi.137), where the first term is from Dirac's theory and the second is the increase due to the first Feynman coupling correction to the vacuum.

The 1/(2.Pi.137) is from a renormalised or cut-off QFT integral, but the heuristic meaning is clear. The core of the electron has a charge 137 times the observed charge, and this is shielded by the polarised vacuum as Koltick's 1997 PRL published experiments confirm (the 1/137 factor changes to 1/128.5 as collision energy goes to 100 GeV or so; at unification energy it would be 1/1 corresponding to completely breaking through the veil of polarised vacuum).

Renormalisation is limiting the interaction physically to 1 vacuum particle rather than an infinite number, and that particle is outside the veil, so the association is 137 times weaker at low energies, and the geometry causes a further reduction by 2Pi (because the exposed length of a spinning loop particle seen as a circle is 2Pi times the side-on or diameter size). So that is physically what is behind adding 1/(2Pi.137), or 0.00116, to the core's magnetic moment (which is unshielded by the polarised veil, because that only attenuates electric field).
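The 1/(2.Pi.137) figure quoted above is quick to verify numerically (a minimal sketch; the variable names are mine):

```python
import math

# First vacuum (one-loop) correction to the electron's magnetic moment,
# the 1/(2.Pi.137) term discussed above, with the measured low-energy
# value ~1/137.036 for the coupling:
alpha = 1 / 137.036
correction = alpha / (2 * math.pi)
moment = 1 + correction  # magnetic moment in units of the Dirac value

assert abs(correction - 0.00116) < 1e-5
assert abs(moment - 1.00116) < 1e-5
```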

In addition, the same mechanism explains the differing masses for different fundamental particles. If the Standard Model mass causing particle (Higgs field particle) is inside the polarised veil, it experiences the core strength, 137 times Coulomb, and is strongly associated with the particle core, increasing the mass.

But if the Higgs field particle is outside the polarised veil, it is subject to the shielded strength, 137 times less than the core charge, so the coupling is weaker and the effective miring mass by the Higgs field is 137 times weaker.

This idea predicts that a particle core with n fundamental particles (n=1 for leptons, n = 2 for mesons, and obviously n=3 for baryons) coupling to N virtual vacuum particles (N is an integer) will have an associative inertial mass of Higgs bosons of:

(0.511 MeV) × (137/2) × n(N + 1) = 35n(N + 1) MeV,

where 0.511 MeV is the electron mass-energy. Thus we get everything from this one mass plus the integers 1, 2, 3, etc., with a mechanism.
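The formula above is easy to evaluate for sample integers. The particular (n, N) pairs below are my own illustrative choices, not a table from the text:

```python
# Evaluate the text's mass formula m = (0.511 MeV)(137/2) n (N + 1),
# which works out to approximately 35 n (N + 1) MeV.
M_ELECTRON = 0.511  # electron mass-energy, MeV

def predicted_mass(n, N):
    """Mass in MeV for n core particles coupled to N vacuum particles."""
    return M_ELECTRON * (137 / 2) * n * (N + 1)

for n, N in [(1, 0), (1, 2), (2, 6), (3, 8)]:
    print(f"n={n}, N={N}: {predicted_mass(n, N):.1f} MeV")
```

Note that 0.511 × 137/2 = 35.0 MeV, which is where the factor 35 in the short form comes from.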

Many of these ideas are equally applicable to string theory or LQG, since they're dealing with practical problems.

Tell me if you would have dismissed Feynman's diagrams in 1948 as crackpot, like Oppenheimer did at first.

Best wishes,

Nigel
Nigel Cook | Homepage | 02.06.06 - 3:25 am |

Claims that we have a good idea that many combinations of ‘laws’ won’t allow life: these are disproved by cosmologists claiming that the successful calculation of fusion of light elements in the big bang demonstrates that G didn’t vary by more than 20% from 3 minutes to today. Actually, it could have varied enormously without affecting fusion in stars or indeed the big bang, provided the ratio of Coulomb to gravity remained constant. If G was a million times weaker at 3 minutes, compression and the fusion rate would be less. But if the ratio of gravity to Coulomb was constant, the weaker Coulomb repulsion between nuclei would make fusion more likely, offsetting the reduced compression.

Conclusion: the widely held claim that the anthropic principle can dismiss many different combinations of laws has been abused. The claim that the anthropic principle is useful is false; it argues from plain ignorance, which is the kind of philosophising that supposedly went out of fashion with the scientific revolution.

Spacetime says distance is light speed multiplied by the time in the past at which the event occurred. So the recession of galaxies varies with time, in the framework of spacetime that we can actually see and measure. A recession speed varying linearly with time implies an acceleration, a = Hc, of order 10^-10 ms^-2; hence by Newton’s second law the outward force of the big bang is the mass of the universe multiplied by this acceleration. By the 3rd law of motion, you then get an equal inward reaction force, the gauge boson exchange force, which causes the general relativity contraction and also gravity.
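The a = Hc figure can be checked to order of magnitude. The Hubble constant value below is my assumed input (roughly 70 km/s/Mpc), not a figure from the text:

```python
# Order-of-magnitude check of the acceleration a = Hc.
H = 70e3 / 3.086e22  # Hubble parameter in SI units, s^-1 (~70 km/s/Mpc)
c = 3.0e8            # speed of light, m/s

a = H * c  # acceleration implied by recession speed rising linearly with time
assert 1e-10 < a < 1e-9  # of order 10^-10 m/s^2, as stated above
```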

Kepler was an astrologer; since it's just calculating where the planets are in the sky it doesn't matter how you do the calculations. Nobody says that because Kepler speculated the planets orbited the sun due to magnetism, or because he was an astrologer, he should be banned from the history of science for what he got right (the planetary laws the alchemist Newton used).

Another guy of like mind was Maxwell, who had various elastic and mechanical aethers in his papers of 1862-5. However, his equations are compatible with the FitzGerald-Lorentz contraction of 1889-93, or the same formulae from 'special'/restricted relativity.

I read a 1933 book, 'My Philosophy' by Sir Oliver Lodge, a physicist who pioneered radio. It's a mix of physics and pseudoscience, just like the modern stringy stuff. Lodge was clinging on to Maxwell's ether, and trying to popularise it with telepathy, etc. Very sad.

Then he quoted a chunk of Einstein's 1920 lecture 'Ether and Relativity' which pointed out that GR deals with absolute motion like acceleration and is compatible with an ether that behaves like a perfect flowing fluid.... which is sensible in view of quantum mechanical vacuum.

If the vacuum is a fluid of virtual particles, a particle of matter moving in it is going to be compressed slightly in the direction of motion, gain inertial mass from the fluid (like a boat moving in water), and create waves. Wonder why the film-makers prefer metaphysical entanglement, when aetherial entanglement is for sale dirt cheap?

In 1995, physicist Professor Paul Davies - who won the Templeton Prize for religion (I think it was $1,000,000), wrote on pp54-57 of his book About Time:

‘Whenever I read dissenting views of time, I cannot help thinking of Herbert Dingle... who wrote ... Relativity for All, published in 1922. He became Professor ... at University College London... In his later years, Dingle began seriously to doubt Einstein's concept ... Dingle ... wrote papers for journals pointing out Einstein’s errors and had them rejected ... In October 1971, J.C. Hafele [used atomic clocks to defend Einstein] ... You can't get much closer to Dingle's ‘everyday’ language than that.’

G. Builder's 1958 article 'Ether and Relativity' (Australian Journal of Physics, v11, 1958, p279) states:

‘... we conclude that the relative retardation of clocks... does indeed compel us to recognise the CAUSAL SIGNIFICANCE OF ABSOLUTE velocities.’

Einstein himself slipped up in one paper when he wrote that a clock at the earth’s equator, because of the earth’s spin, runs more slowly than one at the pole. One argument, see

http://www.physicstoday.org/vol-58/iss-9/p12.html, is that the reason why special relativity fails is that gravitational ‘blueshift’ given by general relativity cancels out the time dilation: ‘The gravitational blueshift of a clock on the equator precisely cancels the time dilation associated with its motion.’

It is true that general relativity is involved here, see the proof below of the general relativity gravity effect from the Lorentz transformation using Einstein’s equivalence principle. The problem is that there are absolute velocities, and special relativity by itself gives the wrong answers! You need general relativity, which introduces absolute motion, because it deals with acceleration like rotation, and observers can detect rotation as a net force, if in a sealed box that is rotating. It is not subject to the principle of relativity, which does not apply to accelerations. Other Einstein innovations were also confused:

‘You sometimes speak of gravity as essential & inherent to matter; pray do not ascribe that notion to me, for ye cause of gravity is what I do not pretend to know, & therefore would take more time to consider of it… That gravity should be innate inherent & essential to matter so yt one body may act upon another at a distance through a vacuum wthout the mediation of any thing else by & through wch their action or force may be conveyed from one to another is to me so great an absurdity ...’ – Sir Isaac Newton, Letter to Richard Bentley, 1693. ‘But if, meanwhile, someone explains gravity along with all its laws by the action of some subtle matter, and shows that the motion of planets and comets will not be disturbed by this matter, I shall be far from objecting.’ – Sir Isaac Newton, Letter to

The two curl ‘Maxwell’ (Heaviside) equations are unified by the Heaviside vector relation E = cB, where E is electric field strength and B is magnetic field strength, and the three vectors E, c, and B are mutually orthogonal, so the curl operator (the difference between field gradients in perpendicular directions) can be applied directly to E = cB:

curl E = c curl B

curl B = (1/c) curl E

Now, because any field gradient or difference between gradients (curl) is related to the rate of change of the field by the speed of motion of the field (eg, dB/dt = -c dB/dr, where t is time and r is distance), we can replace a curl by the product of the reciprocal of -c and the rate of field change:
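The substitution just described can be sanity-checked numerically for the one configuration where E = cB holds exactly: a plane wave travelling at speed c. This is my own sketch (sample values and function names are illustrative); it verifies by finite differences that such a wave satisfies the two curl Maxwell equations obtained by trading spatial gradients for time derivatives:

```python
import math

c = 3.0e8  # speed of light, m/s

def E_x(z, t):
    """Electric field of a plane wave moving at c along z (illustrative shape)."""
    return math.sin(z - c * t)

def B_y(z, t):
    """Accompanying magnetic field, from the Heaviside relation E = cB."""
    return E_x(z, t) / c

def ddz(f, z, t, h=1e-6):
    """Central-difference spatial derivative."""
    return (f(z + h, t) - f(z - h, t)) / (2 * h)

def ddt(f, z, t, h=1e-15):
    """Central-difference time derivative."""
    return (f(z, t + h) - f(z, t - h)) / (2 * h)

z0, t0 = 0.3, 1.7e-9  # arbitrary sample point
# Faraday law: dE_x/dz = -dB_y/dt
assert abs(ddz(E_x, z0, t0) + ddt(B_y, z0, t0)) < 1e-3
# Vacuum Ampere law: -dB_y/dz = (1/c^2) dE_x/dt
assert abs(-ddz(B_y, z0, t0) - ddt(E_x, z0, t0) / c ** 2) < 1e-12
```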

‘... we have made only one step in the theory of the action of the medium. We have supposed it to be in a state of stress, but we have not in any way accounted for this stress, or explained how it is maintained...’

In Article 111, he admits further confusion and ignorance:

‘I have not been able to make the next step, namely, to account by mechanical considerations for these stresses in the dielectric [spacetime fabric]... When induction is transmitted through a dielectric, there is in the first place a displacement of electricity in the direction of the induction...’

First, Maxwell admits that he does not understand ‘displacement current’ physically. Feynman makes a related point in his lectures on light and electromagnetism, noting that Maxwell’s idler wheels and gear cogs were eventually replaced by the bare equations. So let us examine Maxwell’s equations.

One source is A.F. Chalmers’ article, ‘Maxwell and the Displacement Current’ (Physics Education, vol. 10, 1975, pp. 45-9). Chalmers invokes Orwell’s novel Nineteen Eighty-Four to illustrate how the official story was rewritten:

‘… history was constantly rewritten in such a way that it invariably appeared consistent with the reigning ideology.’

Maxwell deliberately adjusted his original calculation in order to obtain the anticipated value for the speed of light, as Part 3 of his paper, On Physical Lines of Force (January 1862), shows. As Chalmers explains:

‘Maxwell’s derivation contains an error, due to a faulty application of elasticity theory. If this error is corrected, we find that Maxwell’s model in fact yields a velocity of propagation in the electromagnetic medium which is a factor of root 2 smaller than the velocity of light.’

It took Maxwell three years to force-fit his ‘displacement current’ theory into the form which gives the already-known speed of light without the 41% error. Chalmers noted: ‘the change was not explicitly acknowledged by Maxwell.’ Weber, not Maxwell, was the first to notice that, by dimensional analysis (which Maxwell popularised), 1/(square root of the product of the magnetic permeability and the electric permittivity) = the speed of light.
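The dimensional-analysis result attributed to Weber is easy to verify with modern SI constants; a quick check (the constant values below are standard CODATA figures, not taken from the original text):

```python
import math

# Check of the relation 1/sqrt(mu0 * eps0) = speed of light,
# using standard SI values for the free-space constants.
mu0 = 4 * math.pi * 1e-7         # magnetic permeability of free space, H/m
eps0 = 8.8541878128e-12          # electric permittivity of free space, F/m

c = 1.0 / math.sqrt(mu0 * eps0)
print(c)  # ~2.998e8 m/s
```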

Maxwell’s innovation was: total current = electric current + displacement current. But he did not understand what the terms meant physically! Atoms are really capacitors themselves, not solids as Maxwell thought in 1873 (X-rays and radioactivity only confirmed the nuclear atom around 1911). So the light-speed mechanism of electricity is associated with ‘displacement current’, and the electric current results from the electric field induced by the ‘displacement current’.

In March 2005, Electronics World carried a long letter from me pointing out that the error in the Heaviside/Catt model of electricity is its neglect of the energy flowing in the direction of the displacement current. We know energy flows between the conductors from Feynman’s correct heuristic interpretation of Dirac’s quantum electrodynamics: gauge bosons, photons, are exchanged to cause forces, and we know that energy flows ‘through’ a charging/discharging capacitor, appearing on the opposite side of the circuit. Catt and Heaviside proclaim that nothing (including energy) flows from one plate to the other, which is false, as is their neglect of the electrons in the conductors.

A radio transmitter aerial and receiver aerial form a capacitor arrangement:

__________||__________

Catt is right at http://www.ivorcatt.com/2604.htm to point out that Maxwell ignored the flow of light-speed energy along the plate connected to a charge. He is wrong to ignore my argument, based on Feynman’s heuristic quantum mechanics and on my fairly deep mechanistic knowledge of radio, gained from experimenting with it myself rather than from reading equations and theories by armchair experts (I read the books after experimenting, and found a lot of ignorance in them).

Transmitter and receiver aerial in a more usual situation (the receiver picking up a much weaker field than the one transmitted): |….. |

Hence, a radio link is a capacitor, with radio waves as the ‘displacement current’. This is the simplest theory which fits the experimental facts of radio! It was Prevost in 1792 who discovered that if a cooling object is also receiving energy, in equilibrium you do not measure a temperature fall.

Charges continuously radiate displacement-current energy and receive it in return; there is equilibrium. Where equilibrium does not hold, forces result, potential energy changes, and so on. Displacement current as Maxwell formulated it only occurs while a capacitor is ‘charging/discharging’, and in any case it is not a flow of real charge, only of energy. The electromagnetic field of displacement current is really energy, and this is what propagates through space, causing the long-range fundamental forces.

It is easier to write articles or books, or make wormhole movies, than to explain work tied to the facts! You do not see many popular books about the Standard Model or loop quantum gravity, and nothing on TV or in the cinema about them (unlike adventures in wormholes, parallel universes, and backward time travel). The myth is that any correct theory will either be built on string theory mathematics or will be obviously correct.

To return to the point: since Maxwell’s aether was swept away (rightly in many respects, as his details were bogus), nobody has tried to explain what his ‘displacement current’ energy flow is. It is energy flowing from one parallel conductor to another across a vacuum in a charging/discharging capacitor, just like radio waves. If so, then spinning (accelerating) charges are exchanging non-oscillatory ‘displacement current’ with one another all the time, as the gauge boson of electromagnetism.

4. Fact-based predictions and comparison with experimental observations

‘String/M-theory’ of mainstream physics is falsely labelled a theory because it has no dynamics and makes no testable predictions; it is abject speculation, unlike tested theories such as general relativity or the Standard Model, which predicts nuclear reaction rates and unifies the fundamental forces other than gravity. ‘String theory’ is more accurately called ‘STUMPED’: STringy, Untestable M-theory ‘Predictions’, Extra-Dimensional. Because these ‘string theorists’ suppressed the work below within seconds of it being posted to arXiv.org in 2002 (without even reading the abstract), we should perhaps politely call them by the acronym of ‘very important lofty experts’, or even that of ‘science changing university mavericks’. There are far worse names for these people.

HOW STRING THEORY SUPPRESSES REALITY USING PARANOIA ABOUT ‘CRACKPOT’ ALTERNATIVES TO MAINSTREAM

‘Fascism is not a doctrinal creed; it is a way of behaving towards your fellow man. What, then, are the tell-tale hallmarks of this horrible attitude? Paranoid control-freakery; an obsessional hatred of any criticism or contradiction; the lust to character-assassinate anyone even suspected of it; a compulsion to control or at least manipulate the media ... the majority of the rank and file prefer to face the wall while the jack-booted gentlemen ride by. ... But I do not believe the innate decency of the British people has gone. Asleep, sedated, conned, duped, gulled, deceived, but not abandoned.’ – Frederick Forsyth, Daily Express, 7 Oct. 05, p. 11.

‘The creative period passed away … The past became sacred, and all that it had produced, good and bad, was reverenced alike. This kind of idolatry invariably springs up in that interval of languor and reaction which succeeds an epoch of production. In the mind-history of every land there is a time when slavish imitation is inculcated as a duty, and novelty regarded as a crime… The result will easily be guessed. Egypt stood still… Conventionality was admired, then enforced. The development of the mind was arrested; it was forbidden to do any new thing.’ – W.W. Reade, The Martyrdom of Man, 1872, c1, War.

‘Whatever ceases to ascend, fails to preserve itself and enters upon its inevitable path of decay. It decays … by reason of the failure of the new forms to fertilise the perceptive achievements which constitute its past history.’ – Alfred North Whitehead, F.R.S., Sc.D., Religion in the Making, Cambridge University Press, 1927, p. 144.

‘What they now care about, as physicists, is (a) mastery of the mathematical formalism, i.e., of the instrument, and (b) its applications; and they care for nothing else.’ – Sir Karl R. Popper, Conjectures and Refutations, R.K.P., 1969, p100.

‘... the view of the status of quantum mechanics which Bohr and Heisenberg defended - was, quite simply, that quantum mechanics was the last, the final, the never-to-be-surpassed revolution in physics ... physics has reached the end of the road.’ – Sir Karl Popper, Quantum Theory and the Schism in Physics, Rowman and Littlefield, NJ, 1982, p6.

‘… the Heisenberg formulae can be most naturally interpreted as statistical scatter relations, as I proposed [in the 1934 book ‘The Logic of Scientific Discovery’]. … There is, therefore, no reason whatever to accept either Heisenberg’s or Bohr’s subjectivist interpretation …’ – Sir Karl R. Popper, Objective Knowledge, Oxford University Press, 1979, p. 303. (Note statistical scatter gives the energy form of Heisenberg’s equation, since the vacuum is full of gauge bosons carrying momentum like light, and exerting vast pressure; this gives the foam vacuum.)

The inverse-square law of gravity, which formerly had to be derived empirically by Newton’s method from Kepler’s observed laws of planetary motion (hence with no understanding of mechanism), is here obtained together with the correct universal gravitational constant G, accurate to within 1.65% for the data reported in Physical Review Letters, i.e., a prediction of 10 m/s² at the Earth’s surface compared to the measured 9.8 m/s². Newton never estimated this constant; it was later worked out by Laplace by fiddling the equation to fit observations, not by a proof based on the mechanism of gravitation. This is completely unique:

F = ¾mMH²/(π ρ e³ r²) ≈ 6.7 × 10⁻¹¹ mM/r² newtons,

where H is the Hubble parameter and ρ the density of the universe. Also predicted are the masses of the observable fundamental particles from the Higgs mechanism via the polarised vacuum (top section of webpage); quark masses are not ‘real’ in the sense that you can never isolate a quark to measure its mass (the energy needed to isolate one exceeds that needed to create new quarks).
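The arithmetic of the claimed formula can be sketched numerically. Both input values below are assumptions added here for illustration, not data from the text: a Hubble parameter of about 70 km/s/Mpc, and a mean density of about 9.2 × 10⁻²⁸ kg/m³ (roughly the standard critical density divided by the factor e³/2 discussed later in this section):

```python
import math

# Illustrative arithmetic for the claimed formula
#     G = (3/4) * H^2 / (pi * rho * e^3)
# using assumed inputs (see lead-in): H ~ 70 km/s/Mpc, rho ~ 9.2e-28 kg/m^3.

Mpc = 3.0857e22            # metres per megaparsec
H = 70e3 / Mpc             # Hubble parameter, s^-1
rho = 9.2e-28              # assumed mean density of the universe, kg/m^3

G = 0.75 * H**2 / (math.pi * rho * math.e**3)
print(G)  # ~6.6e-11, close to the measured 6.674e-11 m^3 kg^-1 s^-2
```

Note that the output is only as good as the assumed density; published density estimates vary over roughly an order of magnitude.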

This shielded force is equal to the shadow cast by an area equal to that of a black hole: a fundamental particle of mass M has a radius of 2GM/c² and a cross-sectional space-pressure shielding area of π(2GM/c²)². [This is obtained by setting equal the results of the two calculation methods: the first, simple method uses Newton’s 3rd law to calculate the inward force, leaving the shield area as an unknown; the second approach, shown in full above, uses logic to obtain G = ¾H²/(π ρ e³).] Hence ‘static’ matter is proved to be composed of light-type energy trapped by its own gravity into a black hole. It has been proved that scattering-type interactions between the waves around different moving electrons in an atom would prevent neat orbits and cause the chaotic orbits described statistically by Schroedinger’s quantum mechanics of the atom, hence the lack of determinism. (Causality is a different matter!)
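As an illustration of the scales involved, the radius 2GM/c² and shielding area π(2GM/c²)² can be evaluated for a concrete particle mass; the choice of the electron is an assumption added here, not specified in the text:

```python
import math

# The text assigns a fundamental particle of mass M the radius r = 2GM/c^2
# (the black-hole event-horizon radius) and a shielding cross-section pi*r^2.
# Evaluated here for the electron mass as an illustration.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M = 9.109e-31        # electron mass, kg

r = 2 * G * M / c**2
area = math.pi * r**2
print(r, area)  # r ~ 1.35e-57 m; area ~ 5.7e-114 m^2
```

The resulting radius is some 35 orders of magnitude below the Planck length, which conveys how small the claimed shielding cross-section per particle is.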

There is little ‘dark matter’ around because the false ‘critical density’ in general relativity is out by a factor of e³/2 ≈ 10.

The falsity of ‘dark energy’ inferred from the ‘acceleration’ of the universe implied by supernova red-shifts: since gravity is a response to the surrounding matter, distant galaxies in the explosion are not slowed down by gravity, so there is no need to posit an acceleration offsetting a fictitious gravitational pull-back. This eliminates the need for ‘dark energy’. The galaxies are simply cruising along in the aftermath of the big bang, but because we see them at progressively earlier times with increasing distance, we see an apparent variation of speed with distance (or rather with time past). Gravity is caused by the surrounding expansion at vast distances from the observer. The universe does have a limit in size so at gr