The test of a model depends on what it can predict, though this is not the only consideration : models which stimulate the mind because they are ‘intuitively clear’ have proved extremely helpful in the development of science even when they were eventually discarded.
Anyone wishing to lay the foundations for a new science on the basis of preliminary assumptions must steer a narrow course between two extremes. On the one hand, he must beware of wrenching unjustifiable conclusions from the premises because he ‘knows exactly where he wants to land up’. On the other hand, there is no point in thrashing around in the dark and hoping for the best : once it is clear that one line of argument is leading nowhere, it must be abandoned. How do we know it is leading nowhere ? Often we don’t, but we can appeal to our own or others’ experience to judge how we are progressing. For example, de Sitter’s model derived from Einstein’s Equations of General Relativity was clearly wrong (or rather not applicable to the case that concerned us) since it predicted a universe completely empty of matter. In other cases, early scientists were eventually proved ‘right’ (though not necessarily for the reasons they believed at the time), for example Huygens’ wave theory of light.

I shall attempt to avoid these two extremes. My sketchy knowledge of advanced physics and current experimentation (the LHC and so on) could actually be an advantage in the sense that I am by no means sure ‘where I want to land up’, so I am less likely to fudge things. As for the second danger, a manifestly absurd conclusion will (hopefully) prompt me to re-examine my original assumptions and add to them, and, if this does not work, simply admit that something has gone wrong. But at this stage in the game it would be unfair to expect, and even foolish to desire, anything but the broadest qualitative predictions : being too specific in one’s forecasts too early can all too easily block off diverging avenues worth exploring.

Before drawing any conclusions, I will briefly review, in an informal manner, the preliminary assumptions on which the whole of Ultimate Event Theory is based. In a nutshell, I consider that “the universe is composed of events rather than things”. Although I have listed some properties of ‘events’ as I see them, at this stage I have to assume that the notion of an ‘event’, or at any rate the distinction between an event and a ‘thing’, is ‘intuitively clear’. Ultimate events associate together to form ‘ordinary’ events but cannot themselves be further decomposed — which is why they are called ‘ultimate’. They occupy ‘spots’ on the ‘Locality’ — the latter being, for the moment, nothing more than a sufficiently large expanse able to accommodate as many ultimate events as we are likely to need. Ultimate events are ‘brief’ : they flash into and out of existence, lasting for the space of a single ‘chronon’, the smallest temporal interval that can exist, in this ‘universe’ at any rate. A definite ‘gap’ exists between successive appearances of ultimate events : physical reality is discontinuous through and through (Note 1). Some ultimate events acquire what I call ‘dominance’ : this enables them to repeat identically, perhaps associate with other stabilized ultimate events and influence event clusters. ‘Objects’ — a category that includes molecules and some elementary particles (but perhaps not quarks) — are viewed as relatively persistent, dense event-clusters. Dominance is not conserved on the grand scale : there will always be some ultimate events that pass out of existence for ever, while there are also ultimate events which come into existence otherwise than by a causal process (random events) (Note 4).

The predictions are as follows:

(1) There will always be an irreducible background ‘flicker’ because of the discontinuity of physical reality. This ‘rate of existence’ varies : essentially it depends on how many positions on the Locality are ‘missed out’ in a particular event-chain. The rate of most event chains is so rapid that it is virtually imperceptible — though, judging by certain passages in the writings of Plato and J-J Rousseau, some people seem to have thought they perceived it. But there should be some ‘extended’ event chains whose flicker can be recognized by the instruments we now have, or will shortly develop (Note 2).
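The ‘rate of existence’ described above can be pictured with a toy sketch (the function names, and the assumption that a chain misses a fixed number of positions between successive appearances, are my own illustrative choices, not part of the theory proper):

```python
def existence_rate(missed_per_appearance):
    """Toy measure of the 'rate of existence' of an event chain:
    a chain that misses m positions between successive appearances
    is present for only 1 of every (m + 1) chronons."""
    return 1 / (missed_per_appearance + 1)

def flicker_pattern(missed_per_appearance, chronons=12):
    """1 = the chain exists at this chronon, 0 = a gap in existence."""
    return [1 if t % (missed_per_appearance + 1) == 0 else 0
            for t in range(chronons)]

# A dense chain exists at every chronon; an 'extended' chain
# exists only intermittently -- the 'flicker' of prediction (1).
print(flicker_pattern(0, 6))  # [1, 1, 1, 1, 1, 1]
print(flicker_pattern(2, 6))  # [1, 0, 0, 1, 0, 0]
```

On this picture, the flicker of a dense chain would be far too rapid to perceive, while a sufficiently ‘extended’ chain might, as the prediction suggests, fall within reach of instrumentation.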

(2) The current search for ‘elementary particles’ will turn up a very large number of heterogeneous ‘traces’ which are too brief and too rare to be dignified with the title of ‘elementary particle’. The reason for this is that the vast majority of ultimate events do not repeat at all : they flash into existence and disappear for ever.

(3) The number of ‘elementary particles’ detected by colliders and other instruments will go on increasing, though some will never be detected again : this is so because ultimate events are perpetually forming themselves into clusters but also ‘breaking up’ into their component parts, in some cases dematerializing completely.

(4) Certain ‘elementary particles’ will pass clean through solid matter without leaving a trace : this will tend to occur whenever the (relative) speed of a very small event cluster is very large and the ‘thickness’ of the lumped cluster is small in the direction of travel (Note 3).

(5) There will always be completely new event-clusters and macroscopic events, so the future of the universe is not completely determinate. This is so because not all ultimate events are brought into existence by previously existing ones : some ultimate events originate not in K1 (roughly what is known as the physical universe) but in K01, the source of all events. If these ‘uncaused events’ persist, i.e. acquire self-dominance, or come to dominate existing event clusters, something completely new will have come into existence — though whether it persists depends on how well it can co-exist with already well-established event-clusters. In brief, there is an irreducible random element built into the universe which stops it being fully determinate.

Notes :

(1) Since putting up this post on January 18th, I have come across what might be confirmation (of a sort) of this prediction. The February 2012 edition of Scientific American includes a mind-blowing article, “Is Space Digital?” by Michael Moyer. “Craig Hogan, director of the Fermilab Center … thinks that if we were to peer down at the tiniest subdivisions of space and time, we would find a universe filled with an intrinsic jitter, the busy hum of static. This hum comes not from particles bouncing in and out of being or other kinds of quantum froth that physicists have argued about in the past. Rather Hogan’s noise would come about if space was not, as we have long assumed, smooth and continuous, a glassy backdrop to the dance of fields and particles. Hogan’s noise arises if space is made of chunks. Blocks. Bits.” This is not just a passing thought, for Hogan “has devised an experiment to explore the buzzing at the universe’s most fundamental scales.” I originally thought that what I call the ‘flicker of existence’ would remain forever beyond the reach of our instrumentation and said as much in the original draft of this post. However, after thinking about the amazing advances made already, I added, perhaps prophetically, “There should be some ‘extended’ event chains whose flicker can be recognized by the instruments we now have, or will shortly develop.” Maybe Hogan’s is one of them.
However, I do not ‘buy’ the current trend of envisaging the universe as a super-computer — for Hogan my ‘flicker of existence’ is ‘digital noise’. The analogy universe/computer strikes me as being too obviously rooted in what is becoming the dominant human activity — computing. I would have thought the ‘universe’ had better things to do than just process information. Like what, for example ? Like bringing something new into existence from out of itself, actualizing what is potential. In a nutshell : the ‘universe’ (not a term I would choose) is creative, not computational. But I suppose one cannot expect trained scientists to see things in this light. S.H. (7/2/12)

(2) Heidegger put it more poetically, “Being is shot through with nothingness”.

(3) This happens because a rapid event cluster ‘misses out’ more event locations on its path, so the chance of the two clusters ‘colliding’, i.e. ‘competing’ for the same spots on the Locality, is drastically reduced.
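As a minimal sketch of this mechanism, suppose a fast event cluster actually occupies only every (skip + 1)-th spot along its path, the rest being ‘missed out’ (the model, parameter names and numbers here are illustrative assumptions of mine, not part of the theory proper):

```python
import random

def simulate_collisions(skip, trials=100_000, path_length=1_000):
    """Toy model of Note 3: a moving event cluster reappears only at
    every (skip+1)-th spot along a 1-D path of discrete locations, so
    the faster it travels, the more spots it misses out. A 'collision'
    means it competes with another cluster for the same spot."""
    # Spots the moving cluster actually occupies on its path.
    occupied = set(range(0, path_length, skip + 1))
    hits = 0
    for _ in range(trials):
        # Spot held by the other (stationary) cluster, placed at random.
        target = random.randrange(path_length)
        if target in occupied:
            hits += 1
    return hits / trials

# The collision chance falls roughly as 1/(skip + 1): the faster the
# cluster, the more often it passes clean through without a trace.
for skip in (0, 1, 9, 99):
    print(skip, simulate_collisions(skip))
```

A slow cluster (skip = 0) occupies every spot on its path and cannot avoid the encounter, whereas a very fast one occupies so few spots that it usually slips through — which is the sense in which prediction (4) above says a fast, thin cluster tends to pass clean through solid matter.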

(4) See prediction (5) above. The irreducible random element discussed there is quite apart from Quantum Indeterminacy, which in any case would disappear if a coherent ‘hidden variables’ theory replaced the orthodox one.

I will discuss in a subsequent post whether modern experiment and observation give any support to these predictions.