Becoming before being (Prigogine)

I am trying to wrap my head around Prigogine's suggestion that thermodynamics could be "more fundamental" than particle and field dynamics.

I am unable to visualize that. I (like everyone, I guess) can only visualize fields driving particles, particles radiating fields, particles colliding, particles emitting/absorbing particles, or mass/energy shaping space-time geometry and being driven by it - the microscopic interactions that we usually consider fundamental and build thermodynamics upon. I can't visualize it the other way around.

So can anybody help me to "see" Prigogine's ideas?

The closest I get is "downward causation" - the idea that the whole of a system can influence its parts in ways that cannot be reduced to the local interactions between the parts. Interestingly, downward causation is found in Aristotle and medieval thinkers, but it then fell out of fashion, surely as a result of the spectacular success of Newton's physics. Do we need more flexible concepts of causation?

"The classical order was particles first, the second law later - being before becoming! It is possible that this is no longer so when we come to the level of elementary particles and that here we must first introduce the second law before being able to define the entities. Does this mean becoming before being? Certainly this would be a radical departure from the classical way of thought. But, after all, an elementary particle, contrary to its name, is not an object that is "given"; we must construct it, and in this construction it is not unlikely that becoming, the participation of the particles in the evolution of the physical world, may play an essential role." (Prigogine, "From Being to Becoming").

"The main thesis of this book, which can be formulated as follows: First, irreversible processes are as real as reversible ones; they do not correspond to supplementary approximations that we of necessity superpose upon time-reversible laws. Second, irreversible processes play a fundamental constructive role in the physical world; they are at the bases of important coherent processes that appear with particular clarity on the biological level. Third, irreversibility is deeply rooted in dynamics. One may say that irreversibility starts where the basic concepts of classical or quantum mechanics (such as trajectories or wave functions) cease to be observables. Irreversibility corresponds not to some supplementary approximation introduced into the laws of dynamics but to an embedding of dynamics within a vaster formalism." (Prigogine, "From Being to Becoming").

"[We] see everywhere in nature change that is irreversible, and organization, and life itself, born out of irreversible processes. It makes one wonder: is mechanics a convenient approximation of natural processes that are fundamentally irreversible and not the converse, as the current dogma holds?" (Kondepudi and Prigogine, "Modern Thermodynamics: From Heat Engines to Dissipative Structures").

Without formulas making the words precise, it is vague philosophy only. But the idea seems to be that fields satisfying an irreversible dynamics (''becoming'') are proposed as the fundamental objects, from which particles (''being'') emerge. His proposals attracted only little follow-up research and should today be regarded as historical blueprints of a vision that did not materialize.

Arnold, vague philosophy can be, and often is, a starting point for more precise physics. Prigogine tried to formulate his ideas more precisely with formulas, but without really hitting his mark (as far as I can judge). As you say, his "becoming before being" program had little follow-up, perhaps because it was too different from the current framework. However, I would add a magic word to your comment: "a vision that did not materialize yet."

What do you think of downward causation, emergence and all that? Can these vague ideas find a place in physics?

Emergence is not downward causation but collective behavior following from the more detailed model. Prigogine's underlying theory would also have to be more microscopic if the traditional point of view is to emerge from it as an approximation. But it is unlikely to happen - Prigogine never indicated how elementary particles could emerge. Instead he based his more concrete ideas on deformed Hilbert spaces allowing for resonances, as in the papers of Bohm and colleagues (which constitute the most conspicuous part of the follow-up work done). But resonances give an incomplete description of a dissipative system, since the decay products are no longer modeled. So I think the formal core of Prigogine's vision cannot be made to work.

@Arnold re "Emergence is not downward causation but collective behavior following from the more detailed model."

This can be called "weak emergence." Some philosophers of science also talk of "strong emergence," where collective behavior does _not_ follow from a detailed microscopic model.

Depending on whether deriving the collective behavior from a detailed microscopic model is difficult in practice or impossible in principle, I guess we could talk of "weak-strong emergence" and "strong-strong emergence."

What I just called strong-strong emergence seems to me part of what Prigogine had in mind. He adds the idea that it's the collective behavior, taken as fundamental, that determines the detailed microscopic behavior (in ways that I am totally unable to see).

Re "in ways that I am totally unable to see" - I think this is a misunderstanding. I had looked at his math, and of course (in the models he talks about) everything is determined by a detailed model. It is just not conservative and has the arrow of time built in. (If you want to ping me you need to say @ArnoldNeumaier.)


1 Answer

One way to "see" Prigogine's ideas, in some sense to give specific content to @ArnoldNeumaier's comment, is to add one extra dimension to a field model, something akin to a renormalization scale $\mu$, say. One could then write equations that relate the field at different scales to each other, such as, say,
$$\frac{\partial^2\phi(x,\mu)}{\partial x^\alpha\partial x_\alpha}+(\mu-\mu_0)^2\phi(x,\mu)+\phi(x,\mu_0)=0.$$
This particular equation makes the scale $\mu_0$ special, in that what happens at the scale $\mu_0$ asymmetrically affects what happens at both larger and smaller scales. Obviously such a model requires interaction strengths to be chosen, and $(\mu-\mu_0)^2$ could be any function of the scaling dimension; dependence on derivatives w.r.t. $\mu$ could also be introduced without affecting Lorentz invariance; some kind of singular behavior at specific scales might be introduced. It would certainly have to be asked how this might be quantized, which for an effectively 5-dimensional field theory would surely be problematic, albeit the symmetry is still Lorentzian 3+1-dimensional.
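As a toy illustration (entirely my own construction; the scale grid, time step, and initial data below are assumptions, not part of the answer), one can drop the spatial dependence and integrate a 0+1-dimensional reduction of the equation above, $\ddot\phi(t,\mu)+(\mu-\mu_0)^2\phi(t,\mu)+\phi(t,\mu_0)=0$. Exciting only the special scale $\mu_0$ then leaks into every other scale, while no other scale acts back, which exhibits the one-way asymmetry of $\mu_0$:

```python
import numpy as np

# Toy 0+1D reduction of the answer's equation (illustrative numbers):
#   d^2 phi(t, mu)/dt^2 + (mu - mu0)^2 phi(t, mu) + phi(t, mu0) = 0
# Each scale mu is an oscillator driven by the field at the special
# scale mu0; there is no other cross-scale coupling, so the influence
# of mu0 on the other scales is strictly one-way.

mus = np.linspace(0.0, 2.0, 21)   # grid of scales mu
mu0_idx = 10                      # mus[10] == 1.0 plays the role of mu0
mu0 = mus[mu0_idx]

dt, steps = 5e-3, 5000            # integrate up to t = 25
phi = np.zeros_like(mus)
phi[mu0_idx] = 1.0                # excite ONLY the special scale
vel = np.zeros_like(mus)

# Semi-implicit (symplectic) Euler: stable for these small frequencies
for _ in range(steps):
    acc = -(mus - mu0) ** 2 * phi - phi[mu0_idx]
    vel += dt * acc
    phi += dt * vel

# Energy placed at mu0 has spread to the extreme scales of the grid
print(abs(phi[0]), abs(phi[-1]))  # both nonzero
```

The special scale obeys simple harmonic motion on its own, while every other scale is driven by it; reversing the roles (exciting any other scale) would leave $\mu_0$ untouched.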

In this type of model, Prigogine's specific idea, that the higher scale DoFs are subject to thermodynamics, would come from the impossibility of measuring the field $\phi(x,\mu)$ at all scales $\mu$ (even supposing that we could measure $\phi(x,\mu_0)$ for all $x$ at some scale $\mu_0$). Thermodynamics would be a consequence of averaging over DoFs at all scales below $\mu_0$, say.

There must surely be other ways to formalize Prigogine's ideas; I would make no claim that this is the only way to do the job, or that such an approach might be empirically better than standard Lorentzian QFT. Adding an extra dimension in this way can only be more general than a 4-dimensional field; however, the effects of the extra dimension could be engineered to be as small as necessary. The introduction of the extra dimension makes the field at every scale independent of the field at every other scale, but the coupling to the field at larger and smaller scales can be arbitrarily weak or strong, with arbitrarily chosen functional behavior (so that the independence can be arbitrarily constrained, giving the appearance that there is no independence between different scales).

Really outlandish behavior could be introduced with this kind of model; Planck's constant and the metric could change very slowly as the scale changes, for example, so that at the scale $\log_{10}(\mu/\mathrm{meter}) = -100 \ll -34$, say, phenomena could be very different than at scales above the Planck length, but making detailed contact with experiment for such a model wouldn't be easy!

EDIT: A Poincaré-invariant free field quantization, giving a Gaussian field, seems not too difficult: we can define the 2-point VEV of the vacuum state, in momentum space, as
$$\langle 0|\tilde\phi(k,\mu)\tilde\phi(k',\mu')|0\rangle=\sum\limits_\alpha(2\pi)^4\delta^4(k-k')2\pi\delta\bigl(k^2-m^2(\alpha)\bigr)\theta(k_0)M_\alpha(\mu,\mu'),$$ where $M_\alpha(\mu,\mu')$ is an $\alpha$-indexed set of positive semi-definite matrices. $(2\pi)^4\delta^4(k-k')$ is required to ensure translation invariance and $2\pi\delta\bigl(k^2-m^2(\alpha)\bigr)\theta(k_0)$ restricts to the forward light-cone, with an $\alpha$-dependent mass, so that $\bigl[\hat\phi(x,\mu),\hat\phi(x',\mu')\bigr]=0$ when $x-x'$ is space-like, for all $\mu,\mu'$, but there can also be non-trivial vacuum state correlations between scales $\mu$ and $\mu'$.

Of course one could say that $\mu$ is just one more dimension, not necessarily "scale", but I suppose there to be interpretations that would take "collective behavior [to follow not only] from the more detailed model" (modeled on the phrasing of @ArnoldNeumaier's third comment on your question), but also from additional information that is encoded in $\hat\phi(x,\mu)$ for different values of $\mu$. I am, of course, waving my arms a little here.

@Peter, I like the idea of scale-dependent physics and often think it could be useful in quantum gravity and things like that.

Thinking aloud and waving my arms a lot: perhaps space could be modeled as a scale-dependent (not scale-invariant) fractal with a scale-dependent fractal dimension (per dimension) $1 + P/L$, where $L$ is the observation scale and $P$ is the Planck length. Or something like that. For scale-dependent fractals see for example:

"In this type of model, Prigogine's specific idea, that the higher scale DoFs are subject to thermodynamics, would come from the impossibility of measuring the field $\phi(x,\mu)$ at all scales μ (even supposing that we could measure $\phi(x,\mu_0)$ for all x at some scale $\mu_0$). Thermodynamics would be a consequence of averaging over DoFs at all scales below $\mu_0$, say."

How can the behavior of the average influence the behavior of what is averaged?

Could you clarify?

Note to self: I must learn how to type maths here; I used to be familiar with TeX, but that was long ago. [I've TeXed your comment; if you edit it, you'll be able to see that TeX works quite straightforwardly on physicsoverflow.] [Thanks!]

@GiulioPrisco Your Question set me off down a road I have thought about in the past but haven't been able to make as precise as I'd like. I've been thinking overnight about whether the models I've put out here can be made more precise; I think a useful question is to ask what renormalization scale "means". For a renormalizable theory we take $\lim_{\mu\rightarrow\infty}$, in a relatively sophisticated way, more-or-less eliminating the renormalization scale; for a non-renormalizable theory we cannot, so the renormalization scale is significant. Perhaps the difference is that the ultraviolet DoFs of a non-renormalizable theory are essentially not controllable, indeed that something like a fractal structure emerges that cannot be modeled by a quantum field, in which case we have to consider different theories for different renormalization scales, including the contributions of progressively larger mass fields as the renormalization scale decreases. That would seem to make $\hat\phi(x,\mu)$ for different $\mu$ correspond to different partial encodings of a "fractal" structure that is not encodable by a quantum field $\hat\phi(x)$, in which case we could consider a dynamics in which we take $\hat\phi(x,\mu)$ for different $\mu$ to interact with each other, as a different kind of approximate dynamics. Of course one could try higher-dimensional encodings $\hat\phi(x,\mu,\nu, ...)$.

To answer your comment's question specifically, "How can the behavior of the average influence the behavior of what is averaged?", I suppose my account in this comment suggests that $\hat\phi(x,\mu)$ is not necessarily as determined as an average over $\hat\phi(x,\mu')$ for $\mu>\mu'$; if it was, we could just use $\hat\phi(x)$, but for a non-renormalizable theory we can't. I suppose I would take $\hat\phi(x,\mu)$ for large $\mu$ to interact with $\hat\phi(x,\mu')$ for small $\mu'$, rather than "influence" it. As far as thermodynamics is concerned, I take it we use thermodynamics whenever we have incomplete information about a dynamical system, either about the dynamics or about the initial conditions, and more-or-less random fluctuations contribute to the effective dynamics of the information we do have.

Although I use an analogy to random fractals, more than I otherwise might because you are using that language, I think we're working with a mathematics that stands on its own and has to be interpreted on its own terms. I prefer to think of a quantum field as a stochastic signal-processing formalism, whereas fractals are a largely deterministic structure.
