Identifying causal networks is important for effective policy and management recommendations on climate, epidemiology, financial regulation, and much else. We introduce a method, based on nonlinear state space reconstruction, that can distinguish causality from correlation. It extends to nonseparable weakly connected dynamic systems (cases not covered by the current Granger causality paradigm). The approach is illustrated both by simple models (where, in contrast to the real world, we know the underlying equations/relations and so can check the validity of our method) and by application to real ecological systems, including the controversial sardine-anchovy-temperature problem.

Until recently, scientists had a limited toolbox for detecting causation. Often they would correlate two variables and take the correlation as evidence of causation. Yet as long ago as 1710, Irish philosopher Bishop Berkeley remarked “correlation does not imply causation.”

George Sugihara of Scripps Institution of Oceanography at UC San Diego and colleagues from around the world have developed a new approach to help ecologists distinguish true causal interactions from misleading correlations.

Published in the most recent online issue of the journal Science, the method described in the paper, “Detecting Causality in Complex Ecosystems,” extracts the “signature” left by causes embedded in ecological observations—historical records known as time series. The new mathematical approach deduces causes from their effects.

“The major novelty of this method is that it is based on a dynamic and interconnected view of nature,” said Sugihara, the McQuown Chair Distinguished Professor of Natural Science at Scripps. “Ice cream sales and rates of violent crime might rise and fall at the same time, but our method is able to determine whether this is due to cause-and-effect, or whether both are simply more common during hot summer months.”
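Sugihara’s ice-cream example can be mimicked with a toy simulation (all numbers here are made up for illustration; the point is that a shared driver, a seasonal cycle, produces a strong correlation between two variables that do not influence each other at all):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: a common driver (seasonal temperature)
# produces correlation between two variables with no causal link.
months = np.arange(240)
temperature = 10 * np.sin(2 * np.pi * months / 12)           # shared seasonal driver
ice_cream = 50 + 2.0 * temperature + rng.normal(0, 3, 240)   # driven by temperature
crime = 30 + 1.5 * temperature + rng.normal(0, 3, 240)       # also driven by temperature

r = np.corrcoef(ice_cream, crime)[0, 1]
print(f"correlation(ice cream, crime) = {r:.2f}")  # strongly positive, yet no causal link
```

A naive correlation test here would report a near-perfect relationship; distinguishing this situation from genuine forcing is exactly what the new method is claimed to do.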

The new tool is distinct from methods developed by UC San Diego economists Clive Granger and Robert Engle for financial and economic data, which earned them the Nobel Prize in Economic Sciences. Granger’s technique is aimed at purely random systems rather than those having rules governing how the parts move. Sugihara and his colleagues developed their tool specifically for complex ecosystem analysis, yet its applications could have far-ranging implications across multiple areas of science. For example, “one could imagine using it with epidemiological data to see if different diseases interact with each other or have environmental causes,” said Sugihara.

Excerpts from the paper’s Summary:

Apparent relationships among variables can switch spontaneously in nonlinear systems as a result of mirage correlations or a threshold change in regime, and correlation can lead to incorrect and contradictory hypotheses. Growing recognition of the prevalence and importance of nonlinear behavior calls for a better criterion for evaluating causation where experimental manipulation is not possible.

In resource management, as elsewhere, accurate knowledge of the causal network can be essential for avoiding unforeseen consequences of regulatory actions.

JC comment: About 15 years ago, I was very intrigued by possible applications of Granger causality to the climate feedback problem. My naive attempts at applying GC to climate time series were inconclusive; it is now clearer to me why this didn’t work. This new method looks very exciting to me, particularly since it has demonstrated application to climate related problems. I would appreciate hearing from those of you who are knowledgeable about statistics and data analysis as to your take on this method and how we might apply this more broadly to climate impact and attribution analyses.

Back to sardines

Back to the original press release on sardines. The press release refers to the following paper:

Abstract. For many marine species and habitats, climate change and overfishing present a double threat. To manage marine resources effectively, it is necessary to adapt management to changes in the physical environment. Simple relationships between environmental conditions and fish abundance have long been used in both fisheries and fishery management. In many cases, however, physical, biological, and human variables feed back on each other. For these systems, associations between variables can change as the system evolves in time. This can obscure relationships between population dynamics and environmental variability, undermining our ability to forecast changes in populations tied to physical processes. Here we present a methodology for identifying physical forcing variables based on nonlinear forecasting and show how the method provides a predictive understanding of the influence of physical forcing on Pacific sardine.

Published online before print March 27, 2013, doi:10.1073/pnas.1215506110, PNAS [link to full manuscript]

Excerpts from the press release:

In the early 1940s, California fishermen hauled in a historic bounty of sardine at a time that set the backdrop for John Steinbeck’s “Cannery Row” novel. But by the end of the decade the nets came up empty and the fishery collapsed. Where did they all go? According to a new study led by scientists at Scripps Institution of Oceanography at UC San Diego, the forces behind the sardine mystery are a dynamic and interconnected moving target.

What is the impact of climate on sardines? What is the effect of overfishing on sardines? Focusing on single variables in isolation can lead to misguided conclusions, the researchers say.

“Studying ecosystems in this piecemeal way makes it hard to find quantitative relationships, the kind that are useful for management and stand the test of time,” said Deyle.

Instead, using novel mathematical methods developed last year at Scripps, the researchers argue that climate, human actions, and ecosystem fluctuations combine to influence sardine and other species populations, and therefore such factors should not be evaluated independently.

For example, based on data from the Scripps Institution of Oceanography Pier, studies in the 1990s showed that higher temperatures are beneficial for sardine production. By 2010 new studies proved that the temperature correlation was instead a misleading, or “mirage,” determination.

“Mirages are associations among variables that spontaneously come and go or even switch sign, positive or negative,” said Sugihara. “Ecosystems are particularly perverse on this issue. The problem is that this kind of system is prone to producing mirages and conceptual sand traps, continually causing us to rethink relationships we thought we understood.”

By contrast, convergent cross mapping avoids the mirage issue by seeking evidence from dynamic linkages between factors, rather than one-to-one statistical correlations.
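A minimal sketch of the cross-mapping idea may help (this is not the authors’ code; the coupled logistic maps follow the style of the example system in the Science paper, with the Y→X coupling set to zero so that causation runs only X→Y). If X forces Y, then Y’s reconstructed shadow manifold encodes information about X, so nearest-neighbour prediction of X from Y’s manifold is skillful; the reverse cross map is not:

```python
import numpy as np

def shadow_manifold(x, E, tau):
    """Lagged-coordinate vectors <x(t), x(t-tau), ..., x(t-(E-1)tau)>."""
    start = (E - 1) * tau
    return np.array([[x[t - j * tau] for j in range(E)] for t in range(start, len(x))])

def cross_map_skill(source, target, E=2, tau=1):
    """CCM sketch: predict `source` from the shadow manifold of `target`.

    High skill means `target`'s history encodes information about `source`,
    i.e. evidence that `source` causally forces `target`.
    """
    M = shadow_manifold(target, E, tau)
    src = source[(E - 1) * tau:]
    preds = np.empty(len(M))
    for i in range(len(M)):
        d = np.linalg.norm(M - M[i], axis=1)
        d[i] = np.inf                        # exclude the point itself
        nn = np.argsort(d)[:E + 1]           # E+1 nearest neighbours
        w = np.exp(-d[nn] / (d[nn][0] + 1e-12))
        preds[i] = np.dot(w, src[nn]) / w.sum()
    return np.corrcoef(preds, src)[0, 1]

# Coupled logistic maps; the Y->X coupling is removed so causation is X -> Y only.
n = 1000
X = np.empty(n); Y = np.empty(n)
X[0], Y[0] = 0.4, 0.2
for t in range(n - 1):
    X[t + 1] = X[t] * (3.8 - 3.8 * X[t])               # X evolves on its own
    Y[t + 1] = Y[t] * (3.5 - 3.5 * Y[t] - 0.1 * X[t])  # X forces Y
X, Y = X[100:], Y[100:]                                # drop the transient

rho_X_from_MY = cross_map_skill(X, Y)  # should be high: Y's history encodes X
rho_Y_from_MX = cross_map_skill(Y, X)  # should be low: X knows nothing of Y
print(rho_X_from_MY, rho_Y_from_MX)
```

Note the asymmetry: it is the *affected* variable whose manifold recovers the *causing* variable, which is the opposite of what naive intuition suggests, and is the “signature” the press release alludes to.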

“Sustainable sardine fishing based on ecosystem-based management should adapt to dynamic changes in the ocean environment, and future policies should incorporate these effects to avoid another ‘cannery row,'” said Deyle.

From the Discussion:

Scenario exploration with multivariate embeddings motivates the development of new adaptive management schemes based on short-term forecasting. If time series of a human control variable (e.g., fishing effort or mortality) are available, management could be based on scenario exploration with multivariate embeddings that include abundance, temperature, and the control variable. Simultaneously exploring different temperature and harvesting scenarios could then reveal how temperature affects the relationship between fishing and future biomass. This type of scenario exploration would tell managers the permissible level of fishing given a target biomass and recent ocean conditions.

More immediately, these methods offer a data-motivated perspective on the dynamic relationship between sardine and environmental conditions in the CCE. We find that including either SIO SST or the PDO in the multivariate embedding improves forecast skill (ρ) by ∼30%. These indicators reflect environmental forcing of sardine population dynamics that is not captured by more traditional methods. We conclude that environmental considerations should in fact play an important role in Pacific sardine management.

Our results further suggest that good management must reflect the complexity underlying the interaction between environmental changes and population dynamics. . . This corroborates the previous results used in the management plan that warm water is better for sardine. However, our empirical analysis shows that the effect of temperature depends on the specific state of the population. For example, changes in temperature seem to have little or opposite effect in the early years of the time series, when the population is at a much lower abundance, and in later years with very large abundance (e.g., 1999–2001). This suggests any temperature-sensitive control rule for sardine should be different at low, intermediate, and high sardine abundances.

With regards to natural modes of climate variability:

Several other variables show positive ΔF, including the Pacific Decadal Oscillation (PDO), North Pacific Gyre Oscillation (NPGO), and the Southern California Bight (SCB) satellite SST, suggesting these are also relevant to Pacific sardine population dynamics. Of these, only the PDO is significant (P < 0.05). The PDO and SIO SST are highly correlated, so this is not surprising. The method suggests three variables that are least likely important to sardine dynamics: Newport Pier SST, North Pacific Index (NPI), and Southern Oscillation Index (SOI); each of these has a strong negative effect on forecasting.

American Fisheries Society

The American Fisheries Society is a professional society whose mission is to improve the conservation and sustainability of fishery resources and aquatic ecosystems by advancing fisheries and aquatic science and promoting the development of fisheries professionals. They publish Transactions of the American Fisheries Society and the North American Journal of Fisheries Management.

The report examines how scientists and nonscientists perceive science, what factors affect the quality and use of science, and how changing technology influences the availability of science. Because the issues surrounding the definition of best available science surface when managers and policymakers interpret and use science, this report also will consider the interface between science and policy and explore what scientists, policymakers, and managers should consider when implementing science through decision making.

This paper will

provide a practical description of the concept of best available science;

identify the limits to creating, distinguishing, and using the best available science; and

suggest ways to improve the use of science in policy and management.

To accomplish these objectives, we break the concept of best available science down into the cumulative components of science, best science, and best available science. Throughout the discussion, we highlight the factors that influence best available science, including (a) the changing nature of science, (b) the increasing role of uncertainty in scientific decision making, (c) the influence that the values and ethics of scientists have on the scientific process, (d) the changing availability of information via peer-reviewed journals, gray literature, expert opinion, and anecdotal evidence, and (e) the bridges that need to be forged and maintained between science, policy, and management.

This is an interesting essay and I encourage you to read it; it arguably deserves its own thread.

that the Society’s external advocacy will be ethically and professionally sound;

that advocacy will not degrade the Society’s reputation as the most reliable source of scientific information on aquatic resources;

that fisheries-related scientific information will be used appropriately when members address aquatic resource issues; and

that AFS advocacy positions will be widely supported within the Society because they will be technically correct, respectful of alternative views, and consistent with AFS policies and the Code of Practices. Members and subunits planning to influence an issue external to the Society and to invoke the credibility of the AFS or its members shall adhere to the Society’s policy.

The AFS has a Policy Statement on Climate Change that relates to fisheries; it is fairly extensive (43 pages). It does make the statement: “In the interest of sustaining marine fisheries and habitats, the American Fisheries Society encourages immediate reductions in greenhouse gas emissions and implementation of adaptation policies described above for fisheries communities and habitats.” And: “The uncertainty of impacts of climate change on communities and habitats necessitates that work should be carried out within an adaptive management framework where evaluation of policies and management are strong components.”

A recent letter to President Obama sent by the AFS on Jan 13 includes 9 recommendations that it states are derived from their policy analyses, the first one stating: “Do not delay emissions reductions. Encourage reductions in anthropogenic sources of carbon dioxide and other greenhouse gases.”

Further, the AFS has posted a survey for its members about Climate Change and Freshwater Fisheries, that includes an implicit assumption that anthropogenic climate change is generally a threat to fisheries.

JC questions: two points for discussion.

1) Particularly in light of the Scripps research, do you think that the AFS has a strong case that anthropogenic global warming is having an overall adverse impact on fisheries?

2) In the letter to President Obama, do you think the strongly prescriptive recommendation about limiting greenhouse gas emissions from the AFS (a group that does not have much expertise on the attribution of climate change or energy policy) helps or hinders their other recommendations, which are arguably closer to the purview of AFS expertise?

JC comments

I realize that this is a lot of material for a single post, but the Scripps press releases and the AFS letter landed in my in box at about the same time, and my brain rapidly connected them together.

I think that the two papers from Scripps provide a superb example of use-inspired research related to fisheries whereby a new causal inference model is developed that potentially has much broader applications.

The sardine paper provides a new paradigm for climate impact assessments for complex systems influenced by both physical and human actions. Lead authors of IPCC WGII, please take note. Apart from the new causal inference model, the method includes scenario generation strategies, integration with adaptive management approaches, and consideration of modes of natural climate variability.

Back to the AFS. In light of the Scripps papers, some rethinking of their premises and conclusions regarding the impact of anthropogenic (emissions induced) climate change on fisheries seems to be in order. But this raises a broader issue. Except for the area of climate change, the AFS seems to be quite careful and thorough in its policy assessments. Why has the AFS, and other renascent subfields related to climate change impacts, so wholeheartedly and unquestioningly adopted the assumption that anthropogenic climate change is having, or will have, adverse impacts, often showing more confidence than the IPCC in terms of attribution of the adverse impacts? A corollary assumption is that reducing greenhouse gas emissions will act to eliminate or reduce the adverse impacts. The combination of flawed causal reasoning (mirages), combined with the very substantial uncertainties in climate change and climate impact attribution, makes this a very risky management strategy.

In trying to understand the AFS policy recommendation on greenhouse gas emissions, it seems that the AFS strategy is to rely on the ‘best available science’ on climate change. The IPCC is arguably the best available assessment of climate science for purposes of decision making (at least the WG I Report; the AR4 WG II Report is probably not). I make this statement in spite of the many criticisms that I have made about the IPCC process and many of their arguments. The bigger issue is an assessment of the confidence we should place in the ‘best available science’ for purposes of decision making, in view of uncertainties, ambiguities, disagreement, and acknowledged areas of ignorance; it is in this regard that I think the IPCC falls quite short. The complexities and uncertainties in climate change attribution and future projections are magnified when the focus is regional impact assessments.

To clarify, we have two competing views on how science supports decision making:

‘Best available science’ is used to drive an optimal decision making model

Decision making under deep uncertainty focuses on a broad range of possible future scenarios

With regards to the climate change problem, I have made it very clear in previous posts (e.g. see this previous post) why I regard the climate change issue as being characterized by ‘deep uncertainty’ for decision making purposes.

The AFS emphasis on adaptive management strategies for fisheries is a sensible one, but this seems inconsistent with their emphatic “Do not delay emissions reductions.” U.S. emissions are now at 1994 levels owing to the natural gas boom, something that happened pretty much independently of any federal policy on emissions reductions. The scenario approach mentioned by the sardine paper, which has been discussed many times at Climate Etc., would produce much more robust fisheries management outcomes, particularly at regional/local levels and on decadal time scales.

Moderation note: this is a technical thread and comments will be moderated for relevance.

JC asked “1) Particularly in light of the Scripps research, do you think that the AFS has a strong case that anthropogenic global warming is having an overall adverse impact on fisheries?”

As for climate, Judith, until we get a handle on natural variability of ocean currents and other aspects affecting krill numbers and the extent to which other human activity (like over fishing) is causing adverse impacts, this question seems moot.

One of your most provocative posts especially this question you pose ” Why has the AFS, and other renascent subfields related to climate change impacts, so wholeheartedly and unquestioningly adopted the assumption that anthropogenic climate change is having, or will have, adverse impacts, often showing more confidence than the IPCC in terms of attribution of the adverse impacts? ”
The world waits …..and wonders.

We may have to rethink the concept of causality in the context of nonlinear dynamics. Concepts are theory laden so they embody assumptions about the world which may be found to be false. The butterfly effect says a system may be deterministic yet intrinsically unpredictable. I suspect the concept of causality does not handle this distinction. In short large changes due to the butterfly effect literally have no knowable cause. Looking for a cause is then fruitless. Climate changes may be a case of this confusion.

Nonlinear dynamics is not about periodic cycles, rather it is about aperiodic oscillations. That is actually the mathematical definition of chaos: a stable aperiodic oscillation.

But the oscillations are intrinsically unpredictable due to the butterfly effect, which is a mathematical property. Here is the point: explanation is usually retroactive prediction, so if an event is intrinsically unpredictable it is also intrinsically unexplainable. Consider, for example, why a specific low becomes a hurricane when others do not.

Climate changes may well fall into this category. This hypothesis may explain the failure to resolve the attribution issue after decades of research. But science has made little progress in accepting the implications of intrinsic unpredictability. We are stuck on the old concept of every change having an identifiable cause. That need not be true but I know of no one seeking to identify the unexplainable in climate change.
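The butterfly-effect claim is easy to demonstrate numerically. A minimal sketch using the logistic map (a standard chaotic toy system, not a climate model): two trajectories governed by identical deterministic rules, started a hair apart, soon behave as if unrelated.

```python
# Two logistic-map trajectories that start 1e-10 apart: deterministic rules,
# yet the tiny initial difference is amplified until the trajectories decorrelate.
r = 3.9
x, y = 0.4, 0.4 + 1e-10
separation = []
for t in range(80):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    separation.append(abs(x - y))

print(f"initial gap 1e-10, largest later gap {max(separation):.3f}")
```

After a few dozen steps the gap is of order one: no amount of measurement precision short of infinite would let us predict, or retroactively explain, the specific path taken.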

kim, I don’t think you’ll find a past example of as much as 2C global warming in mere centuries within available climate records. The difference between the LIA and MWP was likely less than 1C, and that difference took place over 500 years.

“studies in the 1990s showed that higher temperatures are beneficial for sardine production. By 2010 new studies proved that the temperature correlation was instead a misleading, or “mirage,” determination.”
_____

Add sardine production to the many things global warming won’t be good for.

Judith, you write “‘Best available science’ is used to drive an optimal decision making model .”

I have fought this issue for most of my career doing Operations Research. The question a lot of people refuse to address is “Is the best good enough?” Just because we have the best science available does not mean that this is good enough to solve the problem at hand. Too often in my career have I heard people state, WWTE, this is the best available, so we can use it to make a sensible decision. This always was, still is, and always will be, wrong.

The question that matters is “Is the best science available good enough to solve the problem at hand?” If the answer to that question is no, then the problem cannot be solved. That is the conclusion that the original proposers of CAGW should have come to 30 or so years ago.

Jim Cripwell: The question a lot of people refuse to address is “Is the best good enough?”

I second the idea. The best must be assumed to be inadequate until it has been shown to be adequate. Historical studies show that most of the times, even the best scientists have been wrong until their works have been published, examined and corrected.

From the UAH series, which starts in 1979, I have identified two pauses in average global temperature: the first covering a 17-year period (1979-95) and the second covering a 12-year period (2002-2013). As can be seen in the linked graph, the OLS line for each of these periods is flat.

Both pauses combined account for 29 years of the 35-year UAH global temperature record. Yet if we look at the entire record, we see a pronounced warming trend. The rise in temperature resulted largely from a surge in 1998, and didn’t retreat afterwards.

What caused the 1979-95 pause in temperature?

What caused the 2002-2013 pause in temperature?

What caused the 1998 surge in temperature? If it resulted from something cyclical, why hasn’t temperature returned to the 1997 level?

Will the current pause last as long as the 1979-95 pause, which would extend it out to 2018?
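Two flat segments plus a rising full-record trend can coexist whenever a step change separates the segments, and a quick synthetic check illustrates this (illustrative made-up numbers, not UAH data):

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1979, 2014)
# Hypothetical step-change series: two flat regimes separated by a 1998 shift.
temps = np.where(years < 1998, 0.0, 0.3) + rng.normal(0, 0.03, len(years))

def ols_slope(x, y):
    """Ordinary least-squares slope of y against x."""
    return np.polyfit(x, y, 1)[0]

s1 = ols_slope(years[years <= 1995], temps[years <= 1995])     # first "pause"
s2 = ols_slope(years[years >= 2002], temps[years >= 2002])     # second "pause"
s_all = ols_slope(years, temps)                                # full record

print(f"1979-95: {s1:+.4f}, 2002-13: {s2:+.4f}, full record: {s_all:+.4f} (units/yr)")
```

Both segment slopes come out near zero while the full-record slope is clearly positive, which is exactly the pattern the commenter describes: the step around 1998 carries the whole trend.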

I have finished an amazing amount of work over the weekend – phew – the pressure’s off, but my nerves are jangling from my drug of choice. Far too much coffee.

The 1982 dip for a couple of years was El Chichon and in 1991 it was Mt Pinatubo. So simply stringing a linear trend without a theory is very misleading. It is why I call it wood for dimwits.

The 2002 to 2013 pause is caused by a change in cloud in the 1998/2001 climate shift. It is the result of spontaneously emergent properties in the climate system. This is how complex and dynamic systems work in theoretical physics.

The 1998/2000 El Nino/La Nina combination was a dragon-king. There are four dragon-kings and they live under the sea in coral castles and are guarded by shrimp soldiers and crab generals.

‘We develop the concept of “dragon-kings” corresponding to meaningful outliers, which are found to coexist with power laws in the distributions of event sizes under a broad range of conditions in a large variety of systems. These dragon-kings reveal the existence of mechanisms of self-organization that are not apparent otherwise from the distribution of their smaller siblings… We emphasize the importance of understanding dragon-kings as being often associated with a neighborhood of what can be called equivalently a phase transition, a bifurcation, a catastrophe (in the sense of Rene Thom), or a tipping point. The presence of a phase transition is crucial to learn how to diagnose in advance the symptoms associated with a coming dragon-king.’ http://arxiv.org/abs/0907.4290

The energy explanation is very simple.

d(W&H)/dt = power in − power out

Where W&H is work and heat, and power is in joules per second (watts).

The difference between power in and power out determines the rate of warming or cooling. Power in changes very little. Unfortunately both CERES and ARGO missed the important 1998/2001 transition – but both show moderate warming in the last decade. It was all in the short wave, showing that clouds again dominate the energy balance. The slow and fast changes in this balance determine what the temperature of the planet is.
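The budget relation d(W&H)/dt = power in − power out can be turned into a back-of-envelope number. A toy calculation under loudly assumed values (a 0.5 W/m² net imbalance, of the magnitude quoted above, absorbed entirely by a 700 m ocean layer):

```python
# Toy back-of-envelope for d(W&H)/dt = power in - power out.
# All inputs are assumptions for illustration, not observed values.
seconds_per_year = 3.156e7
imbalance = 0.5          # W/m^2, assumed net (in minus out) flux
depth = 700.0            # m, assumed ocean layer absorbing the heat
rho_cw = 4.1e6           # J/(m^3 K), volumetric heat capacity of seawater

# Rate of temperature change of the layer implied by the imbalance
dT_dt = imbalance / (rho_cw * depth) * seconds_per_year  # K per year
print(f"implied layer warming: {dT_dt:.4f} K/yr")
```

The result is a few thousandths of a kelvin per year, which is why a sustained small flux imbalance matters on decadal scales yet is hard to detect against surface noise.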

Climate shifts are related to fisheries in that the cool PDO and La Nina both involve upwelling of vast amounts of very cold and nutrient rich bottom water in the eastern Pacific resulting in booms and bust of fisheries, coastal bird life and marine mammals.

One alternates over 20 to 40 years and the other alternates over a few years – but with shifting intensity and frequency.

‘This collapse wasn’t just because of fishing pressure – scientists now recognize that there was also a change in oceanic cycles, which resulted in an extended period of below-normal water temperatures. Sardines are generally more abundant during a warm water regime, so the colder water greatly influenced the decline in sardine abundance. The demise of the Pacific sardine fishery has become a textbook example of the boom-and-bust cycles characteristic of small pelagic fish and fisheries. By the late 1980s, the sardine stock began to recover as water temperatures increased and harvests had been limited. Sardine fisheries were slowly reestablished. Today, this species and fishery are thriving once again under active, science-based management, and conservative catch quotas.’ NOAA fishwatch

But anchovies in cold, nutrient-rich water are the real boom species – Americans should learn to love anchovies. Salmon have returned to North American streams in abundances not seen for decades. Humpback whales have been chasing anchovies in Monterey Bay and the seal pups are fat. The boom times have returned with the nutrients upwelling from the abyssal depths of the ocean.

The cool and richly abundant modes last 20 to 40 years – from 2002 – in the proxy records.

We have heat in the form of photons hitting the Pacific around the equator; this evaporates water, makes dense, warm brines, and makes lots of organic carbon from inorganic carbon.
Further North, we have less heat coming from the sun, but we have air that was at the equator a few weeks ago, we have sensible heat from the precipitation of equatorial water vapor, and warm currents that were warmed eight years previously at the equator.
The equator is cooler than one would think from the incoming light flux and the North warmer.

Now at steady state incoming and outgoing fluxes are, over time, equal. However, no one is measuring incoming and outgoing fluxes with the fidelity needed to know if the fluxes balance.
We do know that there is a huge amount of heat in the system, which was spat out as a temperature rise between 1997 and 2000.

Yes I do Doc – it is the simplest and most useful idea in climate science. Isn’t that always the way.

We are interested in changes – which is why it was expressed as a differential.

How do we know the planet is warming? We have measurements. We have the satellite tropospheric record. We have ARGO. While the surface temperature has been a little nondescript, the oceans seem to be warming in ARGO, with a warming trend of 0.55 +/- 0.1 W/m^2.

Where can that warming come from? Not from the sun – we know that changes very little. Remembering that it is the change over a period we are interested in and not the absolute value.

CERES is said to be stable to 0.5 W/m^2 over a decade. It means that the instrument drift is less than that – which is all that matters in terms of change. The problem with absolutes is the problem of calibration – we have nothing good to compare it to.

But the trend is your friend. We can close out the energy budget and have a better idea of what is causing what.

Chief. Solar output is quoted as being stable at +/- 0.1% in terms of total power. However, the devil is in the details, as the solar spectrum is far more variable, with UV and IR output changing by +/- 6%.
Now the penetrance of photons at 240 nm and 960 nm is quite different. Change the output spectrum and different bits of the planet get heated, and different amounts of the solar spectrum get reflected off into space.
Back-of-the-envelope stuff is great, but when you are digging in the dirt, looking at signals that are the same size as the noise, or missing rhythms known to exist, then it is time to do something else.

Solar UV is a different matter – suspected of causing top-down climate modulation via interactions of solar UV and ozone in the stratosphere. This, I believe, drives the major variability in ocean and atmospheric indices.

Background definitions for causation in dynamic systems

Here we present a formal exposition of the criterion for causality presented in this work. The ideas are built from the theory of time-delay embedding, first noted by Crutchfield (35), later proven by Takens (19), then subsequently generalized (20, 36). Consider a dynamic process φ defining the temporal evolution of points in an E-dimensional state space, trajectories of which converge to some d-dimensional (d ≤ E) manifold M such that φ(M) ⊆ M. That is, if m(t) is a point on M, then m(t+1) = φ(m(t)).

Let X be an observation function of φ. X is commonly thought of as a system variable, but more generally is a function (e.g., a rotation or linear combination of the original E coordinates) that maps points in M to a real-valued scalar (i.e., X: M → ℝ). For each X there is a corresponding time series, {X} = {X(1), …, X(L)}, that tracks the trajectory of points in M mapped to a sequence of real numbers; that is, X(t+1) = X(φ(m(t))). The length of the time segment (library size) is L.

A lagged-coordinate embedding uses E time-lagged values of {X} as coordinate axes or dimensions to reconstruct a shadow attractor manifold M_X (3, 8, 19, 20, 24, 35, 37–40). The points in this manifold, denoted by x(t), consist of the set of E-dimensional vectors

x(t) = ⟨X(t), X(t − τ), …, X(t − (E − 1)τ)⟩,

where the time lag τ is positive. Generically, points x(t) on M_X map 1:1 to points m(t) on M (19, 20), so that M_X is a diffeomorphic reconstruction of the original attractor manifold M (19, 20). We note that there are special cases (discussed below) where the mapping between manifolds is asymmetrical and not 1:1. The validity of an embedding can be confirmed if the manifold M_X constructed from historical time series values (a subset of {X}) can skillfully forecast … .
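The lagged-coordinate construction described in the excerpt is mechanical and easy to sketch (a minimal illustration, not the authors’ code; the chaotic series is a logistic map chosen only as a convenient example):

```python
import numpy as np

def lagged_embedding(x, E, tau):
    """Rows are the delay vectors <x(t), x(t-tau), ..., x(t-(E-1)*tau)>."""
    start = (E - 1) * tau
    return np.array([[x[t - j * tau] for j in range(E)] for t in range(start, len(x))])

# Example: a chaotic logistic-map series embedded with E=3, tau=1
x = np.empty(500)
x[0] = 0.2
for t in range(499):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])

MX = lagged_embedding(x, E=3, tau=1)
print(MX.shape)  # one E-dimensional point per usable time index
```

Each row of MX is one point x(t) on the shadow manifold M_X; nearest-neighbour forecasting on these points is the “skillful forecast” test the excerpt ends with.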

I have truncated much, but the last line excerpted gives the game away. They have yet another mathematical definition of “causality”, in which the “true” model is a proper subset of a model that makes adequately accurate forecasts. This definition does not allow a conclusion that a proposed intervention will actually produce a desired effect. In his book Causality, Judea Pearl addresses this limitation of causal modeling by saying that “causal” modeling identifies variables that might be important for manipulation (might be the basis of useful interventions) and provides reasonable estimates of how much of an effect a proposed intervention is likely to have.

The authors establish a useful result: that Granger causality is incompatible with Takens’ theorem. Put differently, a sufficiently complex nonlinear system cannot be modeled by Granger causality. I think a bunch of us suspected that, and a demonstration is useful.

Taken altogether (not “Takens” altogether), this is an important paper, imho, that will take a while to absorb.

It appears from your presentation that you disliked the Forest et al. procedure because it produced a posterior distribution that was discordant with your personal prior, and you liked your procedure because the formal prior you used produced a posterior distribution that was closer to your personal prior. As an exercise in showing how posterior distributions depend on prior distributions that’s nice; but you have provided no substance for claiming that your credible interval is better than the credible interval that you critiqued.

Interesting comment from you over at WUWT on the Nic Lewis thread. It’s nice to see someone with an honest-to-god skeptical streak.

In a book titled “A Comparison of Bayesian and Frequentist Methods of Point Estimation”, Prof. Francisco J. Samaniego argues that in order to improve upon maximum likelihood estimation the Bayesian’s prior must be at least sufficiently accurate (i.e., with the median close to the true value). Non-informative priors never do that. The best that you can achieve with non-informative priors is to have larger intervals with the same posterior mean as the MLE, and that does not have a high probability of occurring.

All Nic Lewis did was show that different prior distributions produce different posterior distributions.

As I wrote in another thread, Nic used the same data to determine the prior and to apply it as prior. That’s effectively squaring the data.

His approach is to argue that little weight should be given to the region where the data cannot differentiate between different parameter values. Why should we think that the power of the data in differentiating parameter values gives an appropriate prior for the parameter values? Sometimes it’s true that this situation arises when the range of parameter values is inflated, but there’s absolutely no general reason to trust such an assumption, as Nic does in his method.

1) Particularly in light of the Scripps research, do you think that the AFS has a strong case that anthropogenic global warming is having an overall adverse impact on fisheries?

No. It has no evidence of that at all, and the causal chain is quite long. First, you have to show that the warming is anthropogenic. Then you have to show how it affects the ocean.

Then you have to show that that effect is negative.

Not one of these has been done.

2) In the letter to President Obama, do you think the strongly prescriptive recommendation about limiting greenhouse gas emissions from the AFS (a group that does not have much expertise on the attribution of climate change or energy policy) helps or hinders their other recommendations, which are arguably closer to the purview of AFS expertise?

It would hinder their other recommendations if they were not terminally ludicrous already. They don’t have a clue what they’re talking about, because even if there were a problem, REDUCING EMISSIONS KILLS THE POOR. For them to discuss reduced emissions without mentioning that makes them … well, it makes them about as clueless as most climate scientists.

Overall? I’d rate their statement somewhere between “baboons on typewriters” and “cosmic joke”. They should stick to their fishing. There’s enough idiots in climate science already, no need to add them.

Can we please get back to science, and away from some fishermen’s tall tales about the climate?

No need to shout Willis, but you definitely are right:
REDUCING EMISSIONS KILLS THE POOR
As a rough estimate, the food-for-fuel program (ethanol) has killed some 20 million poor people, many of them children, over the last 8 years it’s been in effect.

Willis Eschenbach: No. It has no evidence of that at all, and the causal chain is quite long. First, you have to show that the warming is anthropogenic. Then you have to show how it affects the ocean.

Then you have to show that that effect is negative.

Not one of these has been done.

FWIW, I agree with Willis. All they have is another (admittedly plausible) mathematical model whose projection into the future has unknown accuracy.

By a casual glance at the tracks, no. But is the shear of wheel on rail strong enough to reorient the surface in some measurable way? I don’t know the answer, but with technology many things are possible. And it is plausible that one could indeed tell which way the train went. Sounds like there is a science fair project in there somewhere…. :-)

And then there’s the temperature gradient; the most recent part of the track would be expected to be warmer….

I am being playful here, of course, but isn’t this the essence of the post: Extracting real meaning, real causes, real data from a complex system?

Actually you can tell which way the train went. The passage heats the rails. They’ll be warmer in the direction the train was headed, having had less time to cool than the rails in the direction from whence it came.

Around here they run trains both ways on the same track, sometimes within minutes of each other (big assembly yard is nearby). Temperature might tell you something, but to really know you need the schedule (the reality underlying the observations). Reminds me of some other processes we know about.

This is about devils sitting on the head of a pin. No amount of mathematical sophistry can prove physical causality which can only be demonstrated inductively.
Causality can be disproved however, by examining time lags between variables: an effect can never precede a cause. This was done for global temperature and CO2 recently by Humlum, Stordahl and Solheim (Global and Planetary Change 2012). After filtering out low frequencies, atmospheric CO2 concentration changes were shown to lag temperature changes on a time scale of years to decades. They demonstrate unequivocally that the former cannot be the cause of the latter.
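The lead–lag argument can be illustrated with a toy lagged cross-correlation scan. This is only a sketch of the general idea, not the Humlum et al. procedure; the data and function names here are invented:

```python
import math

def xcorr_at_lag(x, y, lag):
    """Pearson correlation of x(t) with y(t + lag); a peak at a positive
    lag means y follows x in time."""
    if lag >= 0:
        xs, ys = x[:len(x) - lag], y[lag:]
    else:
        xs, ys = x[-lag:], y[:len(y) + lag]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

# Toy data: y is x delayed by 3 steps, so the scan peaks at lag = 3,
# consistent with "x leads y" (an effect cannot precede its cause).
x = [math.sin(0.3 * t) for t in range(200)]
y = [0.0] * 3 + x[:-3]
best = max(range(-10, 11), key=lambda L: xcorr_at_lag(x, y, L))
```

Note that even a clean lag peak like this shows temporal precedence, not mechanism; it can rule causal directions out, as the comment says, more safely than it can rule them in.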
The fact that this paper appears to have had little effect on the ongoing climate “debate” indicates that belief in human induced climate change is entirely a matter of faith. No amount of evidence or rational argument is going to affect these True Believers. Hence I believe it is time for real scientists, those who respect the scientific method, to stop arguing and start distancing themselves from the ideologues.
Once the lay public start to realize that evidence based science delivers the goods and faith based science does not, funding will surely follow. It may take a while.

“When scientists talk about climate, they’re looking at averages of precipitation, temperature, humidity, sunshine, wind velocity, phenomena such as fog, frost, and hail storms, and other measures of the weather that occur over a long period in a particular place.”

So the title of the paper “Predicting climate effects on Pacific sardine” can be validly read as “Predicting the effects of averages of precipitation, temperature, humidity, sunshine, wind velocity, phenomena such as fog, frost, and hail storms, and other measures of the weather that occur over a long period in a particular place, on Pacific sardine”.

How hard can it be?

“Climate”, like “heat” and “intelligence”, is an example of something everybody understands in ordinary conversation. Problems arise as such terms are “scientified”.

Now a little substitution – “Changes in averages of precipitation, temperature, humidity, sunshine, wind velocity, phenomena such as fog, frost, and hail storms, and other measures of the weather that occur over a long period in a particular place, predicted to cause more extreme weather events.”

Really? Am I expected to believe the average of a physical thing will cause that thing to be different in the future?

I don’t take much notice of astrology, but it’s been around longer than climatology, so it obviously has stood the test of time over thousands of years. What use has all the money and effort poured into climatology been?

Nothing? Great climatological breakthroughs? Towering intellects in the field? None that I am aware of.

It seems the future remains safe from the best efforts of the worshippers to peer behind the veil and see what is to come.

Except for the area of climate change, the AFS seems to be quite careful and thorough in its policy assessments. Why has the AFS, and other renascent subfields related to climate change impacts, so wholeheartedly and unquestioningly adopted the assumption that anthropogenic climate change is having, or will have, adverse impacts, often showing more confidence than the IPCC in terms of attribution of the adverse impacts?

Answer: For the same reason as Royal Society, NAS and virtually all the science bodies that ‘represent’ their members.

Our grandfathers who grew up on the West coast of California knew of sardines in the oceans and didn’t question it. Most of them fought in WWII. And their sons and daughters, who grew up on the West coast of California, knew only anchovies. None of them would probably argue it had something to do with WWII. Years later, the children of those sons and daughters knew nothing about anchovies; sardines had returned.

Why did this happen? Many of us would argue the cause is human CO2. And that is the real question: why not WWII before, but maybe human CO2 now?

Why has the AFS, and other renascent subfields related to climate change impacts, so wholeheartedly and unquestioningly adopted the assumption that anthropogenic climate change is having, or will have, adverse impacts, often showing more confidence than the IPCC in terms of attribution of the adverse impacts? ….. The combination of flawed causal reasoning (mirages), combined with the very substantial uncertainties in climate change and climate impact attribution, makes this a very risky management strategy.

Define “Risk” in this sense, Judith.
Do the decision tree from the point of view of an objective AFS.

As I see it, the upside (that the correlation is viewed as causation) is that AFS paints themselves as victim, potentially benefiting from government largess and comparative advantage.
The downside (that the correlation is viewed as mirage) is “never mind… someone else’s mistake.” But no gain or loss.
The only branch with a loss is if there are real AGW effects and you don’t say anything!

Now do the decision tree from the point of view of a politically active AFS. Same tree, with a potentially higher probability and payoff on the ‘viewed as causation’ branch.
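That tree is easy to make concrete as an expected-value calculation. All probabilities and payoffs below are invented purely to illustrate the structure, not estimates of anything:

```python
def expected_value(branches):
    """branches: list of (probability, payoff) pairs for one decision option."""
    return sum(p * v for p, v in branches)

# Hypothetical numbers: 30% chance the correlation is real causation.
# "Speak up": upside if real, roughly nothing lost if it's a mirage.
speak = expected_value([(0.3, 10.0), (0.7, 0.0)])
# "Stay silent": the only losing branch is a real effect left unmentioned.
silent = expected_value([(0.3, -10.0), (0.7, 0.0)])
```

With any payoffs of this shape, speaking up dominates staying silent, which is the asymmetry the comment is pointing at.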

It seems to me that the proof that the mathematical equations used by Sugihara et al indeed are able to demonstrate causal relationships between particular events or phenomena still requires the passage of time. Their equations are simply mathematical models that they have tested in hind-casting. As with any model, this is not proof that they have skill in forecasting.

In this fact situation I would question any mathematical approach that wouldn’t at a minimum start with even odds for a return of the anchovies, inasmuch as we don’t really understand why they left, or why the sardines left before that. We know they did leave, and we know that we don’t know whether the sardines replacing the anchovies, and vice versa, are related; but we do know that’s what happened. Probably the best we can do is know we don’t know much.

We do know why the PDO+ENSO is so significant in the tale of the sardine. It is the cold and nutrient rich upwelling. One state of the PDO favours sardine regimes and the other anchovies. Anchovies may be eaten raw as an aphrodisiac – or on a pizza. Either way they are delicious and nutritious.

‘Sardines flourish in the half of the cycle when waters are warmer than average; Chavez dubs this period the ‘sardine regime’. The half with cooler waters, when anchovies (Engraulis ringens) become the dominant small fish in the Pacific, he names the ‘anchovy regime’.

During the sardine regime, waters are warmer off the coast of California and Peru and along the Equator, but cooler near Hawaii and off the coast of Japan. The air is also warmer and carbon dioxide levels are higher.

The pattern is exactly the opposite during the anchovy regime. Then the cooler waters off California and Peru also support more salmon, rockfish, seabirds and plankton.

You were inviting statistical guidance re causality analysis, and my comment is not that. But I hope the following recollections will help add to the motivation for seeking such guidance.

I can still recall being brought up short by John Green (J.S.A. Green) stirring up some thinking in a group of students looking at isobars and computing geostrophic winds by asking us whether the winds were causing the pressure patterns, and not the other way round. That was in the spirit of Frank Ludlam’s department, where it took place in the mid 1970s. A note by Malcolm Walker in the newsletter of the Royal Meteorological Society’s History Group (Newsletter 2, 2012, pp. 6-7) recalls something similar. Here is an extract:

‘We have been told the wet weather this year has been caused by the jet stream being farther south than usual. Is the word ‘caused’ correct? And what is meant by ‘the jet stream’?

When I was a research student in the Meteorology Department of Imperial College in the 1960s, my fellow students and I were told by staff, notably Frank Ludlam, my supervisor, that it was unwise in meteorology to say that A caused B. It was safer, they advised, to say that A was ‘consistent’ with B.

Ludlam was especially keen to make this point in connection with mid-latitude jet streams, saying that their existence was no more than ‘consistent’ with the existence and development of depressions.

The jet streams of middle latitudes, he pointed out, result from conversion of potential to kinetic energy within large-scale slope convection flows from the subtropics to higher latitudes. Depression formation, he said, ‘accompanied’ that conversion. With J.S.A. Green and J.F.R. McIlveen, he set this out very clearly in a classic and now sadly overlooked paper published in 1966 in the Quarterly Journal of the RMS (Vol. 92, pages 210 to 219).’

“The book considers relationships between climate changes and fish productivity of oceanic ecosystems. Long-term time series of various climatic indices, dynamics of phyto- and zooplankton and variation of commercial fish populations in the most productive oceanic areas are analyzed. Comparison of climate index fluctuations and populations of major commercial species for the last 1500 years indicates a coherent character of climate fluctuations and fish production dynamics. A simple stochastic model is suggested that makes it possible to predict trends of basic climatic indices and populations of some commercial fish species for several decades ahead. The approach based on the cyclic character of both climate and marine biota changes makes it possible to improve harvesting of commercial fish stocks depending on the phase (ascending or descending) of the long-term cycle of the fish population. In addition, this approach is helpful for making decisions on long-term investments in fishing fleet, enterprises, installations, etc. The results obtained also elucidate the old discussion: which factor is more influential on the long-term fluctuations of major commercial stocks, climate or commercial fisheries?”

As can be seen from the picture it dates to 1336. Pilchard is another name for sardine. It disappears and returns according to the climate, and was present in Britain during the MWP and disappeared in the LIA. There was quite a pilchard industry in south Devon in the warmer centuries, and such places as ‘Pilchard Point’, where lookouts searched the ocean for the fish, can be visited to this day.

If anyone can find an original link to my article I would be grateful.

Tony, I’m sure I’m not telling you anything new, but I am frequently reminded of the lovely spot in Defoe when he describes coal seams meeting the sea, where salt is made, to preserve the blooms of herring. Worth looking up on your whole tour if you’ve not seen it before.
===========

I wonder if they consider the role of salmon hatcheries when considering feeder fish populations. Such fish now outnumber wild salmon, and the population of hatchery fish has built onto the existing and growing population of wild salmon. In any event, Pacific sardines are making a fine comeback in a well understood cycle of population variability. Plankton are not doing so well.

Less about fish and more about the mathematics. Start from the premise that both fisheries and weather/climate are non-linear dynamic systems that have multiple variables. More than two suffices for the argument.
Granger causation asserts that if, with some lag, x statistically reliably precedes y (since causes must precede effects), then in a sense x can be said to cause y because it predicts y. This is expanded to x and y ‘causing’ z using F tests, but is murkier. But this is not true causality, since some other ‘causative’ variable can be lag correlated to all. In general correlation, no matter how fancy or lagged, can never establish true causation.
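The Granger criterion just described — x ‘causes’ y if lagged x improves the prediction of y beyond y’s own lags — can be illustrated with a bare residual-sum-of-squares comparison. This is a toy sketch, not the full F-test machinery, and every function name here is mine:

```python
import math

def solve(A, b):
    """Solve the small linear system A x = b by Gaussian elimination."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))  # partial pivoting
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def rss(preds, y):
    """Residual sum of squares of an OLS fit of y on preds plus an intercept."""
    X = [(1.0,) + tuple(p) for p in preds]
    k = len(X[0])
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    beta = solve(xtx, xty)
    return sum((yi - sum(b * xi for b, xi in zip(beta, r))) ** 2
               for r, yi in zip(X, y))

# Toy system where y really is driven by lagged x: adding x's lag to the
# regression shrinks the residual, the Granger notion of "x causes y".
x = [math.sin(0.2 * t) for t in range(100)]
y = [0.0] + [0.5 * x[t - 1] for t in range(1, 100)]
target = y[1:]
own = [(y[t - 1],) for t in range(1, 100)]
both = [(y[t - 1], x[t - 1]) for t in range(1, 100)]
reduction = rss(own, target) - rss(both, target)
```

The paragraph’s caveat stands even when this works perfectly: a large `reduction` demonstrates predictive precedence, not mechanism.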

What this paper does is a variation on the idea without the statistics, simply using ‘state space’. That is a rather old data mining idea, and a number of algorithms exist. We used to use them at Mot semiconductor to ‘find’ causes of quality problems in the fabs. Often, it even worked. Increased x was associated with decreased yield, so decrease x and increase yield. But note the subtle shift to a linear non-dynamic system.
In any true non-linear dynamic system, sensitive dependence on initial conditions (the butterfly effect) means you can never sufficiently specify the state space to make accurate inferences for interesting periods of time into the future. While it is true that the state of all other variables will determine nearby outcomes, that is not true for the long term. So while the method may have some interest for weather forecasting, it is not going to be very helpful for long term climate.
Nor did the method produce convincing evidence concerning long term oscillations in sardines and anchovies except to reproduce what was previously known about those oscillations.
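For readers following along, the state-space cross-mapping under discussion can be caricatured in a few lines: embed one series, then predict the other by averaging its values at the times of nearest neighbors on the shadow manifold. This is a deliberately crude sketch of the idea, not the authors’ simplex-projection algorithm, and all the names are mine:

```python
def cross_map(X, Y, E=2, tau=1, k=3):
    """Predict Y(t) from X's shadow manifold by averaging Y over the times
    of x(t)'s k nearest neighbors; return the correlation of predictions
    with the actual Y (the cross-map 'skill')."""
    start = (E - 1) * tau
    times = list(range(start, len(X)))
    pts = [tuple(X[t - j * tau] for j in range(E)) for t in times]
    preds, actual = [], []
    for i, p in enumerate(pts):
        dists = sorted((sum((a - b) ** 2 for a, b in zip(p, q)), j)
                       for j, q in enumerate(pts) if j != i)
        nbrs = [times[j] for _, j in dists[:k]]
        preds.append(sum(Y[t] for t in nbrs) / k)
        actual.append(Y[times[i]])
    n = len(preds)
    mp, ma = sum(preds) / n, sum(actual) / n
    cov = sum((a - mp) * (b - ma) for a, b in zip(preds, actual))
    var = (sum((a - mp) ** 2 for a in preds) *
           sum((b - ma) ** 2 for b in actual)) ** 0.5
    return cov / var

# Chaotic logistic-map series; Y is a smooth function of X, so states that
# are close on X's shadow manifold carry similar Y values and skill is high.
X = [0.4]
for _ in range(199):
    X.append(3.9 * X[-1] * (1.0 - X[-1]))
Y = [v * v for v in X]
skill = cross_map(X, Y)
```

High skill here only says that Y is recoverable from X’s reconstructed states — which is exactly the sense of “causality” being disputed in the comments above.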

Yes, predicting something is not the same as causing it. Inferring causality from predictive relationships always requires a number of assumed, untested (maintained) hypotheses.

Hume identified the universal maintained hypothesis: “The future will resemble the past.” Even experimental results rely on that one; there is no logical reason why any physical law need be stable, but of course that’s how we all bet.

More indirect methods of inferring causality require more, and more elaborate, untested (maintained) hypotheses. Confirmatory statistical inference always assumes that the real world is nested within, or encompassed by, some broader but well-specified theoretical model. That the real world is a special case of the encompassing theoretical framework always remains a maintained hypothesis. Were this maintained hypothesis false, then causal inferences drawn under it would be invalid. The simplest way in which this could happen would be via an omitted variable, but false confidence in the inter-temporal or inter-regime stability of relationships among variables could also do the job. So could an incorrect understanding of what patterns a given causal data-generating process would generate.

What does ‘causality’ mean in a complex, non-linear system that can cycle between numerous similar states in both periodic and aperiodic fashion? Currently earth’s climate has been shown to cycle between long periods of glaciation, interspersed somewhat regularly by short periods of warmer temperatures. Some speculate the current climate was caused by the Isthmus of Panama closing, re-routing the ocean circulation. That might reasonably be seen as a cause. Small fluctuations in air temperature during a short, unstable interglacial period are not a good candidate as a cause or effect of anything.

1) Particularly in light of the Scripps research, do you think that the AFS has a strong case that anthropogenic global warming is having an overall adverse impact on fisheries?

Since you ask, Prof. Curry, I think they do. I haven’t actually read the entire policy paper, but I read the first few pages and scanned through the rest. Their position is analogous to why ancient marginal farmers were so conservative in their agricultural processes: any change was very probably going to be for the worse.

Reading through their paper, we see a strong emphasis on change, replacement of one species with another. Even when the overall productivity in terms of harvestable biomass increases, changes to species or even average size can make major investments in equipment, processes, and training useless. Or at least degrade their usefulness pending substantial and expensive changes.

I totally agree that their reliance on obsolete scientific methods used by other people interferes with the value of their position on CO2. However, my own opinion is that while increasing CO2 is likely to have an unpredictable effect on climate, it will also have an unpredictable effect on the complex marine ecosystems that fisheries depend on. That effect, IMO, represents a much larger and more important risk than Greenhouse-mediated climate change.

Every “species” of life in the ocean (and on land) is adapted to the levels of CO2 that have prevailed for the last 10 million years or so. Even when they appear identical to fossil species, the complex interaction of internal and external signals and other enzymes is fully adapted to current levels. Rising CO2 levels bid fair to extend far beyond the range to which any of these life forms are adapted. Changes to genetically determined behavior will be “random”, depending on accidents of prior mutations involved in adaptation to current conditions.

Every one of those “random” changes has the potential to produce a “butterfly” effect in the complex ecosystems involved. Given the number of such behavioral changes, the chance that ecosystems will go through major sudden changes is very high in every case. The nature and extent of those changes will probably have a drastic effect on fisheries highly dependent on specialized investments in technology, process, and training.

I’m not proposing any sort of drastic reduction of fossil carbon-based energy here. Certainly investments in research and development of new technology would be far more productive. And perhaps rationalization of the intellectual property laws so as to provide stronger encouragement of private innovation, while more strongly discouraging sequestering of potentially helpful technology. There are many better approaches than forcing a rise in carbon prices.

Here’s how it works Kim. Every population in every ecosystem is constantly adapting. This process includes all sorts of changes to a single very complex, powerful, analog computer. For roughly the last 10 million years, every one of those adaptations has been made in an environment where the general atmospheric pCO2 was roughly equivalent to today’s.

As the world’s pCO2 climbs out of the range where these adaptations were made, quite a few parts of the system will start to work differently, in ways that are randomly different depending on how pCO2 was incorporated into the particular cellular reaction. It’s as though you started injecting a dozen different random psychoactive drugs into randomly chosen parts of a human (or other mammal) brain: overall reaction patterns will change in unpredictable and highly non-linear ways.

I’m not saying the ecosystems won’t adapt. Of course they will. But they’re probably going to go through a bunch of sudden large-scale species replacement episodes. Much more dramatic than the sort seen from small changes of water temp, etc. Consider the risk to fisheries: we don’t know whether the outcome of those changes will even include anything bigger than a fruit fly, at least in large numbers. That’s pretty extreme, however there’s also the problem of sudden population explosions among microbial organisms that produce poisons that (some) fish can tolerate and accumulate in their bodies, but humans can’t. All in addition to much greater levels of change to harvestable species and size.

Assuming there’s sufficient harvestable biomass, human civilization will adapt to the situation, but that adaptation may not include a viable fishing industry. And even if it does, it’s quite likely to render useless a great deal of highly leveraged investment.

The fishing industry has good reason to be concerned, although IMO more about ecological effects of increased pCO2 than greenhouse effect-moderated climate change.

AK,
almost everything in the modern era has an adverse effect on fisheries. Why pick on CO2? Look at Monterey sardines from 1900 to the collapse in the 1950’s. Also the cod in the north Atlantic, or the salmon in northern California. Blame the factory trawlers, scraping the bottom of the ocean with drag nets, or miles-long lines catching everything that eats bait. Lots of work in the fisheries management world, and hard to isolate variables. A nice thing is the ocean preserves.
Scott

AK, when you state that
“Every “species” of life in the ocean (and on land) is adapted to the levels of CO2 that have prevailed for the last 10 million years or so. Even when they appear identical to fossil species, the complex interaction of internal and external signals and other enzymes is fully adapted to current levels.”

Do you have any idea what you are talking about? The total amount of CO2 in the atmosphere, 390 ppm, is about 1/46 the total buffered CO2 in the ocean.
Now the top of the ocean, where we have abundant primary producers, is denuded of CO2 and CO2 buffers. That’s what photosynthesis does: it fixes CO2. The total carbon in the atmosphere is 800 billion tons of carbon; the aerobic, upper living layer of the ocean has about 1000 billion tons of carbon. Every year 100% of all the carbon in the top 10m of the Oceans and seas is fixed. It is replaced by the diffusion from atmospheric CO2 and CO2/Carbonates from below. The fluctuations in CO2/Carbonates seen by living marine photosynthetic organisms are huge.
CO2 isn’t a problem biologically, for any species in the biosphere, until we have >1,000 ppm in the atmosphere or more.

Every year 100% of all the carbon in the top 10m of the Oceans and seas is fixed. It is replaced by the diffusion from atmospheric CO2 and CO2/Carbonates from below. The fluctuations in CO2/Carbonates seen by living marine photosynthetic organisms are huge.

So what is the replacement time for CO2 removed from the upper 10 meters, and is it measured in hours, days, or weeks? Can you point me to peer-reviewed research backing up your claim? What are the levels of fluctuation relative to the average? Are they 1%? 10%? Certainly not 100%! Or even 30%. And you’ve missed the fact that a lot of the material produced is oxidized right back into CO2 by marine heterotrophs, as well as by autotrophs at night.

Basically the balance between production and oxidation produces drawdown or excess CO2, which is kept fluctuating around the atmospheric value (or one in equilibrium with it) through diffusion. When the atmospheric pCO2 is higher, the values will fluctuate around a higher point. Plankton species with reactions tuned to current values will be reacting to a different situation than they’re adapted for. Differences in those reactions will result in substantial changes to the ecosystem, the point I’m trying to make.

My understanding of marine ecosystems includes an expectation that the actual fluctuations are stabilized primarily by atmospheric pCO2, based on a number of peer-reviewed papers I’ve read or scanned. Since you claim otherwise, why don’t you provide some peer-reviewed references for your claims about fluctuations.

‘”This study is important for identifying the complexity of the ocean acidification problem around the globe,” said Scripps marine biologist Jennifer Smith. “Our data show such huge variability in seawater pH both within and across marine ecosystems making global predictions of the impacts of ocean acidification a big challenge. Some ecosystems such as coral reefs experience a daily range in pH that exceeds the predicted decrease in pH over the next century. While these data suggest that marine organisms may be more adapted to fluctuations in pH than previously thought much more research is needed to determine how individual species will respond over time.’ http://scrippsnews.ucsd.edu/Releases/?releaseID=1234

The HCO3- increase is balanced ultimately by dissolution of calcium. Calcium is supersaturated in sea water, and the discussion is that some forms may become less than saturated by the end of the century in the Southern Ocean. Although there are lots of sources of calcium in the oceans.

“Basically the balance between production and oxidation produces drawdown or excess CO2 which is kept fluctuating around the atmospheric value (or one in equilibrium with it) through diffusion.”

You surprise me. When you use the term ‘equilibrium’ it is obvious that you do not understand the system you are describing. One might as well say the oxygen in the atmosphere is in ‘equilibrium’ with sunshine.
If you cannot understand how concentration gradients are made and maintained by biological systems further discourse is pointless.

When you use the term ‘equilibrium’ it is obvious that you do not understand the system you are describing.

The fact that “equilibrium” is a risky word in studies of non-linear systems doesn’t mean any use of it is in error. I won’t claim complete understanding of the marine ecosystems, but then anybody making such claims is delusional. There’s a great deal that needs research before anybody can claim any real understanding of marine ecosystems.

When it comes to CO2 production, transport, and uptake in the upper ocean, the equilibrium value of pCO2 and carbonate and bicarbonate concentrations with the atmospheric pCO2 is very important, even if it’s never achieved. It is the difference (ratio actually, the difference of the logs) between the equilibrium and actual values that drives transport across the air/water interface.
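Taking the comment’s description literally, the driver can be sketched as a log-ratio. This is a toy illustration of sign and direction only; `k` and the functional form are stand-ins of mine, not a real gas-exchange parameterization:

```python
import math

def co2_transport(pco2_water, pco2_air, k=1.0):
    """Toy air-sea transport term driven by the difference of the logs of
    the actual (water) and equilibrium (air) pCO2. Positive = outgassing."""
    return k * (math.log(pco2_water) - math.log(pco2_air))

# Supersaturated water outgasses; undersaturated water takes CO2 up;
# at equilibrium the driver vanishes even though equilibrium is never held.
out = co2_transport(450.0, 400.0)
uptake = co2_transport(350.0, 400.0)
```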

I don’t have time for a complete discussion of the mechanics of productivity and the CO2 balance in marine ecosystems, and in any event it’s irrelevant to the point I’m making.

If you cannot understand how concentration gradients are made and maintained by biological systems further discourse is pointless.

This seems to me like a cop-out, since you clearly aren’t prepared to come up with peer-reviewed references to back your claims. A bit of searching on my part discovered Schulz and Riebesell (2012), from which I quote the following:

In seawater, diurnal fluctuations in pH are usually considerably smaller, ranging from 0.1 units in spring in the Bay of Calvi in the Mediterranean (Frankignoulle and Bouquegneau 1990) to 0.15 in autumn in the Bay of Bengal in the Indian Ocean (Subramanian and Mahadevan 1999) and up to 0.5 in a Kelp forest close to the Kerguelen Archipelago in the Southern Ocean in austral summer (Delille et al. 2009).

Similarly, seasonal variations differ from region to region with highest pH variability in low-buffered eutrophic systems such as lakes or the Baltic Sea with up to 3.2 and 0.7 pH units, respectively (Maberly 1996; Thomsen et al. 2010). Lowest seasonal variability is found in well-buffered oligotrophic open ocean waters with an average of about 0.022 at HOT, the Hawaii Ocean Time Series, and 0.055 pH units at ESTOC, the European Station for Time Series in the Ocean [adapted from Dore et al. (2009) and González-Dávila and Santana-Casiano (2011), respectively].

The impact of air/sea gas exchange on water column DIC inventories, and hence changes in carbonate chemistry speciation, during the diurnal cycle was calculated to be small in comparison with corresponding effects of biological activity, ranging from about 1 to 4 % (see “Appendix”).

Despite the fact that photosynthesis and respiration led to similar changes in DIC during daytime, ranging from 49.5 to 60.3 μmol kg-1 (compare Fig. 5) in all five CO2 treatments of this experiment, diurnal changes in pH were found to be related to actual in situ CO2 concentrations, being more pronounced at high than at low levels. Changes in free proton concentrations, [H+]F, were even larger, being almost three times as high at the highest (675 μatm) compared to the lowest (310 μatm) CO2 treatment (compare Fig. 4). As stated above, this is the result of lower seawater buffering capacity (Revelle and Suess 1957), meaning that in a high CO2 world, for the same amount of primary production, respiration or calcification, associated changes in seawater carbonate chemistry speciation such as pH and CO2 will be significantly amplified [also compare Frankignoulle (1994) and Egleston et al. (2010)]. {my bold}
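The buffering effect in that bolded passage can be reproduced with a toy carbonate-system calculation. The sketch below is mine, not the authors': it uses assumed dissociation constants and concentrations (rough seawater values near 25 °C), solves carbonate alkalinity for [H+] by bisection, and shows that the same biological DIC drawdown shifts [H+] two to three times more in the high-CO2 water.

```python
# Toy carbonate system: only carbonate alkalinity, TA = [HCO3-] + 2[CO3--].
# K1, K2, alkalinity, DIC and the drawdown are all rough assumed values.
K1 = 1.4e-6   # first dissociation constant of carbonic acid (assumed)
K2 = 1.2e-9   # second dissociation constant (assumed)

def h_from_dic_ta(dic, ta):
    """Solve TA(h) = ta for [H+] by bisection; TA falls as [H+] rises."""
    def ta_of_h(h):
        denom = h * h + K1 * h + K1 * K2
        return dic * (K1 * h + 2.0 * K1 * K2) / denom
    lo, hi = 1e-10, 1e-6          # bracket roughly pH 10 .. pH 6
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if ta_of_h(mid) > ta:     # alkalinity too high -> [H+] too low
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

ta = 2.3e-3          # alkalinity, mol/kg (assumed, held fixed)
drawdown = 5.0e-5    # biological DIC uptake, mol/kg (assumed)

results = {}
for label, dic in (("low-CO2", 2.00e-3), ("high-CO2", 2.15e-3)):
    h0 = h_from_dic_ta(dic, ta)
    h1 = h_from_dic_ta(dic - drawdown, ta)
    results[label] = h0 - h1
    print(label, "delta [H+] for the same drawdown:", h0 - h1)
```

With these assumed numbers the high-CO2 case shows close to three times the [H+] excursion, which is the amplification the quoted paper attributes to reduced buffering capacity.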

I don’t have time for more research, and anyway I agree with you, albeit for the opposite reason: “further discourse is pointless.”

‘The effect of Ocean Acidification (OA) on marine biota is quasi-predictable at best. While perturbation studies, in the form of incubations under elevated pCO2, reveal sensitivities and responses of individual species, one missing link in the OA story results from a chronic lack of pH data specific to a given species’ natural habitat. Here, we present a compilation of continuous, high-resolution time series of upper ocean pH, collected using autonomous sensors, over a variety of ecosystems ranging from polar to tropical, open-ocean to coastal, kelp forest to coral reef. These observations reveal a continuum of month-long pH variability with standard deviations from 0.004 to 0.277 and ranges spanning 0.024 to 1.430 pH units. The nature of the observed variability was also highly site-dependent, with characteristic diel, semi-diurnal, and stochastic patterns of varying amplitudes. These biome-specific pH signatures disclose current levels of exposure to both high and low dissolved CO2, often demonstrating that resident organisms are already experiencing pH regimes that are not predicted until 2100.’

The HCO3- increase is balanced ultimately by dissolution of calcium. Calcium is supersaturated in sea water – and the discussion is that some forms may become less than saturated by the end of the century in the Southern Ocean – although there are lots of sources of calcium in the oceans.

Actually, of course, calcium can’t be “supersaturated in sea water”; no ion can. What’s “supersaturated in sea water” is aragonite, a form of calcium carbonate [Doney et al. (2009)]. Pardon me for being picky, but given the way many people here pick up on short-hand phrases to indulge in ignorant dialectic, it’s important to be precise. I’m fully familiar with the chemistry of ocean acidification, as well as the issues of “adaptation” of marine organisms to increased pCO2.
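To illustrate the distinction being drawn, the saturation state is a property of the mineral, Omega = [Ca2+][CO3 2-]/Ksp, not of the calcium ion alone. A toy calculation follows; the solubility product and the two carbonate-ion concentrations are assumed round numbers for illustration, not measurements from either referenced paper.

```python
# Saturation state of aragonite. Omega > 1 means supersaturated (shells are
# thermodynamically stable); Omega < 1 means the mineral tends to dissolve.
ksp_aragonite = 6.5e-7   # stoichiometric solubility product, mol^2 kg^-2 (assumed)
ca = 1.03e-2             # seawater calcium concentration, mol/kg (assumed)

omegas = {}
for label, co3 in (("present surface water", 2.0e-4),
                   ("high-CO2 Southern Ocean", 5.0e-5)):
    omega = ca * co3 / ksp_aragonite
    omegas[label] = omega
    print(label, "Omega =", round(omega, 2),
          "supersaturated" if omega > 1.0 else "undersaturated")
```

Because [Ca2+] is nearly conservative in seawater, it is the drop in carbonate ion concentration under rising pCO2 that pushes Omega toward undersaturation.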

Thank you for the references you provided. I’d already read Hofmann et al. (2011), but Giorgio and Duarte (2002) is useful. I’m still going over it, but on first inspection it appears to support my points. Ocean acidification may or may not be a problem, but the effects of increased pCO2 on marine ecosystems in general are (IMO) a greater risk. The “adaptation” referred to in both articles involves changes to the behavior of various populations, or more likely replacement, either by different species or different populations of the same species with (slightly) different behavior. These ecosystem adaptations will have unpredictable effects on the upper parts of the “food pyramid”, which could include replacement of large (i.e. harvestable) “species” with much smaller ones that occupy similar ecological niches. Even if this doesn’t happen, replacement of one large “species” by another could produce serious problems for fishing industries with highly leveraged investments in technology, process, and training.

I note that in both articles you referenced, large variations in pH or pCO2 are pretty much limited to localized environments with connections to nearby shores or shallow bottoms with active benthic communities. The vast majority of the world’s ocean is probably best represented by item A in Hofmann et al. (2011), which shows almost no variation in pH, and likely pCO2. Of course, much of the open ocean has very little biomass compared to more productive coastal and upwelling regions, but we can’t expect the differences to “cancel out”. Variability in such systems has actual meanings, in terms of external conditions (e.g. changes to patterns of upwelling or wind-borne dust), and local populations are probably adapted to changing their behavior in response to such variation. Assuming (as I do pending contrary research) that such variation is ultimately “tethered” to atmospheric pCO2, increases to that pCO2 will change the “meaning” of various levels involved in the variation, which means local life forms will exhibit behavior appropriate for lower levels. These changes will (potentially) interact with the behavior of other members of the eco-community, in unpredictable and non-linear ways.

I’ll admit I can’t provide references to back my opinion, as AFAIK the issue hasn’t been studied. That’s one reason I occasionally bring it up here, in case it would catch the attention of somebody better placed to perform such research.

The ocean’s surface waters are supersaturated with calcite and aragonite – I did say forms of calcium – and may become undersaturated in the Southern Ocean by 2100.

The diurnal variations happen with respiration – in places where there is abundant life. This does include the open oceans where currents and nutrients are present: much of the tropical and subtropical Pacific in the cool ENSO+PDO mode, coastal zones, and river discharges. Nutrient recycling in open ocean photic zones is very efficient, and plumes extend for thousands of kilometres in places. It is this that makes these areas sources of CO2 rather than sinks.

As an environmental scientist and a climate catastrophist in the sense of René Thom, risks of non-linear change would be my concern. But it seems possible to overstate the case.

The point remains that large pH fluctuations occur where there are concentrations of life – and that the problem of undersaturation of calcite and aragonite is much longer term.

As an environmental scientist and a climate catastrophist in the sense of René Thom, risks of non-linear change would be my concern. But it seems possible to overstate the case.

You are, of course, entitled to your opinion. Mine is that the risk, while unquantifiable, is probably greater than from greenhouse-mediated climate change. This is based on my own (admittedly amateur) study of the potentials for cellular intelligence and the nature of large complex non-linear systems with very intelligent agents.

The point remains that large pH fluctuations occur where there are concentrations of life – and that the problem of undersaturation of calcite and aragonite is much longer term.

Well, whichever way you mean “longer term” I can’t agree with you.

If you are referring to the mythical “centuries-long” lifetime of CO2 in the atmosphere, I had this to say here a while back:

With the right approach, IMO, we could start a process today that would probably result in the ability to draw down CO2 within a 5-10 year active time, using (bio-)technology that might mature within 20 years.

I put some numbers behind it here. The real problems are political and economic.

If you are referring to the notion that we have a long time before we have to worry, I would guess that’s based on the notion that aragonite has to become under-saturated before there’s a problem. That may be true, but not necessarily. Organisms that use active transporters might respond to reduced super-saturation with reduced shell deposition, which in turn could allow their corpses to float until the organic material has been oxidized, where before they sank quickly carrying both shell and organic carbon.

While it’s possible that sufficient research has been done to demonstrate that potential issue to be a non-problem, I doubt it. In fact, I’m not even sure the resources exist to perform that research yet. Certainly simplistic models analogous to those used to “prove” that the greenhouse effect will produce “global warming” could hardly be considered conclusive.

Why should I sing with the choir? I’m more of a “voice crying alone in the wilderness” type.

When lots of people with formal credentials are publishing papers about the effects of CO2 on the emergent behavior of eco-communities (if ever), I’ll be talking about something else. Such as perhaps the fact that the greenhouse effect may not be the most important mechanism by which increased CO2 can impact the climate. AFAIK no research has been done on the effect of changed atmospheric pCO2 on evaporation/condensation of cloud droplets vs. snow. Maybe I just didn’t know how/where to look, or maybe nobody’s thought of it.

I also don’t say much about the huge political risks associated with artificial increases to energy prices/costs, although I usually try to include enough that nobody will think I’m advocating them. Plenty of other people, with more impressive credentials, are blowing that trumpet.

Recipe for garum: Start with the desired type and part of the fish—e.g., mostly small fishes including intestines—macerate with salt and allow to rot (cure: ferment, liquefy) in a covered pot (or ancient vessel known as an amphora) in full sunlight for about 3 months in dry, warm weather (location of fermentation should be on a hill in a deserted area of an island remote from sensitive neighbors). Draw the clear liquid off the top of the pot for use with other foods for that umami flavor (glutamic acid). An herb-infused decoction of the remaining sludge may be used as a nutritious flavor-enhancing sauce. Or, just buy some monosodium glutamate (knowing that much 2000 years ago could have made you the richest person in Rome — with your umami flavor enhancing salt — and yet today it’s probably not long before “Bloombergs” ban it from NY restaurants).