Google’s Goal: Renewable Energy Cheaper than Coal

Mountain View, Calif. (November 27, 2007) – Google (NASDAQ: GOOG) today announced a new strategic initiative to develop electricity from renewable energy sources that will be cheaper than electricity produced from coal. The newly created initiative, known as RE<C, will focus initially on advanced solar thermal power, wind power technologies, enhanced geothermal systems and other potential breakthrough technologies. RE<C is hiring engineers and energy experts to lead its research and development work, which will begin with a significant effort on solar thermal technology, and will also investigate enhanced geothermal systems and other areas. In 2008, Google expects to spend tens of millions on research and development and related investments in renewable energy. As part of its capital planning process, the company also anticipates investing hundreds of millions of dollars in breakthrough renewable energy projects which generate positive returns.

But in 2011, Google shut down the program. I never heard why. Recently two engineers involved in the project have given a good explanation:

But the short version is this. They couldn’t find a way to accomplish their goal: producing a gigawatt of renewable power more cheaply than a coal-fired plant — and in years, not decades.

And since then, they’ve been reflecting on their failure and they’ve realized something even more sobering. Even if they’d been able to realize their best-case scenario — a 55% carbon emissions cut by 2050 — it would not bring atmospheric CO2 back below 350 ppm during this century.

This is not surprising to me.

What would we need to accomplish this? They say two things. First, a cheap, dispatchable, distributed power source:

Consider an average U.S. coal or natural gas plant that has been in service for decades; its cost of electricity generation is about 4 to 6 U.S. cents per kilowatt-hour. Now imagine what it would take for the utility company that owns that plant to decide to shutter it and build a replacement plant using a zero-carbon energy source. The owner would have to factor in the capital investment for construction and continued costs of operation and maintenance—and still make a profit while generating electricity for less than $0.04/kWh to $0.06/kWh.

That’s a tough target to meet. But that’s not the whole story. Although the electricity from a giant coal plant is physically indistinguishable from the electricity from a rooftop solar panel, the value of generated electricity varies. In the marketplace, utility companies pay different prices for electricity, depending on how easily it can be supplied to reliably meet local demand.

“Dispatchable” power, which can be ramped up and down quickly, fetches the highest market price. Distributed power, generated close to the electricity meter, can also be worth more, as it avoids the costs and losses associated with transmission and distribution. Residential customers in the contiguous United States pay from $0.09/kWh to $0.20/kWh, a significant portion of which pays for transmission and distribution costs. And here we see an opportunity for change. A distributed, dispatchable power source could prompt a switchover if it could undercut those end-user prices, selling electricity for less than $0.09/kWh to $0.20/kWh in local marketplaces. At such prices, the zero-carbon system would simply be the thrifty choice.

But “dispatchable”, they say, means “not solar”.
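The price gap quoted above can be made concrete with a back-of-envelope sketch. The dollar figures are the ones from the passage; the rest is simple arithmetic showing the extra margin a distributed, dispatchable zero-carbon source would enjoy by competing against retail rather than wholesale prices:

```python
# Back-of-envelope: headroom for a distributed, dispatchable zero-carbon
# source that competes with retail rather than wholesale electricity.
# Price ranges are the ones quoted in the passage above.

wholesale = (0.04, 0.06)   # $/kWh, existing coal/gas generation cost
retail    = (0.09, 0.20)   # $/kWh, residential prices, contiguous US

def headroom(wholesale_price, retail_price):
    """Extra margin per kWh for a distributed source that displaces
    retail electricity instead of wholesale generation."""
    return retail_price - wholesale_price

lo = headroom(wholesale[1], retail[0])   # worst case: 0.09 - 0.06
hi = headroom(wholesale[0], retail[1])   # best case:  0.20 - 0.04

print(f"Headroom: ${lo:.2f}/kWh to ${hi:.2f}/kWh")
# → Headroom: $0.03/kWh to $0.16/kWh
```

Even the worst case leaves a few cents per kWh of room that a pure wholesale competitor would not have, which is the engineers' point about distributed generation.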

Second, a lot of carbon sequestration:

While this energy revolution is taking place, another field needs to progress as well. As Hansen has shown, if all power plants and industrial facilities switch over to zero-carbon energy sources right now, we’ll still be left with a ruinous amount of CO2 in the atmosphere. It would take centuries for atmospheric levels to return to normal, which means centuries of warming and instability. To bring levels down below the safety threshold, Hansen’s models show that we must not only cease emitting CO2 as soon as possible but also actively remove the gas from the air and store the carbon in a stable form. Hansen suggests reforestation as a carbon sink. We’re all for more trees, and we also exhort scientists and engineers to seek disruptive technologies in carbon storage.

How to achieve these two goals? They say government and energy businesses should spend 10% of employee time on “strange new ideas that have the potential to be truly disruptive”.


100 Responses to Why Google Gave Up

Well, this is quite sad and terrible news! Without the giant tech enterprises, expecting a solitary mind or small group to achieve a breakthrough in energy is even more unlikely… And the graphs are scary… How many denialists will remain blind?

John, whenever I hear people speak of carbon capture, I immediately try to envision the scale of that undertaking, but I never succeed.

Presuming a reasonably efficient, scalable and effective process that in fact sequesters carbon with sufficient durability, what is the scale of material to be processed and the scale of energy expended to do so? Will those inherent system costs (even setting aside the capital costs that would otherwise have been available for different climate-friendly measures) ever make capture a viable strategy?

With your flair for making numbers meaningful, could you someday explore that problem?

GB: Agriculture is the world’s biggest industry; we should take advantage of it. That’s what gave Bob Metzger and me the idea: collect farm waste and sink it to the bottom of the ocean, whence it shall not return for 1000 years. Cheap, easy, doable right now.

JB: But we have to think about what’ll happen if we dump all that stuff into the ocean, right? After all, the USA alone creates half a gigatonne of crop residues each year, and world-wide it’s ten times that. I’m getting these numbers from your papers:

Since we’re burning over 10 gigatonnes of carbon each year, burying 5 gigatonnes of crop waste is just enough to make a serious dent in our carbon footprint. But what’ll that much junk do at the bottom of the ocean?
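A rough Python tally of the exchange above. The tonnages are the quoted ones; the 45% carbon fraction of dry crop residue is my assumption, not part of the original discussion:

```python
# Rough scale check of the numbers in the exchange above.  The tonnages
# are the quoted ones; the 45% carbon content of dry crop residue is an
# assumption of mine, not the commenters'.

us_residue_gt    = 0.5                     # Gt/yr crop residues, USA
world_residue_gt = 10 * us_residue_gt      # "world-wide it's ten times that"
carbon_fraction  = 0.45                    # assumed C content of dry residue
emissions_gt_c   = 10.0                    # Gt of carbon burned per year

buried_gt_c = world_residue_gt * carbon_fraction
print(f"Buried: ~{buried_gt_c:.2f} Gt C/yr, "
      f"~{100 * buried_gt_c / emissions_gt_c:.0f}% of emissions")
```

On these assumptions burying all the world's residue locks away roughly a fifth of annual carbon emissions, somewhat less than the headline comparison of 5 Gt of waste against 10 Gt of carbon suggests, but still a serious dent.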

I’ve been away for some days – but this image haunts me! A straw bale in the deep ocean! (I’ve seen it before on this magnificent blog…)

It is the paradigmatic epitome of rocket scientists having lost all sense of the ground beneath their feet. They will not save us. Instead, in their frenzied attempts to save us by continuing to think in terms of simplified linear causality, they will only accelerate ruin…

Crazy how I have to throw this Einstein quote at these hominids almost every day: “We cannot solve our problems with the same thinking we used when we created them.” Especially when it isn’t any serious thinking at all. Take destructive agriculture, not an insignificant factor in global warming and resource depletion. Life is all about re-cycling of stuff (the paradoxical enigma of metabolism, individual and ecosystemic). Continuing the de-cycling of life stuff, like throwing straw bales into the deep ocean when they could instead help re-grow soil and soil organic carbon, is, sorry, not a solution but ridiculous nonsense bordering on the satirical.

I agree that saving humanity by throwing bales of agricultural waste into the ocean goes against the ‘permaculture’ philosophy Thomas Fischbacher advocated in week315:

Flow analysis can be an extremely powerful tool for diagnosis, but its utility goes far beyond this. When we design systems, paying attention to how we design the flow networks of energy, water, materials, nutrients, etc., often makes a world of difference.

Nature is a powerful teacher here: in a forest, there is no “waste”, as one system’s output is another system’s input. What else is “waste” but an accumulation of unused output? So, “waste” is an indication of an output mismatch problem. Likewise, if a system’s input is not in the right form, we have to pre-process it, hence do work, hence use energy. Therefore, if a process or system continually requires excessive amounts of energy (as many of our present designs do), this may well be an indication of a design problem—and could be related to an input mismatch.

If we need to suck carbon out of the atmosphere it would be nice to make something useful out of it, like biochar, rather than ‘dispose’ of it and hope that doesn’t cause yet another problem.

It would be fun to see an argument between you and Benford.

(By the way, I keep wondering how true it is that “in a forest, there is no waste”. Maybe forests are fine—but in bogs, extra carbon accumulates as peat and eventually becomes coal and oil… until those pesky humans figured out how to reuse it! We closed the loop, but so suddenly that we’re shocking the system.)

Thanks for the refreshing Fischbacher quote. I wouldn’t call it permaculture philosophy – that sounds a bit like greeny hippie stuff, while it is straightforward natural systems science. In my comment above I forgot to say that the term “farm waste” indicates a bad understanding of farming…

But then, yes, there is some waste in nature. Oil and coal are exactly this. For the “Gaia” system is neither a perpetuum mobile, nor are there stable boundary conditions: Milankovitch cycles, plate tectonics, and above all, the sun slowly getting hotter. So, according to “Gaia philosophy”, coal and oil can be seen as waste products of climatic stabilization. (I don’t care about the “post hoc ergo propter hoc” fallacy here. Life is magic, so some magical thinking can’t hurt in the epistemology of life, as long as you stay aware of this.)

I still regard “biochar” as the future of agriculture (assuming hominids are interested in any future) and the only viable technique to repair the carbon cycle. (There might be a “proof” of this proposition by thermodynamics.) Charcoal seems to be slowly but surely gaining steam with progressive farmers. I plan to review the current science soon and perhaps add to the Azimuth wiki writeups. (Currently I’m finishing my carbon-negative wood pellet oven prototype, plus my crazy math paper.)

They gave up too early. Solar PV has gotten much cheaper in the meantime and continues getting cheaper (following an old law of electronics whose name I forget). Plus, cheap and sufficiently powerful batteries are now becoming available.

Serious economists should factor in the huge external costs of fossil power generation. IMHO $0.06/kWh is delusional and amounts to saying: Killing ourselves is the most economic thing to do.

On carbon sequestration, my old mantra: biochar and non-destructive agriculture can do the job. Alas, it’s not glitzy enough for our rocket scientists. Here’s a recent inspirational book: Kristin Ohlson, The Soil Will Save Us. (Actually you can combine rocket science and charcoal production: combine a micro gas turbine with wood gas production from pellets to construct the hybrid car of the 21st century. Someone should tell the Google car engineers…)

Serious economists should factor in the huge external costs of fossil power generation.

True! One big limitation of the Google initiative is that it was ignoring this, focusing on ways that companies could solve the carbon emissions problem while making more money than they do now, without any government intervention like a tax on carbon or cap-and-trade system. This makes some sense, since they are a business, not a government. But they ran into a wall.

Solar PV got much cheaper meanwhile and continues getting cheaper…

That’s true. The engineers who wrote this paper—which is really better than my summary—have presumably not stopped paying attention to reality.

So, presumably they still think solar is not cheap enough and “dispatchable” enough, and will not foreseeably become so, to solve the problem in time. Maybe they’re right, maybe not. It would be great to engage them in a dialogue about this. Maybe I should try!

I proposed a “Woodgas hybrid upgrade pack for electric vehicles” in 2011 here: http://www.azimuthproject.org/azimuth/show/Experiments+in+biochar . If someone gives me a million and 2 years and a lab, I will try to build one. Oh yeah, “wood-burning cars” sounds so retro, WWII tech. But with pellets and electricity it might be a fun playground for the modern rocket scientist.

I don’t quite believe their rationale. 350 isn’t a hard threshold: every incremental step to reduce carbon emissions reduces the rate at which the globe warms, and reduces the final level. And they’re smart enough to know that.

In 2008, solar and wind were extremely expensive. By 2011, Google must have seen the cost curve and how solar and wind would hit parity with coal within a couple years, as they now have in many areas. They aren’t dispatchable in the usual sense, but you can run at least 30-40% of your grid off those without issue with current technology. In particular, solar handles the peak load quite well in the mid latitudes (daytime A/C and industrial activity), which is largely when you need dispatchable power.

This must have dramatically reduced the potential market for Google’s moonshot. My bet, with no inside information to back it up, is that Google’s leaders calculated the expected value in 2008, decided it was positive, and went for it. Then in 2011 they re-evaluated, found that the probability of success hadn’t increased as much as the payoff had declined, and pulled the plug.

Well, unless I misunderstand your incredulity, the trouble is that the drop from the present 400 ppm to 350 ppm won’t happen naturally for at least hundreds if not thousands of years, even if we could magically zero out emissions now. CO2 gets scrubbed from the atmosphere by natural systems, but the process is very slow on human time scales, even if quick on geological time scales.
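For intuition only, here is a toy single-timescale drawdown model in Python. Real carbon-cycle models mix several timescales, from decades out to hundreds of millennia, so this understates the problem; the 280 ppm baseline and 300-year e-folding time are illustrative assumptions of mine, not figures from the discussion:

```python
import math

# Toy model of natural CO2 drawdown after emissions stop: the excess
# over a fixed baseline decays exponentially with a single timescale.
# The 280 ppm baseline and 300-year e-folding time are illustrative
# assumptions; real carbon-cycle models have much longer slow tails.

BASELINE_PPM = 280.0
TAU_YEARS    = 300.0

def years_to_reach(start_ppm, target_ppm):
    """Years for CO2 to relax from start to target with zero emissions."""
    return TAU_YEARS * math.log((start_ppm - BASELINE_PPM) /
                                (target_ppm - BASELINE_PPM))

print(f"400 -> 350 ppm takes about {years_to_reach(400, 350):.0f} years")
```

Even with this single, optimistically short timescale, getting from 400 ppm back to 350 ppm takes over a century and a half; the slow tail in real models stretches this to many centuries, which is the point above.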

I think we should try to set a useful target for CO2 concentrations — so, a target that’s above 400 ppm, not too low (since that’ll be unachievable in the short term) and not too high (since that’ll be too dangerous).

It’s kind of strange to see a company like Google focus only on dispatchable generation. I guess it’s mostly for industry, maybe even something they were looking to use themselves, given how power-hungry their infrastructure is. Solar is really inexpensive too, but we need to either come up with really inexpensive storage, which I don’t think is likely, or sync up our usage with solar output, which is a little more likely, or most likely some of both (mostly demand-side management, DSM).

At the household level, I think it’s possible if there aren’t any large loads at night and the occupants can be organized and flexible enough to move most of their usage (heating and cooling, appliance use, etc…) into the day time. I imagine kits that allow for household level DSM will be available sooner or later. It’s something that an individual could stitch together using a computer, powerline devices (x10, z-wave, etc…), and a grid-tie inverter (SMA comes to mind, but there are likely more) that provides real time data so the computer can turn devices on/off to match load to production.
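The household scheduling idea described above can be illustrated with a toy greedy scheduler. Every device name, energy figure, and forecast value below is made up for illustration; a real DSM controller would also handle price signals, battery state, comfort constraints, and real-time metering:

```python
# Toy greedy load-shifting scheduler for household demand-side
# management (DSM).  All names and numbers are hypothetical.

solar_forecast = {8: 0.5, 10: 2.0, 12: 3.0, 14: 2.5, 16: 1.0}  # kWh per slot

flexible_loads = [            # (device, energy needed in kWh), hypothetical
    ("water heater", 3.0),
    ("dishwasher",   1.5),
    ("EV top-up",    2.0),
]

def schedule(loads, forecast):
    """Assign each load, biggest first, to the sunniest slot remaining."""
    capacity = dict(forecast)
    plan = {}
    for name, kwh in sorted(loads, key=lambda load: -load[1]):
        slot = max(capacity, key=capacity.get)   # sunniest slot left
        plan[name] = slot
        capacity[slot] -= kwh
    return plan

print(schedule(flexible_loads, solar_forecast))
# → {'water heater': 12, 'EV top-up': 14, 'dishwasher': 10}
```

The greedy choice (biggest load into the sunniest remaining hour) is the simplest possible policy; the kits imagined above would sit between such a planner and the powerline devices actually switching the loads.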

One of the largest night-time loads, EVs, would also help clean up the transportation sector, but they would need a lot of daytime storage, which would make them more expensive. That also might be manageable, but it would require more of a public/private/utility partnership. I could see an EV owner buying part of a solar farm that’s built off-site by a utility, or on-site by a business, and in exchange they would be able to charge their car at work. If they leave, another employee can choose to assume the lease/loan, or opt out. I think that’s how SolarCity works, via liens on homes.

Something I’m wondering about is if homeowners would start going off-grid if the cost of off-grid kits drops below the price of electricity from the grid. That wouldn’t bode well for utilities trying to clean up the grid. The more consumers are available for DSM on the utility scale, the better they can manage non-dispatchable renewable generation.

Supercritical CO2 (scCO2) should be used for fracking/sequestration. If we took the CO2 from various plants, transported it, and sold it to oil companies, they could get more oil out (we still need it) and sequester the CO2 in one swoop. This could prove particularly effective since much of the scCO2 would combine with mineral deposits to create carbonate (the general theory of sequestration). The problem is that there are not enough incentives for businesses to sell CO2 as a byproduct of energy production, and fewer for oil companies to use it even if it is more abundant, less damaging to machinery, and probably cheaper in an unfettered market. If we created those incentives with public tax breaks, the money coming back into the economy would be more than enough to cover the tax breaks. I read an article recently about this… but I can’t remember where.

The sobering realization from much reading on these topics is not only that going negative on emissions is incredibly expensive; it is also unrealistic to expect it to start until we are at zero emissions already. That’s because the process of taking CO2 from the air is difficult and energy-intensive on its own, and making it harder by continuing to pump CO2 into the air is crazy. The upshot is that to get to the place where carbon dioxide removal is possible, the world first needs to pay the cost of transitioning to carbon-free energy anyway, and then pay the large cost of setting up carbon dioxide removal.

As things are going, if these big costs are embraced, it’ll only be in the setting of an imminent emergency. That means both will need to be done quickly. Transition to carbon-free energy is sensible, and can be relatively painless if done slowly enough. But if done quickly, the economic disruption will be huge.

Net: we’re painting ourselves into a corner where the economic costs are shockingly large, whether we mitigate or not, and whether we do carbon dioxide removal or not. Any such analysis quickly shows it’s better to stop building carbon-based energy infrastructure now. The only rationale for not doing so is simply to defy the science of radiative forcing through anthropogenic greenhouse gas emissions.

I am thinking of a system for developing countries.
Usually a customer goes to a gas station to buy fuel, but in countries with limited resources it could be possible to go to an electric station to buy, and sell, electricity from renewable sources: all it takes is a battery pack, electric outlets and inlets, some electric meters, and fast recharging.
It could work for smartphones and computers, and it could be used to sell wind turbine electricity, small hydro electricity, or pedal-powered electricity, with a little battery pack to transport the energy.
If there is no electric distribution network, then a lead battery pack can serve as an alternative, providing LED lighting; this would encourage the development of mini generation plants.
I am also thinking about Google’s choice.
Google could buy the best little working power plant (the best choice each quarter) to power its server farms, and verify the cost of that choice: the purchases would finance the best producers, indirectly financing the best industry, and Google could advertise the use of these systems. So Google would not spend on research; Google would buy research.
I am thinking that studying sedimentation on the abyssal plain could reveal which agricultural waste resists decomposition, and that such waste could be obtained by cultivating algae; for example, one could study sedimentation under the Sargasso Sea and try to cultivate the right algae on a large scale.

I am thinking that carbon dioxide could be stored efficiently in the ground and in structures.
Each plant has a root system that transports carbon, in another form, into the ground; if there were an efficient method of planting trees (or bushes) so that cut trees and newly planted trees do not interfere, then the enrichment of soil carbon could be optimized.
With an optimal choice of cultivated plants (eucalyptus, juniper, camelthorn) for construction and for root systems, there would be double storage: for example, the lifetime of a wooden house can be half a century, and the same is true for railroad ties and household furniture, and technology can extend the lifetime of wood without chemicals.
Some insects, such as termites and ants, store carbon in buried waste, so there may be forms of organic farming that increase carbon storage without destroying the insects.

John, you might be disappointed, but I can’t imagine how you could be surprised.

The critical passage from the article is here:

“Let’s face it, businesses won’t make sacrifices and pay more for clean energy based on altruism alone. Instead, we need solutions that appeal to their profit motives.”

No, please. Can we REALLY face it?

The emergency is a socio-cultural crisis of values, not a technological challenge. Faith in a miraculous technological breakthrough is misplaced; any “breakthrough” must serve or at least not undermine the neo-liberal agenda of profit & growth at any cost, or it will be torpedoed. Google never was going to be part of any real solution; they are part of the problem.

There already exist two effective “disruptive technologies” for undertaking the required change. Education is the long game; the short game is politics. On the former, see David Orr; on the latter, see the recent work of Naomi Klein, David Korten, and others.

Unfortunately, those who populate America’s halls of power – and ivory towers, for that matter – are unlikely to be amused by arguments linking progress on climate and other environmental issues with truly “disruptive” measures that address social and economic inequalities, either nationally or globally. And the broad masses can’t spell solidarity, let alone feel it. Occupy had its head and heart in the right places – and Americans showed us exactly the breadth of appeal and shelf-life of that idea.

The Transition Town people are on the right path, but no one would dare run on that platform. Paul Kingsnorth (see “Dark Ecology”) is right: we need to see the larger cultural narrative, and recognize it as a tragic one; to cross our fingers for a happy technological ending is to walk blind toward the reckoning.

Implicit in the Naomi Klein narrative is the idea that “freedom isn’t free”, in the sense of donuts, not Liberty Bell. In other words, there is a “price of anarchy” (review article) for permitting people to Do Their Own Thing, in this case, pollute the Commons. I think that really digs at the foundations, and the problem may not be one which Klein’s politics can solve because of it, since it is contrary to shared values. I’m not espousing a technocracy, but, just like management of wars, there’s only so much a representative democracy can do in crisis times.

Related aside: the Bank of England recently prodded UK insurers with a letter asking questions pertaining to climate change, specifically about their solvency and whether or not, because of climate change, they ought to demand increased reserves for pay-outs. (See http://goo.gl/XP9HzQ, possible paywall.) This has, apparently, been going on for a while, but some insurers’ reactions to the queries and discussion are instructive. Some have said, essentially, “Look, this is not our responsibility to deal with things of this scale. It is yours.”

John, you might be disappointed, but I can’t imagine how you could be surprised.

Indeed, at some point in my article I wrote:

This is not surprising to me.

That was only about their statement that a 55% carbon emissions cut by 2050 would not bring atmospheric CO2 back below 350 ppm during this century. But indeed I’m not surprised that Google gave up — given what now seem to be the constraints they imposed on themselves: figuring out how to dramatically reduce carbon emissions in a way that’s profitable without anyone ever imposing a price on carbon.

Right now the engineers Ross Koningstein and David Fork seem to be saying that we’ll avoid a “ruinous” situation only with the help of wonderful inventions that they have no idea how to invent: a cheap source of distributed “dispatchable” electric power, and a cheap way to suck carbon dioxide from the air. Their recommendation is mainly that people spend more time trying to invent these things. And if nobody does? Apparently they think we’re all screwed.

Given this analysis, which is actually quite interesting as far as it goes, I’m surprised that they don’t go on to mention even the most widely discussed possible ways out of this impasse: a carbon tax, or a cap-and-trade system. Even corporate types, of a sufficiently forward-looking sort, should be thinking about how to profit from such a setup!

On the other hand, I’m not surprised that ideas like Transition Towns are outside their view of what’s possible.

Sorry John, really should have finished by mentioning what a pleasure it is to drop in here. I really marvel at the time and effort you put in – what’s your secret? Homegrown kale? Would like to hear your thoughts on how the blogosphere is evolving as public sphere for science…

I did notice your “surprised,” but as you indicate, I wasn’t talking about the same kind of “surprised” ;-)

And, you wrote:

“Their recommendation is mainly that people spend more time trying to invent these things. And if nobody does? Apparently they think we’re all screwed.”

I am fine with conversations about and even efforts toward inventing things, but we may soon find it equally – or more? – practical to start some philosophically-grounded discussions about what being “screwed” actually means. Undoubtedly, as global capitalism eats up earth’s life systems, as it eats itself, the process will deprive many of much. What will it grant in return? Here Kingsnorth is wise, and in plain language (Morton I am simply unable to understand).

I like good news. For example, pricing carbon already happens in North America: BC has a very successful CO2 tax, introduced in 2008 (http://goo.gl/Cc6ybf) and I expect our Canadian federal election this fall will see 2 of 3 national parties with some version of the same in their platforms. Quebec recently extended indefinitely its moratorium on fracking, and withdrew the exploratory permit for Trans Canada’s Cacouna tanker terminal in the St Lawrence, effectively delaying any progress on TC’s Energy East pipeline (especially in Quebec, these developments are driven by a distinct set of social-political-cultural values. Are those values translatable? At least a few Vermonters seem hopeful… http://goo.gl/NF3dhm).

Thanks, I’m glad you like this blog. I don’t think of it as requiring “effort” – if I did, I’d probably be exhausted! I do it because I enjoy it.

This year California’s carbon cap-and-trade system has expanded to include gasoline and diesel: distributors now must buy and submit permits covering greenhouse gas pollution tied to the fuels they sell. I’m happy about that small step. Petroleum companies claimed the sky would fall, but so far it hasn’t.

But as you say, we need a tsunami of good news.

I think the blogosphere has a lot of potential as a place to do science, but I don’t see it being exploited nearly enough. I’m trying to do my research online, in public view. It’s a great way to get insights from intelligent people who pass by… and my talk on climate networks was completely reliant on work done by Azimuth Project members on the Azimuth Forum, and explained here on this blog. But I don’t see many people doing this! There still seems to be this idea that research should be done secretly and revealed only when ‘published’ — often in a journal that’s not free to the public, belying the term ‘publication’. I’m tired of that old approach.

John, to what extent do you think such decisions take into account the obscurity inherent in money? That is, the concept of money seems to have an implicit relation to the present and recent past.

I mean, if the economic models of power systems were based on units of work, I surmise things would appear somewhat different. In other words, if one calculated how much work we gain by investing one unit of work into production (including the work needed to manufacture all the equipment, transportation costs, the balancing of all side effects such as CO2 releases, and so on), this does not necessarily match the cost estimations made in units of money.

The reason for this line of reasoning is that the rapid development of society during the 20th century seems to have depended on the very small ratio between the work needed to produce fossil fuels and the work generated by burning them. For, at the end of the day, quite a few human activities are about work in the sense of physics. Say, the virtual services enabled by computers, or the intellectual work of artists and scientists: all involve quite a bit of (mechanical) work to manufacture equipment and instruments, to run all sorts of machines, to transport goods and people, and so on. In this sense it does not appear that misleading to say money is, or should mean, the capability to buy work in the sense of mechanics and physics.

If so, then a small ratio between invested work and obtained work results in an exponential process; the more we invest in producing work, the more work we gain to do other things. Conversely, it is also easy to see that rather small changes in this ratio have quite a significant effect in the long run. I wonder to what extent the commonly employed business economic models, rated in units of money, capture such an effect?
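The sensitivity described above can be illustrated with a toy compounding model. The return ratios, reinvestment fraction, and cycle count below are all illustrative numbers of mine, not anything from the comment:

```python
# Toy compounding model of the work-in/work-out ratio.  Start with one
# unit of "work stock"; each cycle, reinvest a fixed fraction of it at
# an energy-return ratio r.  All numbers are illustrative.

def surplus_after(r, cycles, reinvest=0.1, stock=1.0):
    """Work stock remaining after `cycles` rounds of reinvesting a
    fraction `reinvest` of the stock at return ratio r."""
    for _ in range(cycles):
        stock = stock * (1 - reinvest) + stock * reinvest * r
    return stock

for r in (30, 20, 10, 5):      # a range of plausible-looking ratios
    print(f"r = {r:2d}: stock after 20 cycles = {surplus_after(r, 20):,.0f}")
```

Since the per-cycle growth factor is 1 + reinvest·(r − 1), a modest drop in r compounds into orders of magnitude less surplus over many cycles, which is exactly the long-run sensitivity the question raises.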

What do you think?

Completely a side issue: the so-called Kron diakoptics seems to be related to your work with networks. Have you worked out what Kron’s work is about in terms of categories?

They couldn’t find a way to accomplish their goal: producing a gigawatt of renewable power more cheaply than a coal-fired plant — and in years, not decades.

They could not find it, because there is no way to do it within the mental frame they have restricted themselves to.

Wind and geothermal suffer from inherently low power flux densities; therefore their environmental impact is inevitably detrimental. Solar PV is an excellent technology in the inner solar system, except on planetary surfaces, including Earth’s. Here the case is similar to that of other renewable sources. On top of that, it is necessarily intermittent due to the spherical shape of planets.

In open space solar PV is okay, because the sun shines 24/7 and there is no weather or anything else to cast shadows. Once we have this huge thermonuclear reactor nearby anyway, with an output power flux density of some 10 MW per square meter at its surface and a diameter of more than 1.3 million kilometers, and given that we don’t even know how to put it into standby, we may as well use it to generate electricity for the next few billion years. Anywhere but on the surface of our own planet.

On Earth we shall never be able to generate electricity directly from sunshine cheaply, simply because there is bad weather and there are nights, and transient storage of electricity is both inefficient and expensive.

However, it may be profitable to use it in other ways, when the end product can be stored cheaply: for example, manufacturing hydrocarbon fuels using solar-generated Hydrogen (from water) and coal (or, several centuries from now, limestone, if coal reserves get exhausted). Another opportunity is desalinating sea water.

We may well be a technological breakthrough away from making these processes economically viable, but that’s a different story. With molecularly precise nanomachines, manufactured by self replicating programmable molecular robots, it is certainly possible. A collateral advantage is that carbon being so much superior to any other structural material, ordinary manufacturing would drain carbon dioxide from the atmosphere directly (the only way to bypass transportation costs). That’s large scale self-financing sequestration. So much so, that we’ll have to worry about how to replenish it afterwards to make plant life possible, therefore it only makes sense to preload the atmosphere with it while we have the chance.

Otherwise the only option we have in the long run is nuclear fission. I mean it. And no, I am not forgetting about fusion power. We do have an abundant supply of Deuterium, and Tritium can be manufactured at will, but it is inherently difficult to maintain the high temperature needed for Deuterium-Tritium fusion in a strictly controlled environment. Even if that could be accomplished some time in the future, the fact remains that the neutron production of the Deuterium-Tritium reaction is extremely high, about thirty times that of nuclear fission for the same energy output. Aneutronic fusion, on the other hand, is a pipe dream, because it requires even higher temperatures (about ten times higher still).

Neutrons are very difficult to contain and they do lots of structural damage to any solid they come into contact with. Also, it is next to impossible to avoid transmutation of elements into radionuclides, therefore we are left to deal with a large amount of radioactive waste.

Fuel for nuclear fission is plentiful. One ton of ordinary granite, the default stuff continents are made of, contains, in the form of trace amounts of Thorium and Uranium, as much retrievable energy as fifty tons of coal plus the 130 tons of atmospheric Oxygen needed to burn it. This means we shall never run out of this resource, not until the Sun turns into a red giant and consumes Earth, and our doubleplus transhuman progeny will have much else to worry about well before that happens. We can only hope they can manage somehow. Anyway, in this respect being doubleplus transhuman may be an advantage.

In other words, it is a renewable energy source in the same sense the word is used for other sources. However, for the next few thousand years we have ores a thousand times richer than average crust, so we have plenty of time to adapt to scarcity.
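
The "one ton of granite is worth fifty tons of coal" claim above is easy to sanity-check. Here is a minimal sketch; the ~13 ppm combined Th+U content of granite, the complete-fission yield, and the coal energy density are my assumed figures, not numbers from the text:

```python
# Rough check of the "1 t granite ~ 50 t coal" claim.
# Assumptions (mine, not from the text): ~13 ppm combined Th+U in granite,
# complete fission at ~8.2e13 J per kg of heavy metal, coal at ~29.3 GJ/t.

GRANITE_T = 1.0                 # tonnes of granite considered
PPM_HEAVY = 13e-6               # assumed mass fraction of Th+U
FISSION_J_PER_KG = 8.2e13       # J released per kg fully fissioned
COAL_J_PER_T = 29.3e9           # J per tonne of coal (standard coal equivalent)

heavy_kg = GRANITE_T * 1000 * PPM_HEAVY          # kg of Th+U per tonne of granite
fission_energy = heavy_kg * FISSION_J_PER_KG     # J, assuming complete burnup
coal_equiv_t = fission_energy / COAL_J_PER_T     # tonnes of coal with same energy

print(f"{heavy_kg * 1000:.0f} g of Th+U per tonne of granite")
print(f"~{fission_energy:.2e} J, i.e. about {coal_equiv_t:.0f} t of coal")
```

With these assumptions it comes out in the dozens of tonnes of coal per tonne of granite, so the quoted figure is at least the right order of magnitude.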

But, but. Nuclear is awfully dirty, dangerous and dire, is it not?

No. It is not. I mean there is a difference between nuclear and nuclear.

Our current nuclear power plants are dangerous indeed; they are nothing but Cold War relics, Plutonium factories that also make some energy as a byproduct.

Otherwise no sane engineer would operate a process at an unstable equilibrium point, drive steam through a hot and highly poisonous core at high pressure, put tons of solid fuel into an environment dominated by neutrons, and run a system that needs active cooling for an extended period even after shutdown. Not to mention Chernobyl-type graphite-moderated reactors, when everyone with a bit of training in elementary chemistry knows what happens when steam meets hot carbon. And all this for some 0.5% fuel efficiency. No wonder there is a huge amount of radioactive waste left behind, with lots of long-half-life isotopes in it; it is perfect for manufacturing nuclear weapons, but useless otherwise, and needs guarded deposits for hundreds of thousands of years.

Let’s be honest: only crackpots would design and operate such a horrible system, if not pressed beyond the breaking point by military needs.

Fortunately there is a much better alternative. It was originally developed in the fifties for a nuclear-powered aircraft that could stay aloft for months, much like nuclear-powered submarines stay under water. It was an inherently crazy idea, soon made obsolete by the advent of intercontinental ballistic missiles.

The molten salt reactor (MSR) design is innovative indeed. First of all, it is inherently stable: if a volume in the core gets overheated, it expands, which stops the chain reaction. It holds only kilograms of fissile material in its core at any one time, not tons, so even in a worst-case, low-probability, high-impact scenario, when the core gets a direct hit from a large meteorite (or is blown up by terrorists), nuclear contamination of the environment is negligible compared to current designs. The molten salt has a very high boiling point, so no gases are produced under any circumstances. The fuel itself is dissolved, not solid, so neither neutron damage to its crystalline structure nor a meltdown can occur. There is no water in the vicinity of the core, nor any chemically reactive substance, so an explosion is impossible.

It is operated at atmospheric pressure, so the volume of the reactor building is thousands of times smaller and proportionally less expensive, but at high temperature, so it can drive a gas turbine directly with an inert working fluid like nitrogen, with no steam whatsoever and at high thermal efficiency. Its fuel efficiency is close to 100%, which means the volume of radioactive waste is a hundred times smaller for the same energy output; what is more, few long-half-life isotopes are left in it, so it does not have to be stored in a safe place for an untold number of millennia, only for centuries, after which the radioactivity of the waste decreases to background levels. It can also use Thorium as a fuel, which is much more abundant than Uranium, especially the 235U isotope. It does not need active cooling on shutdown, which excludes a Fukushima-type disaster. Last but not least, it is next to impossible to weaponize its intermediate or waste products, so it also gains points on the anti-proliferation front.

As a last resort, there is a large, well insulated underground chamber below the core, connected to it by a pipe with a plug of the same salt in it, kept solid by a small cooling fan. If against all odds the core gets overheated or the local cooling stops working, the plug melts and the fuel drains into the safe chamber.

I can see no sane reason we can't do at least as well as the engineers of forty or fifty years ago, and in a shorter timeframe. Let's consider the MSR only as a proof-of-concept design and figure out something even better (no actual technological breakthrough is needed), then put it into production on a large scale. Electricity produced by such a system would almost certainly be cheaper than any alternative, including coal, and much kinder to the environment, even if only actual pollutants are counted, not carbon dioxide.

I also don't see how we get to where we need to be without pushing nuclear power, and a new kind of nuclear power. Indeed, I believe Dr Hansen has remarked, possibly in his Storms book, that the decision of the Clinton-Gore administration to end the fast-breeder work, as the environmental lobby urged it to do, leaves that lobby and Clinton-Gore sharing some culpability with the pro-fossil-fuel lobbies for failing to have us ready for this challenge. I don't know how the size of "culpability" is measured, but surely there's some ownership there. This is why I continue to argue that many environmentalists and progressives don't get it: they don't understand how urgent a problem this is.

It is urgent indeed, if not for the environment, then because if we do not do it fast, China gets the chance. It is a rather bleak prospect for the rest of the world to be reduced to vassal states under the fat thumb of a communist dictatorship.

To hell with these pathetic wars for oil, a commodity that could be manufactured in abundance anywhere from common materials with cheap energy.

1. Many people have pointed out that the price of photovoltaic solar power has dropped substantially since 2011, and it seems likely to keep dropping. Did Google predict this when they decided to cancel their RE<C project? Would the news since 2011 change their analysis now? Some say solar power is now on the brink of undercutting the price of coal power.

2. Did Google consider nuclear power? If so, why did they think it was unable to meet the RE<C challenge? Was it due to technical issues, the regulatory situation, or popular sentiment against nuclear power? Nothing in the article mentions nuclear power.

Are there any other really important questions? Google put some work into this project before giving up, so we should try to gain insight into whatever they learned, or believed.

There was some Google Talk on alternative nuclear, with caveats (“the views or opinions expressed by the guest speakers are solely their own and do not necessarily represent the views or opinions of Google Inc.”).

…in the form of trace amounts of Thorium and Uranium. It means we shall never run out of this resource, not until the Sun turns into a red giant and consumes Earth, but our doubleplus transhuman progeny will have much else to worry about well before that happens

nuclear is not renewable

In a 2007 post at our blog I collected the rough estimates given by the rather pro-nuclear World Nuclear Association. Uranium-235 was expected to "hold" for roughly 50 more years (that is, this is when the peak comes). The association gave a factor of 60 for Uranium-238 and a factor of 40 for Thorium, which yields about 5000 years, basically nothing with regard to the current life expectancy of the Earth.

That is, the average concentration of Thorium, at 10 ppm, is about 5 times higher than that of Uranium.

Now, the mass of the upper mile of crust is about 2.2 billion gigatons, which contains roughly 22 thousand gigatons of Thorium. You are not trying to tell me that it can be used up in 5,000 years, are you?

The retrievable energy content of that much Thorium is more than 10^30 J, while current annual consumption is less than 10^21 J. That's enough for at least a billion years, by which time the Sun will get so bright anyway that Earth becomes uninhabitable without some serious geoengineering.
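
Those orders of magnitude can be checked in a few lines. A rough sketch; the crust density, surface area, fission yield and annual consumption figures are my own assumed values:

```python
# Sanity check of the crustal-thorium estimate above.
# Assumptions (mine): crust density ~2700 kg/m^3, Earth's surface ~5.1e14 m^2,
# 10 ppm Th, ~8.2e13 J per kg on complete fission, ~6e20 J/yr consumption.

MILE_M = 1609.0
AREA_M2 = 5.1e14

crust_kg = AREA_M2 * MILE_M * 2700    # mass of the upper mile of crust
th_kg = crust_kg * 10e-6              # thorium at 10 ppm by mass
energy_J = th_kg * 8.2e13             # retrievable fission energy
years = energy_J / 6e20               # at roughly current consumption

print(f"crust: {crust_kg:.1e} kg, thorium: {th_kg:.1e} kg")
print(f"energy: {energy_J:.1e} J -> {years:.1e} years of supply")
```

This reproduces the numbers in the comment: about 2.2 billion gigatons of crust, 22 thousand gigatons of Thorium, more than 10^30 J, and a supply measured in billions of years.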

However, the crust is much thicker than a mile, and on this timescale erosion and plate tectonics do their job for sure.

Therefore nuclear fission is a renewable resource.

You are talking about ore reserves, which is a completely different thing and has nothing to do with reality in the long run.

Extracting low-concentration stuff has its cost in free energy; however, the thermodynamic limit is only proportional to the logarithm of concentration, so that's not an obstacle with advanced technology.

You are talking about ore reserves, which is a completely different thing and has nothing to do with reality in the long run.

The main reason why thorium is mostly extracted from Monazite ore is its high occurrence in that mineral. As you can see in the list of thorium occurrences in minerals below, the problem is that if you turn to other minerals you have to "squeeze out" way more material, which may become incredibly expensive and energy-consuming.
That is, leaving environmental concerns aside for a moment: if your extraction process consumes more energy than you are eventually able to produce with the extracted material, then extraction makes no sense any more.

It is not so. And it is not a matter of political opinion: we are talking about physics.

It is quite easy to demonstrate that in the thermodynamic limit the energy requirement of extracting stuff is proportional to the logarithm of its concentration. Of course you can do much worse than that, but why would you? Especially if you are a distant offspring, five thousand years from now, with brains supported by AI and technical abilities exceeding ours by at least the same proportion as ours exceed those of the pyramid builders.

The logarithmic rule is a powerful one indeed. You only need 4.4 times more energy to retrieve Thorium from average crustal rock than from Monazite. You may throw much more of it to the wind, but in that case you are doing something horribly wrong and the competition kicks you out of the energy market in short order.
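
Where a factor like 4.4 comes from: with the thermodynamic minimum proportional to ln(1/x) for mole fraction x, the penalty for going from a rich ore to average crust is just a ratio of logarithms. A sketch, assuming ~7% Thorium in Monazite and 10 ppm in average crust (the Monazite figure is my assumption, and I treat the mass fractions as mole fractions for simplicity):

```python
# Ratio of thermodynamic-minimum separation energies, per the logarithmic rule:
# minimum work per atom ~ kT * ln(1/x) at concentration x.
import math

x_crust = 10e-6       # assumed Th concentration in average crust
x_monazite = 0.07     # assumed Th concentration in Monazite (my figure)

ratio = math.log(1 / x_crust) / math.log(1 / x_monazite)
print(f"energy penalty, average crust vs Monazite: {ratio:.1f}x")
```

With these inputs the ratio lands close to the 4.4 quoted in the comment.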

if your extraction process consumes more energy than you are eventually able to produce

Come on, do the math.

Nuclear energy is a bit dense, you know, compared to chemical energy. The explosion of one gram of TNT releases about five kJ, while the complete fission of one gram of Plutonium releases tens of millions of kJ.

This sounds like an algorithmic argument (if it exists), so where is the proof that it also holds for materials extraction?

Kinda. To extract an atom from a mixture, you have to identify it. The number of bits needed for identification is proportional to the logarithm of its concentration. Once the atom is in hand, this information is destroyed. The thermal energy released is proportional to the number of bits destroyed, and equals the free energy needed to collect said information in the first place.

As the thermal energy released on erasing information is also proportional to temperature, you'd better keep things cool.

Nuclear energies are many orders of magnitude higher than chemical ones; therefore, once the atom is identified, the energy needed to break the bonds holding it in place is negligible compared to the energy recovered by the fission of its nucleus.
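
The argument above can be put into numbers. A sketch with my own assumed figures (Thorium at 10 ppm, ~200 MeV released per fissioned nucleus, room temperature):

```python
# Landauer-style floor sketched in the comment above: identifying one atom at
# mole fraction x takes ~log2(1/x) bits; erasing a bit costs k_B*T*ln(2),
# so the minimum free energy per extracted atom is ~ k_B*T*ln(1/x).
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0               # K
x = 10e-6               # assumed thorium concentration, ~10 ppm

e_separate = K_B * T * math.log(1 / x)   # J/atom, thermodynamic floor
e_fission = 200e6 * 1.602e-19            # ~200 MeV per fission, in J

print(f"separation floor: {e_separate:.2e} J/atom")
print(f"fission yield:    {e_fission:.2e} J/atom")
print(f"yield/floor ratio: {e_fission / e_separate:.1e}")
```

The fission yield exceeds the thermodynamic separation floor by roughly eight to nine orders of magnitude, which is the point being made.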

You may also want to look up chemical potential in a textbook. Plenty of experimentation there.

Interesting observations here. Some questions. You say it would take a century to get CO2 back to normal levels. Could you define “normal,” when it comes to CO2? Put another way, what period of geological time on earth had an acceptably “normal” level of CO2? Were there any humans around?

You also accept the argument that more CO2 is bad. Is it? Plants like it. And they want more. What is the minimum level of CO2 required for plants to survive? People have all along assumed that global warming, climate change, whatever they happen to be calling it at the moment, is bad. Because? What exactly is going on re climate now that is outside the boundaries of what has happened before on earth? What is the crisis? The singularity, the completely unique factor? Why now are we heading for heat death? And don’t say things are happening faster. That’s very hard to prove.

You also appear to accept the claim that human produced CO2 is THE driving force behind climate change. Can you tell me how you would separate out human produced (exhalations, power plants, cars and factories, etc.) from other sources of CO2 production? If you could, isn’t it likely that the human factor is probably less than 5% of the total CO2 produced on the planet? If we got rid of it all, what exactly would change in those ppm levels?

Also, if other factors are indeed involved in heating the planet up, and the planet is in a heating phase, that’s not arguable, present hiatus notwithstanding, why not go after them? Seems it’s war on CO2. Why? How about we go after the other greenhouse gases, none of which are as important to life as CO2. Tell you what I’d worry about. I’d worry if CO2 started to drop like a stone. I know, unlikely. But still, most people today, if they heard CO2 was going away, would weep tears of joy. They would be tears from corpses after a while. But you get my point. Scientific literacy isn’t all it’s cracked up to be.

Final question. In everything I’ve read, increased temperature precedes increased CO2 levels. Are you privy to data that say otherwise? Further, if this is so, then CO2 is in great part an effect, not a cause, of climate change. This has been a problem from day one, I suspect, separating out cause from effect in such a wildly dynamic process as climate on this planet.

A commentator mentioned Ms. Klein. I disagree with her, but she does have the gumption to take off the mask: free market capitalism has to go if the planet is to be saved. That’s what this is all about. It’s not science. It never was science. It’s politics. For a better analysis of how this may actually work out, read 1984. The citizens begged for a Dear Leader to save them.

I’m with everyone here who is in favor of nuclear (be it thorium or some other fuel). Seems to me we cannot make our way to the energy of the future, whatever that might be, without taking nuclear very seriously as a bridge fuel between here and there. Solar, wind, geothermal, nope. In addition to the financial flim-flam going on with them, they all suffer from geographic limitations, what modern society got away from when we figured out how to generate electricity anywhere, as opposed to hooking it up to a big drop on a fast-flowing river. But, cf. Ms. Klein, modern society is the problem, isn’t it? I’m not so down on modern society that I want to throw it away, or replace it with rule by the ‘enlightened,’ whoever they may be. In the past, it’s been murderers and plutocrats.

It would take centuries for atmospheric levels to return to normal, which means centuries of warming and instability.

I said something less sloppy:

Even if they’d been able to realize their best-case scenario — a 55% carbon emissions cut by 2050 — it would not bring atmospheric CO2 back below 350 ppm during this century.

George wrote:

Put another way, what period of geological time on earth had an acceptably “normal” level of CO2?

I wouldn’t talk about “normal” levels. It’s better to look at this graph:

You’ll see that during the glacial cycles we’ve been going through for the last 400,000 years (and in fact much longer), the CO2 concentration has varied between about 180 and 300 ppm. Then came the industrial revolution, and the CO2 shot up almost vertically to its present level of nearly 400 ppm, and continues to rise.

Were there any humans around?

Homo sapiens has been around for about 250,000 years. So, we’ve been through two glacial periods: the Wolstonian and then the Wisconsinian, the most recent one. Homo neanderthalensis appeared about 380,000 years ago.

That’s what this is all about. It’s not science. It never was science. It’s politics.

Actually this blog is about science, and technology. As I’ve often said, and repeated earlier in the comments on this article, I delete comments containing political rhetoric. Since this is your first appearance here I will let yours stand, but further attempts to start up a discussion of politics belong elsewhere and will be deleted.

Sorry about that. I hope what I ask is a legitimate scientific question. If not, please just ignore it and delete me.

I’m not trying to be coy here.

Your graph above goes back 400,000 years. If you push the starting point back beyond that, to, say, 50 mya, CO2 was much higher than it is today. If you go back even further (I don’t know why you would; or why you wouldn’t), to 600 mya, CO2 levels were astonishingly high by today’s standards.

Trend is a function of where you start your measurements. If there is a scientific reason why all CO2 measurements earlier than 400,000 years ago are irrelevant to current discussions, I’d be interested in knowing it because that longer term view greatly colors my thinking on the subject. If that view is absurd, scientifically pointless, I would be grateful to find out.

More directly put, if you stick your pin in the ground far enough back in time, the CO2 trend is down. If you start after we invented “carbon-based” machinery, the trend, obviously and undeniably, is up.

So my question is: what is the scientific basis for determining a starting point? I do not mean this to be a political question. I don’t think it is. If you do, though, just delete this. And truly, I apologize again. I am curious about this periodization issue. Context can be everything, in and out of science.

Also, if you have covered this before, just please point me there. I see your blog as primarily a math blog.

(1) It’s not as if the badness associated with elevated carbon dioxide comes from some projection of a trend. The link between increased greenhouse gases, including carbon dioxide, and increased planetary temperatures via radiative forcing is both a laboratory and an engineering result. The singular role of carbon dioxide in the history of Earth’s climate (“Earth’s biggest control knob” –Alley) is determined from the paleorecord you cite, as well as from general geological and paleontological evidence. The laboratory and engineering results rest on blackbody radiation. (There is a fascinating history-of-science read on this by Thomas Kuhn, BLACK-BODY THEORY AND THE QUANTUM DISCONTINUITY, 1894-1912. Be sure to get the edition with the 1984 Afterword included.)

(2) The starting point is based upon the beginning of the industrial revolution, where, from recorded history, we know fossil fuels were first beginning to be burned in significant quantities. Doing that is based upon isotopic assays of excess carbon dioxide (and methane) which show these to be both relatively depleted in Carbon-13, suggesting a plant source, and in Carbon-14, suggesting age. Indeed, both those are indicative of a fossil fuel origin for both greenhouse gases. The accuracies of these assays are very high, being able, for instance, to distinguish proportions of marsh origin methane in the streets of Boston compared to natural gas leaks, these proportions corroborated by other measures.

(3) The relatively flat carbon dioxide-in-atmosphere profile highlighted in the figure for CE 1000 (and before) corresponds to a relatively stable climate. Given carbon dioxide’s role in the past, and the causal links between high concentration, forcing, and climate effects, the inference is that high concentrations of carbon dioxide will have dramatic effects, especially because of the abrupt uptick in these concentrations. The climate system is large and dispersed, and has big lags in it, so its response is not instantaneous. Also, oceans have much greater heat capacity than atmosphere, so it is no scientific surprise they are taking up much of the excess heat. Alas, that also means that “once it gets going”, like an oil tanker, it’ll be difficult to slow down.

Your graph above goes back 400,000 years. If you push the starting point back beyond that, to, say, 50 mya, CO2 was much higher than it is today.

Yes, and it was a much hotter world. The world had tens of millions of years to cool down to the temperatures that humans evolved in. This gave time for new life forms to evolve, like grasses, which do well in a cooler climate. The problem is that we are now increasing CO2 levels very rapidly, faster than most species, and perhaps even ourselves, can migrate and adapt.

The problem with global warming is not that “hot = bad”. It’s that “rapid climate change is hard to adapt to”. If you look at what happened during that temperature spike labelled PETM on the graph, you’ll see a lot of extinctions. And the temperature change was much slower then than it is today!

We would eventually face rapid climate change when the next glacial cycle began. But it seems the CO2 we’ve emitted has already postponed that by thousands of years. A truly clever course of action would be to turn CO2 emissions up and down to keep the Earth’s temperature rather stable. We hold the thermostat in our hands. But we’ve gone and turned it up to full blast.

So my question is: what is the scientific basis for determining a starting point?

I don’t think science provides a ‘starting point’ for measuring the Earth’s temperature. However, if we want to keep the Earth’s temperature from changing in a rapid way that causes us trouble, we’d better do something.

I too am interested in the argument [that the energy cost of purifying dilute materials goes as -RT ln mole-fraction]. It would be fun to see real-world experimental data, too.

The late John McCarthy somewhere points out that for an Earth composed of 10^50 atoms of other stuff and one of uranium, that rule still says the extraction should pay energetically. Getting one U atom to be a critical assembly is left as an exercise for the reader.

But does the rule in fact say that? The “ln” part is -115, -RT at 298 K is -2478 J/mol, and so the predicted minimum separation energy is 285 kJ/mol. The reactors powering about half of this computer get 154 GJ/mol, so yes.
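
Redoing that arithmetic explicitly, with the same figures as in the comment (298 K, mole fraction 10^-50, and the quoted 154 GJ/mol reactor yield):

```python
# Check of the numbers above: minimum separation work -RT*ln(x) for one
# uranium atom among 10^50 others, versus the quoted reactor energy yield.
import math

R = 8.314          # gas constant, J/(mol K)
T = 298.0          # K
x = 1e-50          # mole fraction of uranium

w_min = -R * T * math.log(x)    # J/mol, thermodynamic minimum
e_reactor = 154e9               # J/mol, figure quoted in the comment

print(f"ln(1/x) = {math.log(1 / x):.0f}")
print(f"minimum separation work: {w_min / 1e3:.0f} kJ/mol")
print(f"energy margin: {e_reactor / w_min:.1e}x")
```

This confirms the comment's 285 kJ/mol against 154 GJ/mol: a margin of more than five orders of magnitude.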

There are practical considerations. Is the atom right at the planet’s centre? If so, is our planet-bisecting knife such that we can push it through, jack the planet-halves apart to get the atom, recover the work against gravity as we let the halves pull back together, and have the broken chemical bonds at the blade edge also pull back together and push the blade out, giving us back that energy too?

The price given there cannot undercut land-based mining, whose linearly projected end date has in recent years tended to move ten years farther into the future with each passing year. But the seawater extraction price per joule, for joules released in today’s power reactors, is far below the per-joule price of even the cheapest fossil fuel.

There are implications regarding concentration for other processes related to climate. The American Institute of Physics (“AIP”) dismissed proposals for air capture of carbon dioxide based upon the energy requirements of extracting the gas at low concentrations. Of course, that was before Klaus Lackner and his daughter, Claire, came up with their passive plastic extractors, http://www.earth.columbia.edu/articles/view/2523.

I don’t understand what’s being proposed. Is the claim that olivine pulverization is preferred, in terms of energy, to, say, artificial trees and their enhanced Urey reactions? I see the energy calculation for grinding, but how does transportation energy enter? Presumably, doing a raster or semirandom distribution over large arid countrysides will require a few joules itself. (Released balloons which drop olivine powder semirandomly? A lot of balloons!) For that matter, I wonder what the energy requirements are for distributing and setting up 500 million artificial trees? I like the passivity of the trees, but they do need water, and that rules out a lot of places where planting them would be ideal. (Unless the water gets trucked in, too …. *sigh*) I think there’s a hidden cost to both proposals, too, and that’s the cost of *dismantling* 500 million trees. After all, if CO2 gets driven well below pre-industrial levels, we’re inviting a different kind of climate change: an ice age.

That looks very significant — I wonder why we are not hearing more about this proposal. Here’s the abstract from the paper you linked to:

Human CO2 emissions may drive the Earth into a next greenhouse state. They can be mitigated by accelerating weathering of natural rock under the uptake of CO2.

We disprove the paradigm that olivine weathering in nature would be a slow process, and show that it is not needed to mill olivine to very fine, 10-µm size grains in order to arrive at complete dissolution within 1-2 years. In high-energy shallow marine environments olivine grains, and the reaction products on the grain surfaces that otherwise would greatly retard the reaction, are abraded, so that the chemical reaction is much accelerated. When kept in motion, even large olivine grains rubbing and bumping against each other quickly produce fine clay- and silt-sized olivine particles that show a fast chemical reaction.

Spreading of olivine in the world’s 2% most energetic shelf seas can compensate for a year’s global CO2 emissions and counteract ocean acidification at a price well below that of carbon credits.

However, there are difficulties with this proposal, not exactly technical ones, but difficulties they are.

1. Who is authorized to make contracts to perform CO₂ sequestration this way?
2. Who is going to pay for mining 7 km³ (23 Gt) of olivine per year and dumping it into the sea?
3. Who is going to supervise the process to make sure reported quantities match delivery?
4. Who is going to supervise the supervisor?

Anyway, it smells awfully like a hotbed of corruption.

In addition, it is more likely than not that resetting the atmospheric CO₂ concentration to a preindustrial level would decrease foliage cover substantially in warm arid environments.

Indeed, Revelle and Suess assumed the oceans were well mixed, which they are, in fact, not. Accordingly, I wonder how much of the “olivine miracle” is limited by the constraints from the natural process of getting carbon dioxide to the olivine.

By the by, in case anyone’s interested, “hypergeometric” is Jan Galkowski of Westwood Statistical Studios. The handle comes from the name of my own blog, http://hypergeometric.wordpress.com, which is devoted to climate and Bayesian computation.

In the worst case (of low mixing in ocean and low rate of exchange of CO2 between ocean and air), we’d just have to let some of it weather in various places in the ocean, and some directly in the atmosphere — right?

In fact, if there is low ocean mixing, maybe that would have some benefits — this could then be used more easily than otherwise to mitigate effects of ocean acidification in certain localities, e.g. distressed coral reefs (since it would have a concentrated effect).

Uh, no. It brings into question the entire proposition. Remember, there are no costs assigned to transport or placement or disposition.

Further, if the ocean is poorly mixed, then while waters in the vicinity of the olivine might become CO2-poor, waters farther away could remain rich.

But this is naivete: CO2 does not remain a gas once in the ocean; it turns into various ions, and their interaction is complicated. I don’t claim to understand that. After investigation, I do say that the idea of bathing raw olivine and having it pick up CO2 in whatever form is convenient avoids the question of how the CO2 gets nearby. I think dropping crushed olivine from balloons is the competitive proposal, which gets us back to the question of the energy for grinding to the optimal size, etc.

That looks very significant — I wonder why we are not hearing more about this proposal. Here’s the abstract from the paper you linked to:

Human CO2 emissions may drive the Earth into a next greenhouse state. They can be mitigated by accelerating weathering of natural rock under the uptake of CO2.

We disprove the paradigm that olivine weathering in nature would be a slow process, and show that it is not needed to mill olivine to very fine, 10-µm size grains in order to arrive at complete dissolution within 1-2 years

Why aren’t we hearing more about it? That question, I think, arises for all variants of enhanced weathering. Getting people to talk sense on it is a Sisyphean task I, for one, take long breaks from; looks like November of last year was my most recent foray before this one. (How would you characterize the responses?)

Our host may have some insight. Years ago, I used to mention enhanced weathering here. It becomes relevant in this thread; why does it again fall to me to mention it?

The paper mentions multiple days of stirring in water. Far from no grinding, that is, I suspect, a very inefficient grinding process that uses orders of magnitude more energy than modern mining equipment (whose usage I get as 25 kWh/tonne for 100-micron comminution, 50 kWh/tonne for 25-micron), and also orders of magnitude more than is available in the “2% most energetic shelf seas”.
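
For scale, the quoted 50 kWh/tonne can be put against the 7 km³ (23 Gt) of olivine per year mentioned earlier in the thread; both figures come from this thread, but the arithmetic and the olivine density are mine:

```python
# Rough scale of the comminution energy for the olivine proposal.
# Inputs from the thread: 23 Gt olivine/yr, 50 kWh/tonne for 25-micron
# grinding. Olivine density ~3300 kg/m^3 is my assumed figure.

KWH_J = 3.6e6                       # joules per kilowatt-hour
RHO_OLIVINE = 3300.0                # kg/m^3, assumed

olivine_t = 23e9                    # tonnes of olivine per year
volume_km3 = olivine_t * 1000 / RHO_OLIVINE / 1e9   # sanity check on "7 km^3"
grind_J = olivine_t * 50 * KWH_J    # J/yr for 25-micron comminution
grind_TWh = grind_J / 3.6e15        # terawatt-hours per year

print(f"olivine volume: {volume_km3:.0f} km^3/yr")
print(f"grinding energy: {grind_J:.1e} J/yr (~{grind_TWh:.0f} TWh/yr)")
```

That comes out around 1150 TWh per year of grinding energy alone, a few percent of world electricity generation, before transport or spreading is counted.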

I read your “November of last year” link (just now), and the original article too. I guess the responses were one of each of a few stereotypical types: reasonable questions, useful link, and apparently-nonsense objection (though I would have liked to see your response to that last one, to check my guess that it was nonsense). That might not be too bad given the original article’s premise, which seems to take seriously the idea that maybe we should avoid geoengineering — or even doing research that might make it seem plausible — since it would reduce motivation to engage in other solutions some people deem morally superior (though they are clearly inadequate). OTOH I guess I don’t know how the commenters will be self-filtered by what they think of that article.

– There is a U.S. government (NAS) study comparing approaches, expected to be released by Fall 2014. (So the wikipedia page hasn’t been fully updated recently.) I wonder if anyone here knows the details and has read the report (if it’s out by now)? The study is:

The Geoengineering Climate: Technical Evaluation and Discussion of Impacts project of the National Academy of Sciences, funded by United States agencies including NOAA, NASA, and the CIA, commenced in March 2013 and is expected to issue a report in fall 2014.

“An ad hoc committee will conduct a technical evaluation of a limited number of proposed geoengineering techniques, including examples of both solar radiation management (SRM) and carbon dioxide removal (CDR) techniques, and comment generally on the potential impacts of deploying these technologies, including possible environmental, economic, and national security concerns. ….”

I think there’s a hidden cost of both proposals, too, and that’s the cost of *dismantling* 500 million trees. After all, if CO2 gets driven well below pre-industrial, we’re inviting a different kind of climate change, an ice age.

In the strewn-olivine-powder approach, there are no “trees” and nothing to dismantle. Nothing but powder. Each 25-micron olivine particle consumes, and is consumed by, atmospheric CO2 in roughly one year of lying where it fell.

Then it’s gone. There is no chance of a sorcerer’s apprentice’s magic broom here. And I think that must also be true of Lackner’s “trees”: one certain way to stop them capturing, even sooner than one year, is to stop paying their electric bill.

The energy inputs I looked at were comminution to 25 microns and lifting to 5 km above ground level. Released at this height, 25-micron particles can ride the wind for about a day, so their deposition footprint, from each release point, would be ~1000 km^2.
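The lifting energy and the ride time can be checked on the back of an envelope. A sketch, using Stokes settling for a 25-micron sphere (olivine density and air viscosity are assumed round numbers):

```python
G = 9.81              # m/s^2
RHO_OLIVINE = 3300.0  # kg/m^3, assumed particle density
MU_AIR = 1.8e-5       # Pa*s, dynamic viscosity of air

# Lifting energy: m*g*h for one tonne raised 5 km, converted J -> kWh
lift_kwh_per_tonne = 1000.0 * G * 5000.0 / 3.6e6

# Stokes settling velocity of a 25-micron-diameter sphere
r = 12.5e-6  # m
v_settle = (2.0 / 9.0) * r**2 * G * RHO_OLIVINE / MU_AIR
fall_hours = 5000.0 / v_settle / 3600.0

print(f"lift: {lift_kwh_per_tonne:.1f} kWh/tonne")
print(f"settling speed: {v_settle:.3f} m/s, fall from 5 km: {fall_hours:.0f} h")
```

The lift comes out near 14 kWh/tonne, well below the comminution energy, and the fall time lands at roughly a day, consistent with the ~1000 km^2 footprint estimate for a steady wind.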

Dirigibles to lift the stuff only 0.5-1 km and pay it out as they travelled, then return to the powder plant, would save some energy, but that’s not consistent with a do-nothing-until-a-Herculean-effort-is-required approach, so I figured on a dense stream of the stuff being injected straight up and dispersing into a wind-riding plume at 5 km.

The slides you linked to are impressive and interesting. BTW on page 14 of that pdf, it looks like the diagram of the deployment of their improved collection system was “censored out” of the larger image by reducing the image resolution in a hand-drawn area. (At least it’s hard to explain what it looks like as accidental image corruption.) Yet they have photos and other diagrams of the actual system on subsequent pages. Any idea why?

(My reply above was to G.R.L. Cowan; this one is to hypergeometric.) Do you have an opinion on how close to viable that CO2 air extraction technology is (as a complete system which includes doing something with the CO2 you get)? That article, and some other things I googled and skimmed about it, didn’t make it look promising in the near term. (I’m sure some kind of extraction tech will work well eventually, once we have molecular manufacturing — I’m just asking about the technology you linked to.)

Lackner continues to improve his materials and techniques. The most comprehensive presentation I know of is a 2010 YouTube video (https://www.youtube.com/watch?v=ZUluQPVilYc). The great oceanographer and climate scientist Wally Broecker, who co-created the idea of artificial trees, has summarized this work (see http://dcgeoconsortium.org/2013/12/15/wil-burns-broecker-on-air-capture-geoengineering/ and http://www.elementascience.org/article/info:doi/10.12952/journal.elementa.000009). I think Broecker and Lackner represent a view which might be called climate realism, one with which I’m increasingly coming to sympathize. Still, the estimates are that to capture all fossil fuel emissions and draw down atmospheric CO2 by 10 ppm per year would demand deploying and operating 500 million of Lackner’s tree units. That would be an industry about 20% the size of the entire current energy industry and, while it would provide a good many jobs, would cost US$200 to US$1000 per tonne of CO2 captured and removed. The high end of that range is considered unaffordable.
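A rough consistency check on those scale and cost figures, assuming ~36 Gt CO2/yr of fossil emissions (my round number for the early 2010s) and the standard ~2.13 GtC-per-ppm conversion:

```python
GTC_PER_PPM = 2.13      # GtC per ppm of atmospheric CO2 (standard conversion)
CO2_PER_C = 44.0 / 12.0

gt_co2_per_ppm = GTC_PER_PPM * CO2_PER_C   # ~7.8 Gt CO2 per ppm
annual_emissions = 36.0                    # Gt CO2/yr, assumed round number
drawdown = 10.0 * gt_co2_per_ppm           # 10 ppm/yr of drawdown
total = annual_emissions + drawdown        # Gt CO2 to capture each year

units = 500e6
tonnes_per_unit = total * 1e9 / units      # implied duty per tree unit
cost_low = total * 1e9 * 200.0             # US$/yr at $200/tonne
cost_high = total * 1e9 * 1000.0           # US$/yr at $1000/tonne

print(f"capture needed: {total:.0f} Gt CO2/yr, ~{tonnes_per_unit:.0f} t/unit/yr")
print(f"annual cost: ${cost_low / 1e12:.0f}-{cost_high / 1e12:.0f} trillion")
```

That works out to over 100 Gt CO2 per year, a couple of hundred tonnes per unit per year, and an annual bill in the tens of trillions of dollars, which is why the high end of the cost range is called unaffordable.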

Still, as glaciologist Richard Alley has said, Wally Broecker is a very smart scientist who has been right about a lot of things, and it’s probably not wise to bet against his position.

There is more to the issue of getting non-fossil power onto the grid than cost and GHG impact. Here in California, the stability and reliability of the power grid is controlled by the system operator, CAISO. They are tasked with keeping the system stable, and if they have to curtail energy fed into the grid from wind or solar, they will; they have done it and project to keep doing it. We are talking about thousands of MW of renewable generation curtailed at any instant. Now our Governor is calling for 50% of CA (annual) generation to come from renewables, compared with the current target of 33%. Can the ‘system’ handle it? Ask CAISO. The situation is complicated when you take the big view: CA is only one segment of a larger grid, the Western Grid (WECC), which needs to work in sync to keep the entire region stable. Decisions to curtail the solar or renewable portion are tied to the overall stability of the Western Grid. Google’s aim of making non-fossil electricity competitive with other sources, while still achieving overall GHG reduction in the mix, needed to be considered at the level of this overall system. Gas-fired generation is the larger portion of the electricity supply in this state.

Power systems are not my thing, but I have dabbled in control theory. I don’t know if the lack of progress here is due to the problem being genuinely hard and the technology not being up to the task, a dislike of an economically distributed system by those wedded to central control (thereby incurring a big price of anarchy), simple inertia on the part of owners of the electrical grid, or a lack of appropriate investment in R&D. I know that in many grids stability is achieved by using massive turbines as offsetting loads rather than clever switching, and there is a parallel technological problem in robot arm control which, far back in the day, used to be solved by making arms massive relative to the loads they picked up. Now active controls are used, and it is surprising these aren’t used in grids.

Maybe they are and this isn’t the problem.

But given Azimuth’s interest in mathematical approaches to these problems, I wonder if there’s anyone in the membership who knows about these problems and could introduce us.

Thanks for the links and reminding me of this stuff! It might indeed be a problem set for Azimuthers…

I don’t know many details and I’m not an engineer — but in 2012 I worked for 6 months at a major German company doing the daily electricity load forecast for Bavaria. It was yet another job validating my rule of thumb about Tsherman industry: they are not interested in intelligence. Wasting billions on bad business and dragging heels is preferred over investing millions in smart heads. (And their biggest dream is always automating intelligence. Heck, even in the software industry the programmerz are the first to get laid off, not the bean counters.)

Soon I was able to beat the black boxes by semi-automated eyeballing of historical data and doing a weighted average using just Microsoft Excel. Then my peers also started to improve things, e.g. getting a better solar prognosis, which at the time was based on Munich airport representing the whole of Bavaria — which grew exceedingly stupid given the massive ramp-up of solar power in Bavaria (still more than the whole U.S., perhaps). The prognosis also turned out helpful for accounting data quality control. It could have saved them a few millions…

But heck, peanuts, as the German banker once said!

So, the big bosses weren’t interested, not even asking the world-class expert sharing my office. The whole prognosis job got moved north (where they were performing worse) – and the switch was done December 31, my last day of work and technically the hardest: this was the worst day for a prognosis, and then even a data stream ripped, as happens on such dates. But the bosses didn’t care. Heck, we didn’t even do a parallel overlap to cross-check results; no, we just switched the job north from one day to the next, during the most exceptional holidays. Luckily no brownout happened. Just heat up enough coal baseload backup…

Thank you so much for your interesting re-cap on forecasting of electricity loads.

I wonder how much, in your immediate experience, the lack of appreciation for your forecasts happened because the audience of users did not understand the problem, or did not understand the methods and, so, considered them equivalent to any ad hoc technique they could come up with?

I find it common in my own applications that consumers of my results want to know why they should trust them, when they haven’t the data or the time to develop a track record. It is difficult to convey that if the data, however limited, are good, and everything I’ve been told is correct, and I have not made mistakes in modeling, then the prediction is the best we can have, based upon principles. Sure, if it matters, I can develop alternative models, switch in different priors, and so on, and see where the forecasts land relative to one another. But the idea of “correct by construction” is something which seems to elude users.

The prognosis job as I found it had been set up and run for some two years by someone who then fell very sick, and I was hired just as a long-term temporary ersatz (yes, we Germans are LAO over Obamacare!). In the end I did not fight to keep the job, as I hadn’t been there for long, plus I’m actually not the early-bird type (doing 2 hours of hardcore data crunching before breakfast!). My 2 experienced peers shrugged their shoulders and sighed – as they were doing almost every week upon the latest news of the big bosses’ decisions (like quitting a stored-hydro extension project, or investing with some slick Brazilian billionaire).

The big bosses knew the job was necessary, required and monitored by the German federal grid agency (Bundesnetzagentur). Possibly they even got complaints, if not a half-million fine, from the agency for mediocre prognosis. (There was one complaint one day when I was still being trained – the grid got to the edge of overload due to a solar data glitch.) — But at that time they didn’t care and were dragging their heels on the inevitable German Energiewende. Only recently do they seem to have got a clue: http://www.tagesschau.de/wirtschaft/eon-neuausrichtung-101.html (The German Energieminister cum Vizekanzler plus the Bavarian Ministerpräsident seem to still not get the Energiewende thing. There could be a bad awakening: the company is too big to fail.)

Whatever: the first thing to do when they need to save money is to get rid of heads. Anyhow, right before I came they had hired external experts for 10x the money, so this budget was overdrawn – even though my pay was negligible. This is classical automatic dyscalculia of the bean counters. No chance. Move on.

I haven’t looked back and not yet asked the guys how things are doing there today.

—–
BTW right now I should **look for yet another new job**. But I’m thoroughly fed up with wasting brain for the bean counters. I now want to keep it for myself to do private math. Now looking for a 400€+ job as organic pig shepherd. **Seriously!** I’m an expert in throwing pearls to pigs. They love it. And I love them for having fun. Alternatively, I also love gardening. Ideally an apprenticeship in organic farming. Munich West (Allach/Gröbenzell) or Bayerischer Wald.

P.S.: My “customers” were in the lucky (but underappreciated) situation of having fast “feedback from reality”: every month (or week?) we got a time series of real vs. prognosis. At least my 2 peers and me discussed it and what needed improvement (the Bundesnetzagentur wants 15-min precision; some slopes cause problems; etc.). (BTW, watch out doing one year in 15-min steps in MS Excel: around line 10786 you get a rounding error. Srsly.) So, the result counted (could have), not the method.

The point is: Customer cares about money, and can’t even really handle this (dunno if the budget box-thinking is typical German only). I.e. often even the tactical reasoning is lacking due to oversimplification and backwards clinging. Caring about reality (strategico-logistical thinking) would be too much to expect of the poor suits’ busy brains. (I am now officially giving up on this waste of time. Heck, the planet is burning…)

I mean, a networking company isn’t going to design and manufacture its own power plants either, not even in specialized, extraordinary circumstances — as Google’s failure demonstrates beyond reasonable doubt.

However, they could well raise awareness of the fact that our current information technology uses ten billion times more energy than necessary.
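That kind of factor is roughly what you get by comparing practical switching energies with the Landauer limit, kT ln 2 per bit erased. A sketch (the per-switch and per-instruction energies are my assumed round numbers; the quoted ten-billion factor falls between the two levels of accounting):

```python
import math

K_B = 1.380649e-23  # J/K, Boltzmann constant
T = 300.0           # K, room temperature

landauer = K_B * T * math.log(2)  # minimum energy to erase one bit, ~3e-21 J

per_switch = 1e-14       # J, assumed rough CMOS gate switching energy
per_instruction = 1e-8   # J, assumed system level: ~10 W at 1e9 ops/s

print(f"Landauer limit: {landauer:.2e} J/bit")
print(f"gate level:   {per_switch / landauer:.1e}x above the limit")
print(f"system level: {per_instruction / landauer:.1e}x above the limit")
```

Depending on whether you count a single gate switch or a whole instruction on a real machine, current hardware sits somewhere between millions and trillions of times above the thermodynamic floor.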

What is more, as an important customer and a company with the largest footprint on the web, they are in a perfect position to pressure manufacturers to invest into research to improve energy consumption by several orders of magnitude.

“If you’re alone, why is that?” is a question whose reasonableness can’t, I think, be completely assessed right away. Although it is, now-a-days, pretty easy to find out without asking.

In answering it, I expressed a confidence in JohnRussell40’s better nature that I did not feel. It has been two months; has he nudged any geoengineering discussions away from witlessness?

There was a useful link posted, as I acknowledged there, by royboy61.

The other commenter is of a type best ignored (and I’ve been doing that in this thread too. I’m not sure how John doesn’t see we’d be better off without him). They know enough to sound sciency, but, as you say, they talk nonsense.

“… somehow my answer appeared elsewhere in the thread”. Aha, right above your misplaced comment is another one of mine, which also got misplaced! I was wondering where it went. Looks like some kind of wordpress bug.

To continue the discussion down here: the problem with completely ignoring “sciency-sounding nonsense” is that some other reader who doesn’t immediately perceive it as nonsense may think you are ignoring a legitimate counterargument or question. (I was not sure it was nonsense, since I didn’t take the time to analyze it — it just sounded on first skimming like it was raising objections that made no sense given what your proposal actually was.)

So I’d suggest replying once (in any one comment thread), politely pointing out the basic errors, and then perhaps ignoring that poster after that (if they don’t respond reasonably).

And back to substantive issues with the olivine proposal — if I understood you right, you are skeptical that the wave energy in the most active coastal regions would actually be enough to grind up the olivine.

G.R.L. Cowan’s newspaper comment also linked to this book, available free online: http://www.inference.phy.cam.ac.uk/withouthotair/ . I read its 10-page summary, and it too looks very interesting. It gives and explains numbers for different options for U.K. energy consumption and provision.

“Ray Pierrehumbert cites an equilibration time between atmosphere and oceans of 500 years in his Section 8.4 (PoPC), and devotes Problems 8.13 and 8.14 to the question.”

There is actually no good measure of a time constant since the behavior is fat-tailed and calculations of statistical moments such as mean and variance diverge.

The fat tail comes from the solution of the diffusion equation, where time appears in the denominator inside the exponent. If it were in the numerator, you would get a classical damped time constant. But that’s not how the physics works.

As it is, the number could be hundreds or thousands of years depending on what criteria one uses.

The fat tail is not in a statistical distribution but in the response function. Diffusion away from a point source is fat-tailed, while discharge of a capacitor is thin-tailed, to pick a counter-example. The tail is caused by the nature of the random walk, leading to a slow convergence to any kind of asymptote. Applying an Ornstein-Uhlenbeck reversion-to-the-mean property is one way to “thin” the tail.

I should probably not be using the term “fat-tailed” to describe this behavior, because it is not how the term is conventionally applied — after all, this behavior was known long before Taleb popularized the term in his Black Swan books.

The topic is probably worth talking about more because it is not well understood by laypeople, but is very relevant in describing thermal uptake by the oceans as well as CO2 sequestration. These processes take hundreds of years just because of the role of diffusion and the nature of random walk.
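To make the diffusive-vs-damped contrast concrete, here is a minimal numerical sketch comparing the t^(-1/2) tail of the 1-D diffusion Green's function (evaluated at the source) with a classical exponential relaxation; the units are arbitrary, only the time-dependence matters:

```python
import math

D = 1.0  # diffusivity in arbitrary units

def diffusive(t):
    """1-D diffusion Green's function at the source point: decays like t**-0.5."""
    return 1.0 / math.sqrt(4.0 * math.pi * D * t)

def damped(t, tau=10.0):
    """Classical thin-tailed relaxation with time constant tau, for contrast."""
    return math.exp(-t / tau) / tau

for t in (10.0, 100.0, 1000.0):
    print(f"t={t:6.0f}: diffusive={diffusive(t):.2e}  damped={damped(t):.2e}")
```

By t = 1000 the exponential response is effectively zero while the diffusive one is still of order 10^-2: that lingering tail is why there is no single good time constant for ocean uptake.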

For CO2 sequestration, one of the consensus models is the Bern model. This model is expressed as a sum of exponentials — a formulation that is also well known as an approximation to the solution of a diffusion equation.
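As an illustration, one widely quoted fit of the Bern impulse-response function can be evaluated directly (coefficients as reported around IPCC AR4; treat the exact values as assumptions):

```python
import math

# Sum-of-exponentials fit of the Bern carbon-cycle impulse response.
# The first term has an infinite time constant: that fraction never decays
# on these timescales.
A = [0.217, 0.259, 0.338, 0.186]
TAU = [float('inf'), 172.9, 18.51, 1.186]  # years

def airborne_fraction(t):
    """Fraction of a CO2 pulse still airborne t years after emission."""
    return sum(a * math.exp(-t / tau) for a, tau in zip(A, TAU))

for t in (0.0, 10.0, 100.0, 1000.0):
    print(f"{t:6.0f} yr: {airborne_fraction(t):.2f}")
```

Even after a thousand years, this fit leaves over a fifth of the pulse in the atmosphere, which is the sum-of-exponentials way of expressing the same slow diffusive tail.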
