Cassava, a tuber, and sorghum, a grain, are staple crops that form the dietary base for hundreds of millions of people. But while there’s a great deal of protein in these plants, there’s also cyanide in their leaves. Whether the leaves are poisonous or not depends partly on how much protein they contain – more protein means that the cyanide is less toxic and the plants are safe to eat for man and beast alike. But according to a new study reported by Reuters, higher carbon dioxide (CO2) concentrations mean both less protein and more cyanide, a toxic combination.

According to the article, an Australian team grew cassava and sorghum under different CO2 concentrations that approximated the various projected climate disruption scenarios for the rest of this century. What they found was that “the amount of cyanide relative to the amount of protein increases” and that “[a]t double current CO2 levels, the level of toxin was much higher while protein levels fell.” As a result, cassava-dependent communities could be poisoned, especially when experiencing a drought.

The article pointed out a greater worry, however – at high CO2 concentrations, the crop yields fell significantly:

[Monash University researcher Ros Gleadow said] “There’s been this common assumption that plants will always grow better in a high CO2 world. And we’ve now found that these plants grew much worse and had smaller tubers.”

Aerosols – pollution, airborne dust, black carbon particles, even the sulfate droplets formed from sulfur dioxide (SO2) – have many different effects. Black carbon absorbs solar radiation, heating the air or melting the snow and ice it settles on. Sulfur dioxide cools the planet when blasted into the stratosphere by a volcano, but can warm the Earth and produce acid rain when it stays lower in the atmosphere. Airborne dust and pollution increase the rate of cloud formation – except when they decrease it instead. Scientists know that clouds and aerosols interact strongly, and since the effects of clouds on climate – and of climate on clouds – remain one of the few major unknowns in climate models, improving scientific understanding of aerosols is similarly critical to improving climate model predictions.

One of the recent problems with aerosols is that satellite measurement-derived estimates of aerosol radiative forcing (hereafter SDRF, for satellite-derived radiative forcing) have differed from modeled predictions of aerosol RF (MRF, modeled radiative forcing) by up to a factor of two, well outside the margins of error for both measurements and models. Even worse, there was also a statistically significant difference between two different sets of SDRFs. Until now, scientists haven’t been able to determine why there was such a large difference among the SDRFs and between SDRF and MRF. A new paper published in the journal Science claims to explain not only the differences between satellite-based and modeled aerosol RFs, but the differences between the two satellite-based datasets as well.

According to the paper, the discrepancy is a result of two assumptions made in the process of calculating aerosol RFs. The first assumption made in the calculation of SDRF is that “there is no radiative effect of the aerosols within cloudy sky areas.” Models, on the other hand, don’t make this assumption. The second assumption is that there were no anthropogenic aerosols prior to 1750 (defined as the start of the industrial era) – a false assumption. In addition, there is a third difference between SDRF and MRF that isn’t a difference in starting assumptions – the SDRFs don’t have complete Earth coverage. The MODIS satellite measurements that are the basis of most calculated SDRFs can’t be taken over highly reflective terrain like ice and desert, and so significant swaths of the Earth’s surface can’t be observed. Again, the models don’t have this problem.

There are two ways to show that the differently calculated RFs are actually statistically the same – demonstrate that the SDRFs can be made equal to the MRFs, or demonstrate that the MRFs can be made equal to the SDRFs. The paper does both. First, by using model data to fill in the places the satellites can’t measure and then changing the initial assumptions used in the SDRF calculations, the paper shows that the satellite-based RFs become equal to the modeled RFs (marked in blue in the image at right). Then, by changing the model parameters to match the assumptions underlying the SDRFs, the paper shows that the MRFs become equal to the satellite-based numbers (marked in red in the image at right). Notice that the two MODEL lines (Int and Ext) look very similar to the MODIS (Model) line, just as the MODEL (Sat & opt obs) line looks very much like the MODIS line.
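The coverage-gap step is the easiest one to see with a toy calculation. The sketch below uses entirely invented numbers – nothing in it comes from the paper – but it shows the principle: if the cells a satellite can’t retrieve (bright ice and desert) have systematically different forcing, then averaging only the observable cells biases the global mean, and filling the gaps with model values removes most of that bias.

```python
# Toy illustration of the satellite coverage gap (all numbers invented):
# the satellite can't retrieve over bright surfaces (ice, desert), and
# here those cells are given systematically weaker aerosol forcing, so
# the satellite-only mean is biased. Filling gaps with model values
# recovers something close to the true global mean.
import random

random.seed(0)

N = 1000
cells = []
for _ in range(N):
    bright = random.random() < 0.3           # ice/desert: no retrieval
    base = -0.3 if bright else -1.2          # weaker forcing when bright
    true_f = base + random.gauss(0, 0.1)     # "true" forcing, W/m^2
    model_f = true_f + random.gauss(0, 0.05) # model tracks truth w/ noise
    cells.append((bright, true_f, model_f))

full_mean = sum(t for _, t, _ in cells) / N
sat_cells = [t for b, t, _ in cells if not b]
sat_mean = sum(sat_cells) / len(sat_cells)
blended_mean = sum(m if b else t for b, t, m in cells) / N

print(f"true global mean:    {full_mean:+.3f} W/m^2")
print(f"satellite-only mean: {sat_mean:+.3f} W/m^2 (coverage-biased)")
print(f"model-filled mean:   {blended_mean:+.3f} W/m^2")
```

Of course, if the model is badly wrong in the masked regions, the blended estimate inherits a proportional share of that error – which is exactly the first caveat the post raises further down.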

The paper also identifies which specific aerosol is mostly responsible – black carbon, a.k.a. soot. As the image shows, there has been a massive increase in the amount of black carbon present in the atmosphere since pre-industrial times (“more than a factor of six”). As a result, the reflectivity of the Earth’s atmosphere has dropped:

The global mean annual average single scattering albedo computed in the model for all aerosols at 0.55 um is 0.986 at pre-industrial conditions and 0.970 at present-day conditions. Thus the aerosol in present times is approximately twice as absorbing as that in pre-industrial conditions.
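The “twice as absorbing” arithmetic checks out: the relevant quantity is the co-albedo (1 − SSA), the fraction of intercepted light the aerosol absorbs rather than scatters. Using only the two SSA values quoted above:

```python
# Check of the "twice as absorbing" claim, using the quoted single
# scattering albedo (SSA) values. The co-albedo (1 - SSA) is the
# fraction of intercepted light the aerosol absorbs.
ssa_preindustrial = 0.986
ssa_present = 0.970

absorb_pre = 1 - ssa_preindustrial   # ~0.014
absorb_now = 1 - ssa_present         # ~0.030
ratio = absorb_now / absorb_pre

print(f"absorbed fraction: {absorb_pre:.3f} -> {absorb_now:.3f}")
print(f"ratio: {ratio:.2f}x")        # ~2.14, i.e. roughly double
```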

There are two caveats, however. The first is that the SDRFs are not model independent at this time – model data is used to fill in the parts of the satellite observations that the MODIS instruments can’t detect. This means that, if the models are wildly wrong, then the SDRF calculations are going to be wrong as well, although not as wrong as the models would be alone (the error would be proportional to the area filled in with model data).

The second, and more important, caveat is that the changed assumption about pre-industrial aerosol levels may not actually be correct. The fact that the new assumption also explains the differences between the two different SDRFs suggests that it is likely correct, but further research will be necessary to test its validity.

All in all, this is a valuable study that will both improve climate modeling and quell some concerns about differences between models and observations of aerosol radiative forcing.

As an energy supply, the methane held in clathrate form under the Arctic, off the coasts of Japan and India, and elsewhere around the world holds significant potential. The article says that these deposits are estimated to hold trillions of cubic meters of methane that could, if questions of scale and safety can be worked out, power hundreds of millions of homes for a decade or more. But there are significant problems.

The first is that the methane held in the clathrates is difficult to extract – either the ice has to be melted or the pressure that helps keep the methane locked into the ice must be lowered. The article says that researchers tried the melting method and found it took too much energy, but that the decompression technique appeared to work well – and has been powering an industrial furnace in Siberia for decades.

Which brings us to the second problem. Extracting methane from clathrates at large scales runs the risk of destabilizing the entire deposit, and depending on where the deposit is and how large it is, that could result in underwater landslides that cause tsunamis. Any life close to the resulting “methane burp” would probably be asphyxiated as well. And if the burp was really big, it could produce short-term climate effects around the world – methane is a far more powerful greenhouse gas than CO2, especially over short time horizons.
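Methane’s potency relative to CO2 is easy to quantify roughly. The sketch below uses commonly cited round global warming potential (GWP) values and a hypothetical one-megatonne release – both are assumptions for illustration, not figures from the article:

```python
# Rough CO2-equivalent of a hypothetical methane release, using
# commonly cited round global warming potentials for methane
# (assumed values; exact figures vary between assessments).
GWP_20YR = 80     # methane vs CO2, 20-year horizon (assumed round value)
GWP_100YR = 28    # methane vs CO2, 100-year horizon (assumed round value)

release_mt_ch4 = 1.0  # hypothetical 1 Mt methane burp

co2e_20yr = release_mt_ch4 * GWP_20YR
co2e_100yr = release_mt_ch4 * GWP_100YR
print(f"20-year CO2-equivalent:  {co2e_20yr:.0f} Mt")
print(f"100-year CO2-equivalent: {co2e_100yr:.0f} Mt")
```

The gap between the two horizons is why a large burp would show up as a short-term climate effect: methane is potent but relatively short-lived in the atmosphere.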

Some researchers are hoping to extract clathrate a different way, though – by replacing the methane in the ice with CO2. This has the supposed benefit of sequestering the CO2 – but only if you assume that there will never be such a thing as a “CO2 burp” out of a destabilized CO2 clathrate deposit.

I understand the interest in this, and I think additional research is warranted. But methane clathrate harvesting technology should not be deployed at industrial scale until the risks and potential safety issues have been well documented and are understood.

I wouldn’t expect this to show up in prices yet, given that I don’t think agricultural commodity prices tend to look much farther than a couple of years out. The tests were run at CO2 levels that we wouldn’t expect to see for decades, depending on how fossil fuel energy use changes (accelerates, scales back, or stabilizes).

It has been my considerable experience that things of this nature tend to show up in prices immediately, especially in the back months. However, if you think that it hasn’t shown up yet, you can always buy sorghum year 2015 options at a good price and if you’re right, you’ll get rich. The window of ag pricing is about 3.5 years, however the market bias (which is sometimes shown in the carry spreads) can be affected by expectations of events and conditions decades in the future. However, since it is a matter of supply and demand, sorghum yields right now are through the roof because of better technology and farming methods. My sorghum buddy traders tell me that there’s so much sorghum out there that the mills are having trouble with storage… kind of like the corn debacle in 1986-1987.

I’d be more concerned with disruption in the transport and supply of the ag commodities brought to market. That is my real concern. That being said, we’re having a bumper crop in corn, wheat, beans, and oats, and the price of most is going down. The exception is soybeans – China seems to be buying our entire crop… kind of like Russia did with our grain crop in 1973.

I thank you for putting this article out, as now I have added sorghum futures to my screen and will keep a finger on the pulse of that market. Who knows, maybe there will be a good trading opportunity :)

It would be interesting to see if the cyanide market has a reaction due to increased supply.

BTW, to forestall carping at the wet chemical method vs the new NDIR: “Comparison of these measurements on the basis of old wet chemical methods with the new physical method (NDIR) on sea and land reveals a systematic analysis difference of about minus 10 ppm.”

BTW I also wonder how any of you manage to explain the discrepancy between higher levels of CO2 and cooling anyway…UNPREDICTED by the IPCC scenarios.

So….given the inability to predict for the short term, and the lack of effect of elevated levels of CO2 on temperature since 2002, why are we doing this song and dance?

Did you know that in spite of the derivatives market having crashed the banks, they are planning to trade both futures and derivatives in the planned CO2 market?

While the worst effects were encountered at 700ppm, the article says that protein/cyanide proportion tilted towards the latter as the CO2 concentration went up. The article also noted that drought conditions made the issue worse.

Plants are complicated things dependent on a great many variables. Any stress is liable to have biochemical/photosynthetic responses. Multiple stresses simultaneously often produce effects greater than the sum of the individual stresses’ symptoms. Respiration and transpiration changes cascade through the photosynthetic process. Stresses and environmental conditions are also liable to affect nutrient uptake, or even lock certain nutrients out of the plant completely (there may be plenty of magnesium in the soil, but the root mass refuses it, even when the plant shows obvious magnesium deficiency).

What I’m saying, Judy, is that while the study does not definitively foretell the future, it’s best that we examine these issues (as the researchers did, in a gradient) so that we don’t get caught with our proverbial pants down.

1. That’s weather, not climate. Come back to me when you’ve got a 15 year cooling trend. Oh, that’s right, there isn’t one.
2. Temperature is expected to rise exponentially, not linearly, and slow points are to be expected. Entire papers have been devoted to explaining this. You should read a few (here’s a good starting point: http://www.scholarsandrogues.com/2009/05/19/the-weekly-carboholic-climate-disruption-lowering-juneau-sea-level/#cool)
3. Lest you think 1 and 2 were too dismissive, there’s something called endpoint sensitivity. January 2002 was the third hottest monthly measurement in that period, following January 2007 and March 2002. All of 1999, 2000, and 2001 were colder than that. So the starting point, being the start of an El Nino, was unusually hot, while 2008 and up through April 2009 was unusually cold due to La Nina effects. So the careful choice of endpoints produced a short-term cooling trend in long-term data that shows nothing of the sort. Weather vs. climate again.
4. The error of that trend is nearly double the trend itself because there are so many trend outliers and the number of independent samples is so low (again, because this is essentially weather instead of climate).
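Endpoint sensitivity is easy to demonstrate with synthetic data. The series below is invented purely for illustration – a steady warming trend with hand-placed hot and cold years standing in for the 2002 El Nino and 2008 La Nina endpoints; it is not the actual temperature record:

```python
# Synthetic annual "temperatures" (invented, not the real record):
# a steady +0.02 C/yr warming, plus a hand-placed hot spike at the
# start of a short window and a cold dip at its end.
import statistics

years = list(range(1990, 2010))              # 20 years of "data"
temps = [0.02 * (y - 1990) for y in years]   # underlying warming trend
temps[12] += 0.3    # hot start of the short window ("2002 El Nino")
temps[-1] -= 0.3    # cold end of the short window ("La Nina")

def trend(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

print(f"full-record trend:  {trend(years, temps):+.4f} C/yr")            # positive
print(f"short-window trend: {trend(years[12:], temps[12:]):+.4f} C/yr")  # negative
```

The full record recovers a positive slope, while the window that starts on the hot year and ends on the cold one comes out negative – a cooling “trend” manufactured purely by endpoint choice.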

In other words, the supposedly massive cooling trend you, Monckton (since one of his presentations is where I first saw this terrible graph), and D’Aleo are trumpeting here is statistical bullshit.

As for your wet chemistry/EG Beck stuff, you never answered my criticisms of it at Care2. Here they are, repeated again in hopes that you’ll either address them or realize that you’ve been beaten – again – and go away until next week’s Carbo:

From one of Keeling’s papers, regarding atmosphere samples being tested two different ways:

This Scandinavian program, started by Rossby in 1954, had been a major factor in triggering interest in measuring CO2 during the IGY. Nevertheless it was quietly abandoned after the meeting, when the reported range in concentrations, 150–450 ppm, was seen to reflect large errors. 3

3. At two stations in Finland, samples collected by station personnel had been sent to Scripps. These samples yielded nearly the same concentrations as those measured at Mauna Loa Observatory, proving that the errors in the Scandinavian program were mainly analytical rather than due to variable CO2 in the air being sampled.

Gee, samples using the wet chemistry technique showed a range of 3x, while the samples tested at Scripps were nearly the same as those measured in the air over Mauna Loa. The key line here, Judy, in case you’re trying desperately to ignore it, is “the errors in the Scandinavian program were mainly analytical rather than due to variable CO2 in the air being sampled.”

In addition, can you explain how CO2 rose and fell so fast as a result of temperature (the preferred “source” of the 180 year variability, according to the generator of this disproven hypothesis, EG Beck)? That would require the total amount of CO2 in the air rising and falling by 30-50% in a couple of years. What’s the mechanism behind that? It can’t be heat, because the 0.4 degrees of heating observed prior to 1942 (the last time it was supposedly over 400 ppm CO2) would be responsible for a 30-50% increase in total atmospheric carbon – but an additional 0.6 degrees of heating since the 1940s is responsible for only a 36% increase. So what’s keeping all that extra carbon from coming out of the ocean? Bummer that CO2 concentrations in the ocean are rising dramatically – I guess the CO2 can’t be going there, or coming from there either. So much for the thermal idea.
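For concreteness, here is what those swings look like as fractions of the atmospheric CO2 inventory, using the round ppm values Beck’s own abstract quotes (400 ppm in 1829, under 300 by 1900, over 400 by 1942, then down to about 350) – the abstract itself only gives “e.g.” figures, so treat these as approximate:

```python
# Percentage swings implied by the ppm values in Beck's abstract
# (round example values; the abstract gives "e.g." figures only).
swings = [("1829 -> 1900", 400, 300),
          ("1900 -> 1942", 300, 400),
          ("1942 -> postwar", 400, 350)]
for label, start, end in swings:
    pct = 100 * (end - start) / start
    print(f"{label}: {start} -> {end} ppm = {pct:+.1f}%")
```

Even taken at face value, the reconstruction requires swings of roughly a quarter to a third of the entire atmospheric CO2 inventory – and the drop after the 1942 “maximum” is claimed to happen in just a few years, which is exactly the missing-mechanism problem.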

Only one other plausible mechanism exists to drive CO2 up that fast – volcanoes – but if volcanoes could drive the CO2 up that fast, then why wasn’t there a similar pulse of CO2 after Pinatubo or El Chichon? That much additional CO2 would also radically change the isotope ratio between C12 and C13, something that WOULD be detected in tree rings (and would have error that is completely independent of a straight CO2 concentration measurement from tree rings). And yet the detected C12/C13 isotope ratio has changed as would be expected if the source was the burning of fossil fuels and the gradual accumulation of those emissions in the atmosphere.

And there’s no plausible mechanism to suck CO2 out of the atmosphere that fast – a 30-50% increase in oceanic CO2 would have turned the oceans into Perrier and pickled most living things in the oceans by now.

Oh, really. You dismiss the fact that the temperature has been dropping in spite of rising CO2 claiming 15 years is a must. Yet you manage to overlook the warming of the 1920-1940 period followed by cooling until 1977 even though CO2 hit 400ppm in 1942.

“Accurate chemical CO2-gas analysis of air since 1800 show a different trend compared to the literature of climate change actually published. From 1829, the concentration of carbon dioxide of air in the northern hemisphere decreased from a value of e.g. 400 ppm up to 1900 to less than 300 ppm then rising till 1942 to more than 400 ppm. After that maximum, it fell to e.g. 350 ppm and rose again till today (2006) to 380 ppm.” http://www.biomind.de/nogreenhouse/daten/AIGnewsNov06.pdf

No one rational would jump on the CO2 restriction bandwagon seeing how both currents are far more involved in temperature change than a minor trace gas and are most likely the main cause of those minor changes.

Warmists are so slippery. I’d bet on Beck before Mann & Co anytime, since that is the obvious reference for realclimate’s “How to make it appear as if the Medieval times were warmer than today, even if all scientific studies come to the opposite conclusion.”

Not “all” studies…only those Mann and his cronies were connected with. The Chinese have evidence it was just as warm there as in Europe during that period.

As for Realclimate… a joke perpetrated by a public relations agency trying desperately to make Mann’s work look legitimate.

Again, Judy, there are NO – zero, zilch, null, none – mechanisms by which CO2 can rise and fall as fast as Beck says. Volcanoes have been disproven. Thermal gain/loss from the ocean has been disproven. And those are the ONLY two mechanisms that even had a chance. Therefore, Beck. Is. Wrong.

Beck lied, Judy, and the folks at RC pointed it out, nothing more. Look at the image below:

Notice that he’s got a reasonable ~1500 year cycle from -400 to 1200. Except there’s a problem – he’s got two complete and beautiful cycles there, but with a break in the horizontal axis – and a change in the scale. Before the break, the tick marks on the horizontal axis are every 400 years – after the break, the tick marks are every 200 years. The axis makes Beck’s claim look reasonable, but it’s not. -400 + 1600 = 1200. No problem. But simple addition says that 1200 + 1500 = 2700. Which means that the modern “climate optimum” should happen in about 700-800 years. Oh, and look at that – that means today’s “climate optimum” is actually supposed to be at the MINIMUM of the cycle. Oops.

Are you denying basic arithmetic now, Judy?

Alternatively, maybe Beck’s right and we are actually in one of his minimums. If so, then we’re REALLY hosed, because heating is going to accelerate dramatically if that’s the case. So, if you’re unwilling to deny basic arithmetic, I guess your only choice now is to transition to being a hard-core AGW supporter.

I’m not worried about that 1500 year cycle of his, though. While 1500 year cycles have happened previously (yes, there’s a nugget of fact in those), they haven’t been observed in 10,000 years – since about the end of the last ice age.

In regards to sorghum as an investment, it’s probably worth looking into considering the rise in diagnoses of celiac disease. Sorghum is one of the grains that people with the disease can digest because it doesn’t contain any gluten.

The amazing part about this nonsense is the nonsense that gets written. The Earth and the oceans both respond to warming by giving off CO2. Warm water holds less gas, and warmer earth allows biological processes to proceed more rapidly.

The ice core records show that FIRST IT WARMS …AND THEN CO2 RISES.

Trying to pretend that this period is unique in Earth history is beyond belief, especially since it is now cooling.
Evidence for a solar signature in 20th-century temperature

We analyze temperature data from meteorological stations in the USA (six climatic regions, 153 stations), Europe (44 stations, considered as one climatic region) and Australia (preliminary, five stations). We select stations with long, homogeneous series of daily minimum temperatures (covering most of the 20th century, with few or no gaps). We find that station data are well correlated over distances in the order of a thousand kilometres. When an average is calculated for each climatic region, we find well characterized mean curves with strong variability in the 3–15-year period range and a superimposed decadal to centennial (or ‘secular’) trend consisting of a small number of linear segments separated by rather sharp changes in slope.

Our overall curve for the USA rises sharply from 1910 to 1940, then decreases until 1980 and rises sharply again since then. The minima around 1920 and 1980 have similar values, and so do the maxima around 1935 and 2000; the range between minima and maxima is 1.3 °C. The European mean curve is quite different, and can be described as a step-like function with zero slope and a ~1.8 °C jump occurring in less than two years around 1987. Also notable is a strong (cold) minimum in 1940. Both the USA and the European mean curves are rather different from the corresponding curves illustrated in the 2007 IPCC report. We then estimate the long-term behaviour of the higher frequencies (disturbances) of the temperature series by calculating the mean-squared interannual variations or the ‘lifetime’ (i.e. the mean duration of temperature disturbances) of the data series. We find that the resulting curves correlate remarkably well at the longer periods, within and between regions. The secular trend of all of these curves is similar (an S-shaped pattern), with a rise from 1900 to 1950, a decrease from 1950 to 1975, and a subsequent (small) increase. This trend is the same as that found for a number of solar indices, such as sunspot number or magnetic field components in any observatory. We conclude that significant solar forcing is present in temperature disturbances in the areas we analyzed and conjecture that this should be a global feature.

Many readers have commented about their experiences at Real Climate with posts being deleted and being run over roughshod by hostile comments there. I was sent this YouTube link by a WUWT web affiliate, and as I was watching it, it occurred to me that the phrase “tightly controlled” really describes the Real Climate methodology.

“I should add that I’ve experienced the same thing at RC, valid questions I have posed have been wholesale deleted personally by Gavin Schmidt. I’ve kept a record and screencaps of such things, I would suggest that you all do the same.

Deleting rude comments or comments that are badly OT or inflaming is one thing, but when you start deleting valid scientific questions posed by people in your circle of interest, it doesn’t take long for all of those individually affected to start comparing notes.

RC seems to have a small following of the same people that make up a core group, but when you examine the web statistics, it seems obvious that such a strategy is failing their primary mission of reaching out to people:”

May I suggest that this group reminds me of the Communist party cells in the 1950s which were found to be made up of 95% FBI agents.

You see, this is what people schooled in logic call an ad hominem attack. This particular logical fallacy is usually pulled out when someone is losing a debate because he or she can’t address the facts and arguments at hand and chooses to attempt to destroy the credibility of the person making the argument instead.

I am neither criticizing nor supporting RC, Judy, but the fact of the matter is that they caught Beck lying in his graph, as I illustrated above. Prove to me that the lovely picture posted above is bullshit and address the arguments I’ve made that demolish your so-called science. Until then you’re just digging yourself in deeper and deeper and deeper.

As to your prior post, trying to deny that this period is unique in history is foolish. Humanity has a massive amount of control over the environment, and the climate is no different. And you even believe that, Judy. I’m still waiting for an answer to how you can logically believe that the US military can control the climate and weather of enemy nations while humanity as a whole lacks the power to do the same to the entire globe.

Could we stick to this argument without a diversion into chemtrails…which are amply documented, btw.
Speaking of logic, I bow to the greater authority of Dr. Evans below.

There is No Evidence

By Dr. David Evans

Let’s break down the case for human-caused global warming logically:

1) There is plenty of evidence that global warming has been occurring recently.
2) There is ample evidence that carbon emissions causes warming and that the level of atmospheric carbon dioxide is increasing.
3) But there is no evidence that carbon dioxide emissions are the main cause of the recent global warming.

The alarmists focus you entirely on the first two points, to distract you from the third. The public is increasingly aware of this misdirection. Yes, every emitted molecule of carbon dioxide (CO2) causes some warming – but the crucial question is how much warming do the CO2 emissions cause? If atmospheric CO2 levels doubled, would temperatures rise by 0.1, 1.0, or by 10.0C?

We go through the usual “evidence” offered by alarmists, and show that in each case either it:

- Is not evidence about what causes global warming. Proof that global warming occurred is not proof that CO2 was mainly responsible.
- Is not empirical evidence; that is, it is not independent of theory. In particular models are theory, not evidence.
- Says nothing about how much the temperature would rise for a given rise in CO2 levels.

Despite spending $50bn over the last 20 years looking for evidence of point (3) above, the alarmists have found none. In two instances they expected to find it, but in both cases they found only evidence of the opposite – and they have kept awfully quiet about those cases. If they just had some evidence of (3) they could just tell us what it was and end the debate.

We note that there used to be some supporting evidence, but better data later reversed that evidence. Instead there are now at least three independent pieces of evidence that the temperature rises predicted by the IPCC due to carbon dioxide emissions are exaggerated by a factor of between 2 and 10, primarily due to the assumption of overly positive water vapor feedback in the climate models. Finally, we discuss some examples of what would constitute evidence. The evidence must of course be empirical, meaning that it is independent of theory.

Typical Evidence
Typical Alarmist Offerings of “Evidence”: Polar Bears, Glaciers, Arctic Melt, Antarctic Ice Shelves, Storms, Droughts, Fires, Malaria, Snow Melt on Mt Kilimanjaro, Rising Sea Levels, Ocean Warming, Urban Heat Island Effect. Although each of these issues may say something about whether or not global warming is or was occurring, none of them say anything about the causes of global warming. It would make no difference to these issues if the recent global warming was caused by CO2 or by aliens heating the planet with ray guns.”
There is more at http://www.icecap.us/

At Care2, you’ve argued that a supposed ELF radiowave weapon could manipulate climate around the world on a local and regional scale by linking to the idea via the Center for Research on Globalization. That means that you believe that human beings can manipulate climate. Yet you continue to claim that human beings can’t manipulate climate. That is a logically untenable position – either people can, or can’t, manipulate climate. You can’t have one without the other, Judy.

And there’s a reason Slammy and I think you’re a paid sock puppet for someone like Icecap (essentially what you accused me of being a month or so ago) – you post so much at Care2 alone that it would be a full-time job. And it’s all links from other places like GlobalResearch, Icecap, Wattsupwiththat, etc. No real information, just cutting and pasting. Spam, in other words.

I could have you banned here, Judy. But I’m not going to because you’re so bad at talking the anti-global warming line that you illustrate just how faulty the denier logic and science actually is. And you give your likely employer, and certainly your fellow deniers, a really bad name.

Far be it from me to stop you from shooting yourself in the foot. Repeatedly.

Interesting research, that CERN discussion. It will be interesting to see what happens with it. Even if it turns up a physical mechanism, though, that’s not the smoking gun that Watts or you seem to think it is – the 30-year delay makes proving a valid mechanism a challenge. But if CERN establishes a physical mechanism, then I feel certain that the models will include it post-haste.

I’ve been reading it more, and there are some graphs that make me go “Hmmmm….” and at least one large and insufficiently nuanced statement, namely “aerosols cause brighter clouds.” This is only partly true – aerosols CAN cause longer-lived, brighter clouds, but recent research (as in just a few months ago, and thus it’s entirely possible that it postdates this presentation) shows that some aerosols don’t, in fact, cause brighter clouds. And other aerosols can cause more clouds in some cases but fewer clouds in other cases.

Even so, though, it’ll be a very interesting experiment. Good for CERN for giving it a shot and seeing what happens. Unfortunately, we won’t have complete results until 2013 (preliminary results late this year and next year, however).

Whether you believe the research on global warming might not matter. We need to replace fossil fuels, especially coal. Coal plants, besides emitting carbon dioxide, also emit other toxic chemicals that kill at least 30,000 people each year.

They also emit the radioactive material contained within the coal they burn. You can actually get more energy from the trace uranium and thorium in coal by using them as fuel in a nuclear reactor than you can from burning the coal itself in coal plants.

If we had built the nuclear plants that we were going to build 30 years ago, then today we wouldn’t be emitting any carbon dioxide from fossil fuels. That’s ZERO CO2!! Guess what stopped us from building them? Here’s a hint: AMORY LOVINS and President Carter. Amory Lovins convinced Carter that we should use coal instead. Now who do you think Amory Lovins works for?

We could use LiFTR nuclear power plants to provide all our energy for over a million years for the whole planet – and that means as much clean, non-polluting, inexpensive green energy as everyone in the world wants. You can’t do that with solar energy or wind energy or the ridiculous idea of “carbon sequestration”. Why do so many scientists take seriously the idea of pumping monumentally huge amounts of CO2 into the ground or oceans?! Seriously! That’s science fiction.

Here are some podcast shows called the Atomic Show that have some interviews/talks with some of the people behind LiFTR reactors and other topics:

And this one is about the Steven Chu confirmation hearings. The Obama administration is very heavily influenced by that idiot Amory Lovins and his backers, i.e. the fossil fuel industry. Of course it would be, since the CFR absolutely loves Amory Lovins, and since the CFR holds every position in the executive branch and in Congress, they are in a position to set policy in the administration in favor of their fossil fuel members. The Atomic Show #122 – Steven Chu – Confirmation Hearings for Secretary of Energy

There are over 100 episodes to educate yourself about the realities of nuclear power and dispel the myths.