30 August 2005

I tend to view the current oil price increases as a bit of a red herring that distracts us from the state of natural gas in North America. Methane is the cleanest-burning carbon fuel we use: it provides residential and industrial heat, supplies the chemical industry with hydrogen, and is burned in turbines to produce electricity. Due to the underdevelopment of American gas fields, Canadian exports to the USA have exploded, as has the price. The recent windfall to Alberta that I talked about the other day is about two-thirds due to gas and only one-third due to oil. At current rates, Alberta will exhaust its conventional gas fields by 2012 -- literally around the corner. Coal-bed methane is expected to provide an out; however, no one really knows how much methane can be economically extracted. Gasified coal is another potential source, and another unknown.

Critical to refining bitumen from the tar sands into fungible fuel is the use of process steam. Currently, this steam is produced by burning natural gas; Syncrude reports that a barrel of oil from bitumen requires approximately 1.35 MBtu of natural gas. As the price of natural gas accelerates, so does the price of tar sand oil. This feedback cycle has the oil and gas industry publicly worried about the rising price of a million Btu.

The obvious solution, proposed by many people, is the construction of CANDU nuclear steam plants up in Fort McMurray. The refineries are conveniently sitting largely in one row. Combined heat and power (CHP) production from nuclear plants has the potential to be very efficient and profitable. Since Alberta lacks cheap hydroelectric power, nuclear may provide the cheapest means of baseload generation, and the steam left over from the turbines can be used for bitumen processing -- essentially free energy to be sold to the oil patch companies.

Oilsand natural gas consumption by 2015 -- by which time conventional reserves should be gone -- is estimated by the National Energy Board at between 1.2 and 1.8 billion cubic feet per year. This works out to about 40 - 60 MW of continuous heat production from a nuclear plant. A nuclear plant's heat-to-electricity efficiency is generally only around 30 %, meaning that the other 70 % is potentially available as heat. It is reasonable to expect that only a small plant would be necessary to provide the needed steam output.
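As a sanity check on that conversion, here is a minimal sketch, assuming roughly 1.055 MJ (about 1000 Btu) per cubic foot of gas:

```python
# Converting annual gas consumption into equivalent continuous heat
# output, to check the 40 - 60 MW figure above.
# Assumption: 1 cubic foot of natural gas ~ 1.055 MJ (about 1000 Btu).

MJ_PER_CUBIC_FOOT = 1.055
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def continuous_heat_mw(billion_cf_per_year):
    joules = billion_cf_per_year * 1e9 * MJ_PER_CUBIC_FOOT * 1e6
    return joules / SECONDS_PER_YEAR / 1e6  # watts -> MW

low, high = continuous_heat_mw(1.2), continuous_heat_mw(1.8)
print(f"{low:.0f} - {high:.0f} MW")  # roughly 40 - 60 MW
```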

There are a number of issues to be expected with such a project.

NIMBY syndrome. With credit to The Onion, nuclear fission is in dire need of a rebranding. I think even Fort McMurray is not immune to this issue. The even bigger problem is where to store the waste. The geology of the Canadian Shield is ideal for the storage of nuclear waste, but the choice of location remains extremely contentious.

Steam temperature. Nuclear plants usually operate at a core temperature of around 800 °C. From what I can gather, the steam temperature requirements for bitumen processing are all relatively low, so this should not be a big issue for a CHP plant.

Water. Right now the steam process consumes large amounts of water from the Athabasca River. Cooling the nuclear plant will consume even more water, although a nuclear steam plant will, by definition, not reject as much waste heat. The solution lies in further closing the bitumen refinement loop so that the water is recycled. This research is ongoing.

Getting nuclear engineers to live in Fort McMurray. It's hard enough to find roughnecks to work up there for $70,000 per year; more discriminating and better-educated engineers will command very high salaries. Better automation is one possible solution -- the nuclear industry is already deeply involved in robotics. The other half of the fix is to improve the quality of life in the oilpatch, something the provincial government should be doing anyway to ensure there's a sufficient workforce for the industry. A good portion of oil revenues should be routed back into town improvements (assuming they can find construction workers).

There are probably others as well. Right now I would not consider myself well educated on this topic, but I will continue to explore it in the future.

29 August 2005

Thanks to sky-high prices for oil and natural gas, the province of Alberta is looking at a surplus of $2.8 to $7 billion this year. The province with the next-largest surplus is British Columbia, at $220 million. The federal government itself is running an even bigger surplus, but that is another story. The Klein government is concerned that the other provinces are eyeing Alberta's surplus and looking for a cut. Since Alberta no longer has any public debt, there's not a lot to do with the money aside from spending it on public entitlements or slashing income taxes to 0 %.

What Alberta should do is take this opportunity to diversify its energy holdings and its economy. While gas and tar sands development may be roaring now, there is no guarantee that this will hold indefinitely. Most of Alberta's gas reserves were flared off in the first half of the 20th century; as a result, conventional gas will be nearly gone in four years. Given the effort around the globe to get away from oil as a carbon fuel, are the tar sands enough? The question is especially pertinent given that natural gas is burned to raise steam for refining bitumen. It's also worth pointing out that electricity prices in Alberta are some of the highest in the country, due to a lack of cheap hydro. Right now British Columbia gouges Alberta by selling it electricity during peak daylight hours, then buying power back from Alberta's coal plants overnight in order to preserve the water in its reservoirs.

Alberta should take this windfall as an opportunity to invest in the future, rather than act as Saudi Arabia Mk. 2. Another historical analogy would be how the Spanish Empire squandered the bullion it captured from the Americas: it burned brightly for a while, only to fall back and be eclipsed by the English and Dutch. Eventually Alberta is going to want other, more robust sectors in its economy, and access to the wind, hydro, and ocean power of its neighbours -- something those neighbours certainly won't share out of charity.

I think that Alberta should push for something like a national energy board, largely controlled by the provinces but with a seat for the federal government as well. A proportion of all energy revenues would be managed by the board. For example, not only would Alberta's oil and gas revenues go into the pot, but so would uranium exports from Saskatchewan and the NWT, hydroelectricity from BC, Manitoba, and Quebec, and the offshore oil and gas of the Maritimes. This board would be the type of federalist, decentralized power structure that Albertans crave, and would no doubt be headquartered in Calgary.

Create research entitlements. Currently, any research into energy systems is funded on the basis of its scientific merit by NSERC or CFI. This is not a particularly good metric for an industry where manufacturing and management techniques can have a bigger impact than R&D. Given the critical importance of energy to the Canadian economy, it's long past time to create an agency specifically to fund energy research projects -- analogous to the role the Canadian Institutes of Health Research plays in medicine.

Fund energy infrastructure projects. While production capacity can pay for itself, there is often little money to be made in delivery. For example, our dilapidated electricity grid is in dire need of a cash infusion. Construction of new direct-current transmission lines could improve the nation's ability to shift power from one end of the country to the other. Alberta, in particular, could benefit from British Columbia's and Manitoba's hydroelectric power. The most economic solution to the gas shortfall may be an LNG terminal built in Prince Rupert and a pipeline to Edmonton.

Fund public services infrastructure. Highways, hospitals, and the like all carry a high capital cost, but once built they can provide effective service for a lifetime.

The real question is whether the current Alberta provincial government can overcome the inertia of its current 'No' policy to take a leadership role in the country. The current cabinet is, frankly, stolid. It would probably take the election of Kevin Taft's Liberals, or a new, more dynamic and ambitious Conservative leader following Klein's retirement, to push through anything like what I've detailed.

27 August 2005

After taking a look at the description and artist's impression, I was certainly curious. There are several obvious problems. First, the artist's impression shows the mirrors shading each other, reducing the available irradiance. I also had serious concerns that the system, being horizontally mounted, would perform poorly away from solar noon in summer.

I decided to model the system at solar noon on June 21st and December 21st in Pasadena, CA -- latitude 34° North. This corresponds to the sun sitting 10.5° and 57.5° from the zenith on the summer and winter solstice, respectively.

I made the following assumptions about the system:

It consists of twenty-five flat aluminium mirrors, 254 x 254 mm each, tightly packed. This corresponds to a total horizontal area of 1.613 m^2.

The collector is slightly larger than an individual mirror, at 280 x 280 mm. This gives an area ratio between the mirrors and the collector of 20.56.

Incident radiation was 1000 W/m^2 for both dates.

Aluminium has a broadband reflectance of about 0.9 over the solar spectrum. No anti-reflection coating is used, due to cost, the broadband nature of the light, and the high variation in incident angle.

With an area of 1.613 m^2 and an irradiance of 1000 W/m^2, the mirrors, if laid totally flat, would receive 1586 W on the summer solstice and 866 W on the winter solstice. After simulation in Zemax, I found the actual power on the collector to be 1224 W in summer and 804 W in winter. Given their claim of 200 W output -- assuming 1000 W/m^2 irradiance and discounting atmospheric scattering -- on the best hour of the best day of the year the PV unit would need an efficiency of 16.3 % to achieve this. The actual insolation is more like 850 W/m^2 for the Pasadena region at that time.
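The geometry here is simple enough to check by hand; a minimal sketch, assuming the sun angles and areas listed above:

```python
import math

# Reproducing the back-of-envelope numbers: 25 mirrors of
# 254 x 254 mm (1.613 m^2 total), 1000 W/m^2 irradiance, and the sun
# 10.5 deg / 57.5 deg from the zenith on the summer / winter solstice.

AREA_M2 = 25 * 0.254**2          # 1.613 m^2
IRRADIANCE = 1000.0              # W/m^2

def power_on_horizontal(zenith_deg):
    # Flux through a horizontal aperture falls off as cos(zenith).
    return AREA_M2 * IRRADIANCE * math.cos(math.radians(zenith_deg))

summer = power_on_horizontal(10.5)   # ~1586 W
winter = power_on_horizontal(57.5)   # ~866 W

# PV efficiency required for the claimed 200 W output, given the
# 1224 W found on the collector in the Zemax simulation:
efficiency = 200.0 / 1224.0          # ~0.163, i.e. about 16 %
print(f"{summer:.0f} W, {winter:.0f} W, {100 * efficiency:.1f} %")
```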

The summer configuration has a ratio of 0.77 between the power on the mirrors and that on the collector, giving an overall concentration ratio of 15.86 x. This is well below the 25 x quoted by EnergyInnovations. It is possible to increase the concentration ratio by shrinking the collector to 254 x 254 mm, but at the expense of about 10 % of the overall power. The following image shows the irradiance map on the collector, with the red box measuring 254 x 254 mm (the size of one mirror). At solar noon in the summer, the sun is nearly directly overhead and the unit does a good job of evenly illuminating the collector.

In the winter, the system does better: the ratio is 0.93. This is an interesting result, but it can be explained by examining the mirror configuration. Here is a rendering of the system in its winter configuration:

As you can see, while there is significant shading of the rear mirrors, the overall shape of the system is no longer particularly flat. Instead it forms a curve that catches more of the sun's rays, improving the overall performance. This result was somewhat surprising.

However, this is not necessarily a positive result. For one thing, the horizontal mounting greatly decreases the irradiance available in winter. For comparison, a flat-plate PV system of the same area set at 35° to the horizontal would receive 1470 W on the summer solstice and 1490 W on the winter solstice at the given 1000 W/m^2 intensity. At high angles of incidence, the horizontal layout presents only a narrow cross-section to the sun, greatly reducing winter performance. Worse, due to the high angles, the presented cross-section of a mirror looks less like a rectangle and more like a trapezoid. The narrow cross-section results in uneven irradiance on the collector:
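The tilted-plate comparison is the same cosine geometry; a sketch, assuming an identical 1.613 m^2 aperture tilted 35° toward the south (so at solar noon the incidence angle is just the difference between the sun's zenith angle and the tilt):

```python
import math

# A fixed flat plate tilted 35 deg sees nearly the same power on
# both solstices, unlike the horizontal mirror array.

AREA_M2 = 25 * 0.254**2          # same 1.613 m^2 aperture, for comparison
IRRADIANCE = 1000.0              # W/m^2

def power_on_tilted(zenith_deg, tilt_deg):
    # At solar noon the panel normal and the sun share an azimuth,
    # so the incidence angle is simply their difference in elevation.
    incidence = abs(zenith_deg - tilt_deg)
    return AREA_M2 * IRRADIANCE * math.cos(math.radians(incidence))

print(f"{power_on_tilted(10.5, 35.0):.0f} W")   # summer, ~1470 W
print(f"{power_on_tilted(57.5, 35.0):.0f} W")   # winter, ~1490 W
```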

As you can see, a large part of the collector isn't receiving light. This is a fundamental issue with the use of flat mirrors. Partial shading of solar cells tends to decrease the performance of the unit as a whole by creating a leakage current in the shaded region. When electron-hole pairs aren't being actively created, the depletion layer between the p- and n-doped semiconductors is not as strong. The following journal article goes into some detail on the performance issues:

Note that the unit will have the same problem in the morning and early evening. This uneven illumination of the PV unit will probably shorten its lifespan, although they have added a fan to try to reduce the temperature gradients.

Of course, this totally ignores the effect of diffuse light, which, as I noted before, cannot be effectively concentrated by mirrors. Clouds, smog, haze: all will have a ruinous effect on the performance of this system. Note that EnergyInnovations' Frequently Asked Questions doesn't contain a single mention of the word 'cloud'.

After doing this analysis, I can't help but be cynical. It seems to me that this device is designed to maximize its peak power rating (Wp), not its annual energy production (kWh/year). The system will suffer performance losses from cloud cover, the horizontal layout, and current leakage due to uneven photovoltaic irradiance. Why would someone design a system optimized for peak output? Aside from the obvious marketing value, there is another reason if you are familiar with California's solar subsidies. California's PV subsidy program has a loophole big enough to drive the governator's Hummer through: the subsidy is paid out at $2.80 per peak watt. Actual annual power production is not considered, because net metering is not required. This has led to a number of absurd installations on homes, like panels mounted on north-facing roofs or in the shadow of a large tree. This is in stark contrast to German subsidies, which pay for the real power produced. The big question is: will the new Million Solar Roofs program fix this loophole? So far, I haven't been able to find out.

24 August 2005

Typically, electrical power is considerably cheaper in the dead of night than during peak demand in the day. The reason is very simple: demand is much higher during the day, and spare capacity must be built to accommodate it. We can see this clearly from a graph of California's energy demand:

In deregulated electricity markets, there is the possibility of using electricity storage to buy low and sell high -- known as arbitrage. There are a number of schemes designed for utility-scale power storage. The oldest is probably pumped hydro, where a volume of water is pumped back and forth between two reservoirs. Another system stores energy as compressed air in an underground aquifer or salt cavern. These systems suffer from the disadvantage that they require particular geographical features in order to function.

The other option is to pursue technological solutions. I have already detailed the possibility of using large numbers of plug-in hybrids for voltage regulation. However, static installations such as flywheels or flow batteries are also possible. Since these systems are not simultaneously fulfilling the energy demand for transportation, the round-trip efficiency of the storage method becomes paramount. Pumped-hydro systems are typically around 80 %.

Of all the electricity storage systems, I feel that flow batteries are the most promising. There are a number of different flow battery chemistries: zinc-bromine, vanadium redox, and sodium polysulfide-bromide. The vanadium system appears to be the most popular commercial one at the moment. It was invented in Australia:

The power density of flow batteries is similar to that of lead-acid batteries. The true advantage of the flow battery is that it has a high round-trip efficiency and decouples energy storage capacity from maximum power. Flow batteries also have very long lifetimes -- only the pumps and the membranes wear out, and the membrane is much simpler than that used in, say, a PEM fuel cell.

The biggest player in the North American flow battery market is VRB Power of Vancouver:

We can see from their numbers that the round-trip efficiency is approximately 75 %. It's worth noting that if the battery could be connected directly to a DC rather than an AC distribution grid, this number would be higher. The capital cost of the system is very high -- they estimate approximately $325,000 per MWh of storage. However, maintenance and operating costs are claimed to be extremely low, at only $1/MWh.

The real question is, are these systems a good investment? Let's examine the numbers.

California's energy price fluctuates greatly from day to day. Using today's prices for Palo Verde, peak power sold for an average of $90/MWh and off-peak for $60/MWh. Given the 0.75 round-trip efficiency, a vanadium redox battery buying off-peak and selling at peak could make a profit on the order of $10/MWh. This in itself is not much. However, the VRB can also provide excellent ancillary services: voltage regulation and spinning reserve. The value of ancillary services can vary wildly; I will very conservatively estimate them at $15/MWh. Thus, $10 + $15 - $1 = $24 can be generated each day by a VRB system per MWh of storage.

Given the current capital cost, that works out to a payback period of 37 years. Clearly, the system is not a good investment for the California grid at this time. However, the technology is still at an early stage of development, and there are no manufacturing economies of scale yet. With an order-of-magnitude reduction in the cost of a flow battery, they would suddenly become a very attractive investment, with an annual rate of return of about 25 %!
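The payback arithmetic can be laid out explicitly; a sketch using the figures above (the ancillary-service value is my conservative guess, as noted):

```python
# Back-of-envelope economics for 1 MWh of vanadium flow battery
# storage, using the Palo Verde figures quoted in the post.

buy, sell = 60.0, 90.0      # $/MWh, off-peak and peak prices
round_trip = 0.75           # VRB round-trip efficiency
ancillary = 15.0            # $/day, conservative ancillary revenue
o_and_m = 1.0               # $/day, operating and maintenance
capital = 325_000.0         # $ per MWh of storage capacity

# Delivering 1 MWh at peak requires buying 1/0.75 MWh off-peak:
arbitrage = sell - buy / round_trip          # $10/day
daily = arbitrage + ancillary - o_and_m      # $24/day
payback_years = capital / (daily * 365)
print(f"payback: {payback_years:.0f} years")             # ~37 years

# With an order-of-magnitude cost reduction:
annual_return = 100 * daily * 365 / (capital / 10)
print(f"annual return: {annual_return:.0f} %")           # ~27 %
```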

Whether or not electricity arbitrage systems become cost-effective in the future probably depends on where you see alternative energy going. If the grid becomes wind-heavy, the demand for voltage regulation will greatly increase and the profitability of these systems will improve. On the other hand, the introduction of widespread solar might depress peak power prices, making prices more uniform and making the round-trip efficiency of paramount importance. Of course, if solar becomes dirt cheap, power will be cheapest during the day and one will again be able to conduct arbitrage, buying at noon and selling at dusk. It does appear clear that the introduction of 'renewable', intermittent sources of electricity will increase the need for a cost-effective means of storage that can also provide ancillary services.

23 August 2005

My (attempted) post on entropy analysis demonstrated the need for clear, concise communication. For a scientist or engineer, communication skills are paramount. They rank higher than your math or computer skills, and higher even than your creativity. You can be the most brilliant scientist in the world, but if you can't explain the consequences of your results, your accomplishments will come to naught. The physical sciences have become far more vast and complicated than they were fifty years ago. As a result, students, and the public at large, have to learn much more, much more efficiently.

Science, like most professional services, has developed its own lexicon. My definition of work and energy is very different from that of the public at large. I noticed, for example, that I used the term 'phonon' when talking about Drude-model metals. How many of my readers, even the most technically inclined, know what a phonon is? @MonkeySign probably knows, since he is into photovoltaics, but everyone else probably had to head off to Wikipedia.

Mathematics, on the other hand, is a totally different language from English, and science incorporates it. This can make scientific journals even less comprehensible to the average person than military acronyms or legal jargon. To talk about science with normal people, we have to use some numbers, but we also have to be very careful about when we use them.

To continue, I'll present what I consider a perfect example of how to talk to, and inspire, Ph.D.s and normal people at the same time: a famous speech by the famous physicist Richard Feynman. Feynman is known, among other things, for reforming textbook selection for California public schools. Please take the time to read the link, and keep in mind that this is the transcript of a speech, not a massaged literary piece:

It's not important that many of the details of Feynman's talk are wrong -- we certainly aren't moving towards nanofiche storage techniques. What matters is the concept, and that he was able to articulate it so clearly. What's more impressive is that he gave the speech in 1959.

There are a number of success stories too. The focused ion beam (FIB) machine is used for nano-scale milling, and he accurately described it. He also developed the theory of helium superfluidity, and by extension pointed to a solution to the lubrication problem.

Nanotechnology didn't explode in the way he foresaw -- the supporting cast wasn't ready yet. However, practical nanotechnology development is beginning as computers and instrumentation improve.

The field of 'renewable' energy has never had a voice like Feynman's to light the path. If it did, we wouldn't be using such a clumsy term to describe directly and indirectly derived solar energy. Renewable solutions have slowly percolated out of the ground at different points to form a gelatinous, heterogeneous concept. A diffusion process like this is most likely to find the true optimum, rather than settle in some local minimum. Science does eat its weakest children, so the bad concepts will eventually be filtered out. Unfortunately, diffusion is slow, and we need someone to stir the pot every once in a while.

22 August 2005

This is the standard spectrum, at 1000 W/m^2 irradiance, used in the testing of solar energy devices -- namely photovoltaic panels. It would also be useful for modeling the spectrally selective materials used in solar thermal applications. They provide the data in a number of formats for modeling purposes. I'm probably the only person interested in this stuff, but there it is nonetheless. An entire report on the atmospheric physics behind the model is available here:

ETR appears to be the extraterrestrial radiation (totalling 1356 W/m^2); 'tilt' is a surface at 48.81° to the horizon (totalling 1000 W/m^2). The tilt numbers are what you would normally apply to a PV cell to determine its peak power output. The 'direct and circumsolar' irradiance appears to be the irradiance on a directly illuminated horizontal surface, plus the diffuse light scattered off a 'clear' atmosphere (total 887.65 W/m^2). I'm not really sure, however, because the two data sets don't seem to be consistent.

Here's a plot of the tilt data, showing the Si band-gap edge at 1.12 eV = 1107 nm. Silicon can't absorb the light to the right of the green line -- those photons don't have enough energy to push an electron from the valence band to the conduction band. That's only 148.7 W/m^2 lost, however. Silicon can absorb the light to the left, but only in 1.1 eV chunks. (High-energy photons, with E > 2.2 eV, can actually generate more than one exciton.)

If we look at the actual number of photons per wavelength, and assume every photon with a wavelength below 1107 nm creates one (and only one) 1.1 eV exciton, then the maximum power a Si photovoltaic cell can absorb is 451 W/m^2, or 45 % efficiency. Some people may recognize this as the classic Shockley result, although I derived it numerically in about 15 minutes. With current commercial Si cells running around 12 - 14 % efficiency, you can see that there is a long way to improve even this most basic design. The actual theoretical (entropy) limit for a solar device is about 95 %. No one is going to get there, but it does show the great potential in solar power, even if it is overly expensive at the moment.
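The numerical derivation can be sketched as follows. As a stand-in for the tabulated spectrum, this uses a 5778 K blackbody (an assumption on my part, to keep the sketch self-contained), so it lands near, but not exactly on, the 45 % figure:

```python
import math

# "One exciton per photon" limit for silicon: every photon above the
# 1.12 eV gap contributes exactly 1.12 eV; everything below is lost.
# A 5778 K blackbody stands in for the ASTM tabulated spectrum.

K_T_EV = 8.617e-5 * 5778       # photospheric thermal energy, eV
E_GAP = 1.12                   # silicon band gap, eV
x_gap = E_GAP / K_T_EV         # dimensionless photon energy at the gap

def integrate(f, a, b, n=200_000):
    # Simple midpoint rule; plenty accurate for these smooth integrands.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

planck_photons = lambda x: x**2 / math.expm1(x)   # photon flux density
planck_power   = lambda x: x**3 / math.expm1(x)   # power density

usable = x_gap * integrate(planck_photons, x_gap, 40.0)
total = integrate(planck_power, 0.0, 40.0)
print(f"ultimate efficiency ~ {100 * usable / total:.0f} %")
```

The blackbody approximation gives a figure in the mid-forties; running the same photon-counting integral over the real tilt data is what yields the 451 W/m^2 result in the text.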

Royal Danish Marines landing on Hans Island in the far north came under attack by polar bears with frickin' laser beams attached to their heads. The marines returned fire and killed all the attacking polar bears. Two marines were lightly wounded when a baby seal suicide bomber detonated near them. The animals were apparently trained to picket the contested territory by Canadian Rangers who found the islet just "too damn cold."

One of the most amusing territorial disputes of all time is underway over a tiny speck of rock between Ellesmere Island and Greenland. The border here is somewhat fuzzy, and Hans Island lies in the undefined zone.

Needless to say, Denmark and Canada are unlikely to have a physical confrontation over Hans Island. The underlying reason for the bickering is the potential for major fossil-fuel and mineral deposits in the region, which may become economical to exploit with the onset of global warming.

While global warming is estimated to cause an average increase of only 5 °C over the next century, the poles are expected to warm faster than the rest of the planet. Snow and ice have a high albedo (reflectivity); as they melt, the underlying ground absorbs more sunlight, warms up, melts more snow, and so on. The result is a positive feedback cycle. This effect is already clearly visible in satellite imagery of the Arctic.

In addition to unlocking resources, the melting of pack ice could result in the opening of the Northwest Passage, a sea route between the Atlantic and Pacific Oceans. It passes north of Baffin Island, then weaves down to pass south of Victoria Island and out past Alaska. The route from Asia to Europe through the passage is a whopping 4000 km shorter than travelling through the Panama Canal. Normally it is choked with pack ice, but that could change by 2010.

Canada's Arctic archipelago represents a fairly vast landmass that has never been properly prospected. However, not all nations explicitly recognize Canada's or Denmark's ownership of their northern territories. The USA and Russia periodically make noises regarding ownership, and more frequently claim that the Northwest Passage is an international waterway.

Recently, Denmark has been showing signs of backing down. This is too bad, since the best result for both countries would be a vigorous and noisy defense of their territory. While the rest of the world may view the struggle as akin to midget wrestling, it would firmly demonstrate that both nations are yappy dogs that will not hesitate to protect their northern possessions. Ottawa should take a lesson from the games the USA is playing with NAFTA at the moment and look forward to the future.

21 August 2005

I recently read Peter Odell's Why Carbon Fuels Will Dominate the 21st Century's Global Energy Economy (2004). Odell is obviously the polar opposite of the peak-oil/global-warming doomsayers. He basically states that, as a proportion of our energy consumption, coal was the primary fuel of the 19th century, oil of the 20th, and natural gas will be the carbon fuel of choice for the 21st. This is the first real book I've read on carbon-fuel issues, so I am going to give it a very heavy-handed review. Later I'll probably give the same treatment to Campbell or one of the other peak oilers.

Odell points out that as nations move into the post-industrial age, their economic energy efficiency improves. This is because industrial activities, such as the production of steel, are inherently energy-intensive, while service industries are much more energy-efficient. Technology also plays a huge role in energy efficiency. The USA, for example, has seen GDP growth outstrip growth in energy demand for a while now.

Relationship between Energy Use and Economic Development over Time

Odell proposes that nations progress along the above curve as they develop, and that most developed nations are already on the plateau; presumably China and India are on the steep part of the curve as they industrialize. This isn't the graph from his book -- I created it in MATLAB using the function x/sqrt(1+x^2) -- but it closely mimics the shape of his curve and his concept of an ultimate energy-consumption limit. The curve converges to 1.0 as time goes to infinity.
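A saturating curve of this shape can be generated as follows (assuming the intended function is x/sqrt(1+x^2), which rises from zero and converges to 1.0):

```python
import math

# A stand-in for Odell's energy-intensity curve: rises steeply during
# industrialization, then flattens toward an ultimate consumption limit.

def energy_intensity(x):
    return x / math.sqrt(1 + x**2)

for t in (0.5, 1, 2, 5, 10, 50):
    print(f"t = {t:>4}: {energy_intensity(t):.3f}")
```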

What this suggests is that, due to the move away from industry, improvements in energy efficiency, and the slowdown in population growth, energy consumption will eventually stabilize. Oddly, while Odell presents this concept, he does not attempt to fit any nation to the curve. This is unfortunate, as it would be very interesting to see where populous developing countries like China and India sit on the scale.

Odell also focuses a great deal on the development of unconventional oil (e.g. tar sands) and unconventional gas (e.g. coal-bed methane), and how they extend the lifetime of those resources. He claims that conventional oil will peak by 2030, but that due to the growth of unconventional sources, overall oil production will not peak until 2060. He thinks that gas production will pass oil in 2040. The peak for natural gas will not occur until 2090, but by that time the world will have consumed 90 % of its gas reserves. While he clearly believes in a long, slow plateau for oil, he still shows gas going into a nosedive after its peak.

Odell briefly presents the theory of abiogenic hydrocarbon origins developed in the Soviet Union. Essentially, the theory states that hydrocarbons are produced by chemical processes deep in the Earth's mantle and are then forced up towards the surface. Apparently some reservoirs in the former Soviet Union are found in crystalline basement rock, where heat has driven a phase change from amorphous to crystalline. Odell takes this as evidence that some existing oil and gas reservoirs may be fed by active conduits from production zones deep in the mantle -- hence, they may be renewable. Furthermore, there may be additional reserves much deeper down than the fossil theory of carbon fuels would suggest.

This raises some fairly obvious questions for a layman such as myself. Why are oil and gas found primarily in sedimentary rock, which clearly hasn't worked its way deep enough? Why doesn't oil or gas naturally find its way through rock formations to the surface -- why does it get stuck? If oil is renewable on the scale of a human lifespan, how come the world isn't swimming in it? Odell points out that some 4000 papers on the theory have been published in the former Soviet Union. Unfortunately, that's very few papers over a span of 55 years, and if most of them are in Russian, they are effectively inaccessible to the bulk of the scientific community and carry little weight.

I have a number of criticisms to make of the book:

The book is short and succinct -- only about 128 pages in a soft-cover novel format. It presents plenty of figures, tables, and graphs, but it is devoid of descriptive statistics. I was surprised to see that he made no attempt to quantify the error in oil reserve estimates and the like. Similarly, he does not conduct any sort of sensitivity analysis on the important parameters for supply and demand; for that matter, he doesn't identify which parameters are important.

His projections for growth in coal and oil consumption seem simplistic. He sets them at 1.5 % and 2.0 % (sometimes 1.5 %) per year respectively, without justification. Similarly, he sees conventional gas production growing at 3.0 % per year over the next 25 years. He does not attempt to actually use his curve for developed-nation energy consumption to drive his analysis of, say, the 25 largest economies in the world. This raises the question: why did he bother to present it?
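Flat growth rates compound quickly, which is worth making explicit; a sketch using only the growth rates quoted above (the multipliers are my arithmetic, not Odell's figures):

```python
# What Odell's flat annual growth rates compound to over 25 years.
# Only the growth rates come from the book; the multipliers are
# simple compound-interest arithmetic.

def compound(rate, years):
    return (1 + rate) ** years

print(f"coal at 1.5 %/yr over 25 years: x{compound(0.015, 25):.2f}")
print(f"oil at 2.0 %/yr over 25 years:  x{compound(0.020, 25):.2f}")
print(f"gas at 3.0 %/yr over 25 years:  x{compound(0.030, 25):.2f}")
```

A constant 3 % growth rate doubles gas production in under 25 years, which is exactly the sort of assumption his own saturation curve argues against.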

Odell assumes that carbon-fuel prices will remain very low and that renewables will not improve. There is contrary evidence in his own book. For example, Germany holds 6.0 % of the world's coal reserves, yet is the third-largest importer. As Odell states:

Germany, the declared reserves of which ought probably to be significantly discounted as most of the country's coal is either uneconomical to produce, or too polluting to use in the context of stringent emissions policy...

The idea that reserve claims in general are inflated because not 100 % of reserves can be economically exploited does not appear to have occurred to the author. Consider, however, that he shows world estimates for coal reserves went from 8,166 Gton in 1936 to 11,423 Gton in 1976 to 6,246 Gton in 1997. Consumption was only about 250 Gton over this time. Even worse, he states that, "Of this lower total in 1996, however, almost 50 % was defined as unlikely to be exploited except in extremis." So reserve estimates have been shown to be extremely over-optimistic for coal -- the oldest carbon fuel -- but Odell assures the reader that, thanks to seismic tomographic imaging, oil and gas reserve estimates are reliable. It is probably worth pointing out that coal reserves are quite enormous. Since an overriding theme is that the most likely future is one where historical trends continue, the failure to examine the volatility of coal and oil reserve estimates represents a major oversight of that theme.
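A quick check of the coal numbers Odell himself quotes shows how little of the decline is explained by actual consumption:

```python
# Coal reserve estimates quoted by Odell, in Gton.
reserves = {1936: 8166, 1976: 11423, 1997: 6246}
consumed = 250  # approximate cumulative consumption over the period, Gton

# How much of the fall in the estimate was actually burned?
decline = reserves[1976] - reserves[1997]
unexplained = decline - consumed
print(f"Estimate fell by {decline} Gton; only {consumed} Gton was consumed, "
      f"leaving {unexplained} Gton of pure re-estimation.")
```

In other words, roughly 5,000 Gton of "reserves" simply evaporated on paper.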

While Odell gives credit to technology for keeping the price of carbon fuels down and improving energy efficiency, he does not extend the same faith in R&D to renewables. This is a fallacy. Clearly, as carbon fuel reserves are depleted, the highest quality reserves are consumed first, which puts upward pressure on prices as production shifts to lower quality deposits (like tar sands). This isn't just a matter of technology: the energy return on energy invested (EROEI) has been falling steadily for oil since the 70s. If the historical trends of increasing carbon fuel prices and decreasing renewable prices continue, the price break should occur before 2050. At that point, we will have a paradigm shift in the economy because non-fuel-consuming "renewable" power production will have become cheaper than oil or gas. Some sort of full-width-at-half-maximum clearly needs to be incorporated into the analysis of carbon fuel reserves.
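The price break can be illustrated with a toy compound-growth model. Every number here is a hypothetical assumption of mine (starting prices and trend rates), not a figure from Odell; the point is only that modest opposing trends cross surprisingly quickly:

```python
# Illustrative only: hypothetical starting prices ($/MWh) and annual trends.
carbon_price, renewable_price = 40.0, 120.0    # assumed 2005 values
carbon_growth, renewable_decline = 1.03, 0.95  # assumed +3 %/yr and -5 %/yr

year = 2005
while renewable_price > carbon_price:
    carbon_price *= carbon_growth        # carbon fuel gets more expensive
    renewable_price *= renewable_decline # renewables get cheaper
    year += 1
print(f"Under these assumptions the price break occurs around {year}.")
```

Even starting with renewables three times more expensive, the crossover lands well before 2050.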

He uses the abbreviation viz. (for videlicet) excessively. Some editor should have broken him of it years ago. It's too late now.

Overall, the tone of Odell's book is that historical trends are likely to continue as is. I think a major reason for writing it is simply to counter some of the arguments of the peak oilers. His claim that natural gas will replace oil as the fuel of choice in the 21st century seems plausible, assuming that renewable energy technology remains at a standstill. He does believe in the introduction of hydrogen reformed from carbon fuels, and sequestration as a means to control CO2 emissions. He does not believe that the case for global warming has been made, but he does acknowledge that public opinion and government legislation will cause a shift regardless. Overall I find some of his theories intriguing, but I think his proofs are grossly insufficient. His attempt to predict world energy demand by sector for one hundred years is a bit silly. Science and technology can and probably will create paradigm shifts. He may as well have written about the oil consumption of flying cars.

Addendum:

Peak oilers will probably be bouncing off the walls after this 10 page report from the NY Times Sunday Magazine.

20 August 2005

There are two tools available on the internet to do modeling of renewable systems. These systems are useful for bloggers because they let you do quick and dirty simulations, and pop out some good numbers and (more importantly) some pretty pictures.

RETScreen is available in many different languages. It uses Microsoft Excel to perform analysis of renewable systems in a number of different configurations, generally evaluating performance based on statistical monthly averages. RETScreen is largely a stand-alone system -- it has a lot of geographical information built in. The documentation is a particular strength; many people will want to read it simply because it helps explain the issues involved in renewables. Economic modeling is RETScreen's other strength.

HOMER is a stand-alone program, and as such it can handle a much denser simulation. While RETScreen might split its model into monthly chunks, HOMER can handle fluctuations on an hourly basis. This makes HOMER useful for modeling the intermittency of solar and wind power. HOMER is also capable of doing brute-force system optimization over a number of variables. While HOMER is more powerful than RETScreen, it requires much more in the way of data inputs. As such, HOMER is better suited to the more advanced user. Personally, since I don't have access to actual utility power data, I tend to simulate data in MATLAB. HOMER's economic model is not comparable to that of RETScreen.

Hopefully some people who weren't aware of these FREE tools will find them useful.

19 August 2005

From an engineering perspective, the entropy content of a unit quantity of energy is representative of the amount of useful work that can be derived from it. By work I generally mean moving something; useful work might be turning a drive shaft, for example, as opposed to waste heat, which cannot move anything.

Thermodynamics gives us a relationship between entropy S and energy E (taking some mild liberties):

dS/dE = 1/T

T in this case is the temperature. High entropy content is bad, so we can see that energy which can achieve a high temperature must have lower entropy, and hence be capable of doing more work per unit mass. This follows obviously for chemical fuels: based on their combustion temperature, the Carnot cycle predicts the theoretical maximum efficiency with which they can do work.

efficiency = (T_hot - T_cold) / T_hot

This result is used in something called exergy (or availability) analysis, which is based on the Carnot cycle efficiency limitations. Exergy is really a wolf (entropy) in sheep's clothing. I won't go into more detail on exergy at this time.
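As a quick sketch, the Carnot limit above is a one-liner. The flame and ambient temperatures here are rough illustrative values of mine, not measured figures:

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum fraction of heat convertible to work between two reservoirs."""
    return (t_hot_k - t_cold_k) / t_hot_k

# Rough figures: a methane flame around 2200 K against a 300 K ambient.
print(f"{carnot_efficiency(2200.0, 300.0):.1%}")
```

Real heat engines, of course, come nowhere near this limit.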

So we can easily figure out the efficiency of chemical fuels and from that either their exergy or entropy density, whichever you prefer. But how about electricity?

We might just say that your standard best electric motor has an efficiency of 95 % and leave it there. But that doesn't really tell us what the fundamental limit is. After all, superconducting electric motors can do better, and do.

I have been digging around in research journals looking for an answer, but have found nothing thus far. As such, I decided to do some basic analysis. The standard model for a metallic conductor -- the Drude model -- states that conduction electrons move freely in a conductor as a free electron gas. A correcting factor, a damping time Tau, is inserted to reflect the collisions electrons have with crystal defects and phonons. Tau can be derived from the conductivity.

From Tau, we can find the drift velocity of electrons under an electric field,

v = e * electric field * Tau / m

where e is electron charge and m electron mass. And we can relate the temperature of an ideal gas (which electrons are in a pure sense) to the individual kinetic energy of an electron,

0.5 * m * v^2 = 1.5 * k_b * T

where k_b is Boltzmann's constant. Solving for temperature I find that,

T = (m / (3 k_b)) * (conductivity * electric field / (electron density * e))^2

The conductivity times the electric field is the current density in a conductor (usually abbreviated J, and I get the distinct impression I'm doing this ass backwards). One could relate the current density to the power density (p) and potential (voltage - V):

J = p/V

However, I think I have again taken a bigger than blog-sized bite, so I'll stop and leave it as an exercise for the reader to realize that the entropy content of electricity is very low indeed. The result that you should take from this is a realization that electricity is the best means of carrying useful work that we have, and probably will ever have.
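To put some rough numbers on this, here is a sketch using textbook values for copper, combining the drift velocity with the kinetic relation above (0.5*m*v^2 = 1.5*k_b*T). The current density is an assumption of mine, about 1 A/mm^2, typical of household wiring:

```python
# Back-of-envelope sketch for copper; all values are textbook approximations.
e = 1.602e-19    # electron charge, C
m = 9.109e-31    # electron mass, kg
k_b = 1.381e-23  # Boltzmann constant, J/K
n = 8.5e28       # conduction electron density of copper, 1/m^3

J = 1.0e6        # assumed current density, A/m^2 (about 1 A/mm^2)
v = J / (n * e)              # drift velocity from J = n*e*v
T = m * v**2 / (3 * k_b)     # ordered drift energy expressed as a temperature

print(f"drift velocity ~ {v:.2e} m/s, effective temperature ~ {T:.2e} K")
```

The drift velocity comes out at a fraction of a millimetre per second, and the ordered kinetic energy per electron is minuscule: electricity carries its energy in the fields, as ordered rather than thermal motion.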

A comparison of electricity to hydrogen is very illuminating. The 2nd law of thermodynamics is rather explicit. If you are reading about a hydrogen powered system, take note of its electricity powered equivalent. In all likelihood, the electrical system is more efficient. And if an electrical system outperforms hydrogen now, it probably always will. I can see now that I probably should have just spewed forth numbers and arguments regarding reversibility rather than doing the analysis, but I do call myself Entropy Production for a reason. Among chemical fuels, hydrogen is king when it comes to an entropy (or exergy) analysis: it can do more work per unit mass than any other fuel (except maybe acetylene). However, it remains just a chemical fuel.

17 August 2005

Just a quick comment. I am trying to discuss some issues in solar power but I find my arguments quickly degenerating into science babble.

Most people reading this blog probably know that the visual spectrum runs from about 700 nm (red) to 400 nm (violet). Beyond those limits lie the near-infrared and ultraviolet spectra, respectively. Unsurprisingly, the majority of the energy emitted by the sun is at these wavelengths -- our eyes have evolved to see it. The Sun has a surface temperature of about 5750 K (it varies somewhat), which corresponds to a peak wavelength of about 504 nm (green).

There is a fairly simple way to relate the energy of a photon to that of an electron. Energy relates to wavelength by the formula:

E [eV] = 1240 / lambda [nm]

So a photon with a wavelength of 1240 nm in the infrared has an energy of 1 eV. An electron volt (abbreviated eV) is the energy gained by an electron accelerated through an electrostatic potential of one Volt. We can convert 1 eV = 1.6 x 10^-19 Joules. As you can see an eV is a small quantity of energy.

You'll often see band gaps in photovoltaics expressed in terms of eV. Silicon is characterized as having a band gap of 1.1 eV. In comparison the sun's peak wavelength has an energy of about 2.5 eV.
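The conversion both ways is a one-liner; the figures in the comments follow directly from the 1240 rule of thumb above:

```python
def photon_energy_ev(wavelength_nm: float) -> float:
    """E [eV] = 1240 / lambda [nm] -- the standard rule of thumb."""
    return 1240.0 / wavelength_nm

print(photon_energy_ev(504.0))  # solar peak: about 2.46 eV
print(1240.0 / 1.1)             # silicon's 1.1 eV gap: about 1127 nm
```

So silicon absorbs everything from the near-infrared on up, but photons well above the gap waste their excess energy as heat.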

15 August 2005

Quite often we see suggestions for solar energy systems that rely on mirrors or lenses to collect and concentrate insolation. Solar-thermal devices need the higher irradiance to achieve the temperatures that make thermal cycles practical. Some photovoltaic devices, like multi-junction cells, also need higher irradiance to reach their full potential. Unfortunately, there is one giant drawback to using optics to focus sunlight: it doesn't work when it's cloudy.

When the sky is clear, light from the sun arrives in parallel rays that can be easily focused. As an example, I modeled a 1 m diameter parabolic mirror receiving 785.4 W of power (equivalent to 1000 W/m^2). I put a 10 cm diameter collector near the focus to collect the light.

Figure 1: Ray trace of a parabolic concentrator

A detector on the end of my collector can record the irradiance map.

Figure 2: Irradiance map under direct sunlight

As you can see, the parabolic concentrator creates a nice, even irradiance on the collector. Most of the energy incident on the concentrator ends up on the collector, except for the occlusion in the center caused by the collector itself. Some 99.3 % of the incoming light ends up on the collector surface. In the real world, due to Rayleigh scattering off the atmosphere, performance is not quite this good.

However, when the sky clouds over, the amount of light scattered by the atmosphere increases enormously and the light that hits the Earth's surface is diffuse rather than specular. We all observe this through the obvious fact that there are no shadows when it's overcast. The impact on the performance of optics is dramatic. I won't bother showing a ray trace, since it looks like garbage, but when I diffuse the incoming light the concentrator performance drops by two orders of magnitude.

Figure 3: Irradiance map under indirect sunlight

The total power on the detector has dropped from 780 W to 7.6 W. Obviously, this is no longer sufficient to drive a Stirling engine.
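The numbers in this example are easy to sanity-check; all the figures below are taken from the model described above:

```python
import math

aperture_d, collector_d = 1.0, 0.10  # metres, from the model above
power_in = 1000.0 * math.pi * aperture_d**2 / 4  # 785.4 W at 1000 W/m^2

direct_capture = 780.0 / power_in   # clear-sky result from the ray trace
diffuse_capture = 7.6 / power_in    # overcast result

print(f"geometric concentration: {(aperture_d / collector_d) ** 2:.0f}x")
print(f"capture drops from {direct_capture:.1%} to {diffuse_capture:.2%}")
```

A 100x geometric concentrator that captures 99 % of direct light collects under 1 % of diffuse light, which is the two-orders-of-magnitude collapse shown in the ray trace.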

The local climate, and in particular the clearness index, is critical in determining whether optics can be used to concentrate solar energy. The clearness index is the ratio of insolation measured at the Earth's surface to extra-atmospheric insolation; the ideal ratio is 1.0. Clearness index data (and lots of other solar relevant information) can be found at NASA's Surface Meteorology and Solar Energy website:

Some people have been working on non-focusing optics to achieve better economics. These often use thin refractive lenses to concentrate light into a slightly smaller blob. Generally they are designed to increase irradiance by a factor of four or so, not tens. The goal with non-focusing optics is to use a cheap concentrator to increase the irradiance on an expensive collector (like a photovoltaic cell).

For the curious, I used the optics modeling package Zemax to produce these results.

12 August 2005

Engineer-Poet provided links to AC Propulsion White Papers on the idea of using a vehicle's battery capacity to perform ancillary services. Alas, it appears I have not yet had an original idea in the field of entropy production.

The top paper is the one to read: A Vehicle-to-Grid Demonstration Project: Grid Regulation Ancillary Service with a Battery Electric Vehicle. It seems they actually put together an electric vehicle and simulated it operating as a voltage regulator. The paper is remarkably numbers-free for a real-world system. Since they are talking about an all-electric vehicle, the electricity numbers are way higher than what we need for a plug-in hybrid. In particular, they need special, high power throughput connections to handle the load. I think most people will agree that the all-electric is dead in the water, since it will never be as flexible as a hybrid car.

I did find some issues to take umbrage with. For one, following the end of the morning commute, they recharge immediately to about 80 % capacity. I think this is largely impossible from a utility point of view (and a huge strike against the moribund all-electric concept). The peaking power necessary to simultaneously recharge millions of vehicles at the end of rush hour would be too expensive to imagine. For the plug-in hybrid, it is much more reasonable to trickle charge slowly over several hours. After all, if you decide to take the car out for lunch, you aren't going to be in danger of running out of juice. You can also drive through more than one state/province per day.

The AC Propulsion paper does explicitly state that the majority of the value of services is provided while the car is connected at home, rather than at the workplace. In a truly deregulated market, power would be cheapest to buy in the dead of night. I.e. the plug-in hybrid can peak shave.

The biggest philosophical difference between my proposal and AC Propulsion's is that they assume their electric vehicle can act as both a sink and a source. I think the hybrid should be a pure sink. My idea is flat out simpler (and hence better). When power can only travel in one direction, you avoid issues like net metering and up-voltage transforming. This does not impact the ability of thousands of plug-ins to do up and down regulation. Let me explain.

When recharging the plug-in, you trickle charge it at some slow rate. If you need to regulate up, you stop the charging process. If you need to regulate down, you increase the rate of charging. The power flowing into the vehicle can be anywhere between zero and the maximum rate. Normally it will sit at some median value, to minimize the peak power demand on the utility and maximize the value of the regulation service it can provide.
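A minimal sketch of this one-way regulation scheme, with hypothetical charger ratings (the 1.5 kW base rate and 3.3 kW ceiling are illustrative assumptions, not figures from the paper):

```python
def charge_rate_kw(base_kw: float, regulation_signal: float,
                   max_kw: float) -> float:
    """Charging power in response to a grid regulation signal.

    regulation_signal runs from -1.0 (regulate up: shed load) to +1.0
    (regulate down: absorb surplus). The vehicle only ever draws power --
    it is a pure sink, never a source.
    """
    rate = base_kw * (1.0 + regulation_signal)
    return max(0.0, min(rate, max_kw))  # clamp to [0, max]

# Trickle-charging at a 1.5 kW median rate on a hypothetical 3.3 kW circuit:
print(charge_rate_kw(1.5, 0.0, 3.3))   # nominal: 1.5 kW
print(charge_rate_kw(1.5, -1.0, 3.3))  # regulate up: charging stops, 0 kW
print(charge_rate_kw(1.5, 1.0, 3.3))   # regulate down: 3.0 kW
```

Sitting at the median rate leaves the full swing available in both directions, which is exactly what makes the regulation service valuable.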

The sort of flexible, predictable demand management provided by the grid-integrated plug-in hybrid is more valuable than supply-side ancillary services. Imagine the ocean floor, with all its ridges and shelves. Now imagine the plug-in hybrid demand is the water. Notice how everything has become oh so much flatter. This metaphor essentially explains how flexible demand can fill in all the fluctuations in the supply and demand for electrical power.

One concept they do introduce that is helpful is the idea of an aggregator in a deregulated market. The aggregator is a business entity that contracts out your vehicle's ancillary capacity and sells the aggregate, decentralized services of thousands of vehicles on the ancillary market. The aggregator basically acts as the middle-man, insulating the consumer from the complexities of the power market.

Imagine this scenario as an aggregator: an organization leases out the ground levels of all the parkades around your office. All the parking spaces on the ground floor are labeled in yellow 'plug-in ONLY'. You hop out of your car, plug it into the provided inverter, and go grab a coffee with the time you've saved hunting for a parking space. When it's time to go home, you unplug your car and drive out, laughing at the line-up of gasoline powered cars waiting to pay the attendant. The aggregator basically barters you a ground-level parking space. In return, you let them use the batteries on your car to regulate the grid. It is a fine example of creative marketing -- you get quality of life back by buying a hybrid. That's worth more than a few pennies on the kilowatt-hour.

In a deregulated market the aggregator might be able to function without much extra regulatory support. However, most power markets in North America are not deregulated, and legislation would still be needed in those areas.

11 August 2005

Canada is rapidly consuming all of its conventional natural gas reserves. Consumption is profligate, especially South of the border. In the USA, supply has not kept pace with demand, due to underinvestment and rampant NIMBYS (Not In My BackYard Syndrome). As a result, the price of natural gas has skyrocketed to around $9.00 / Mbtu. Prices in other parts of the world, such as South America, are almost an order of magnitude less.

Canada is expected to exhaust its conventional natural gas deposits in about 3-4 years. New sources, such as coal-bed methane, will take up the slack, but there is serious danger of a supply disruption. That risk is forcing a lot of chemical industry corporations in Alberta to take a long look at coal gasification to ensure they have a stable supply of natural gas. I feel there should be a nuclear steam plant supplying all the bitumen refineries in Fort McMurray, but I don't really have a lot of political influence with King Ralph.

Another possible fix is the construction of a Liquefied Natural Gas (LNG) terminal in Prince Rupert. The federal and provincial governments both recently promised money to expand the port, and it is the closest West Coast port to Edmonton (and Alberta's chemical industries). Excess capacity could be shipped down south. Shipping liquefied natural gas from South America to Prince Rupert, and then back down to the Northwest USA, may seem absurd. That's generally where NIMBYS gets you, however.

09 August 2005

Now, for part two of encouraging the introduction of the (plug-in) hybrid:

The best way I know of to encourage more efficient automobile buying habits is called a feebate. A feebate is a zero-sum tax-slash-rebate on motor vehicle fuel economy. Car buyers with fuel economy above the mean get a cash rebate, while gas guzzlers must pay an additional tax. The government simply acts as the creditor in the exchange -- it gains no net tax revenue. I.e. it is a very socialist concept. The nature of the feebate helps offset the higher capital cost of a hybrid vehicle. Increased mileage is, of course, a built-in reduction in operating costs.
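The zero-sum mechanics are easy to express. The fleet mean and rate below are hypothetical figures of mine, chosen only to illustrate the arithmetic:

```python
def feebate(vehicle_gpm: float, fleet_mean_gpm: float,
            rate_per_001: float) -> float:
    """Zero-sum feebate: positive = rebate, negative = tax.

    rate_per_001 is dollars per 0.01 gallons-per-mile below (or above)
    the fleet mean fuel consumption.
    """
    return (fleet_mean_gpm - vehicle_gpm) / 0.01 * rate_per_001

# Hypothetical fleet mean of 0.040 GPM (25 mpg) at $500 per 0.01 GPM:
print(feebate(0.020, 0.040, 500.0))  # a 50 mpg hybrid: $1000 rebate
print(feebate(0.066, 0.040, 500.0))  # a ~15 mpg guzzler: about -$1300 tax
```

Because the schedule pivots on the fleet mean, rebates to efficient buyers are funded by taxes on guzzlers and the treasury nets out to roughly zero.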

There have been a number of academic papers on the feebate concept. The most recent one, which I will regurgitate here, is

The authors examined a number of feebate programs, at $500 and $1000 per 0.01 bushels per rod (or that might be gallons per mile). The impact of the program on average mileage is heavily determined by how far-sighted the consumer is. Since the consumer is myopic with a 4.67x diopter, most of the analysis is based on a 3-year amortization period, rather than the assumed 14-year lifespan of the vehicle. The last key point is that they aren't using an absolutely zero-sum feebate -- there is a very small ding to government finances.

The results?

$500 per 0.01 GPM: a 12.5 % gain in car mileage and a 25.1 % gain in light truck mileage.

$1000 per 0.01 GPM: a 24.6 % gain in car mileage and a 40.7 % gain in light truck mileage.

They also looked at a pure gas-guzzler tax and a pure rebate. Neither is nearly as effective as the combination. The results appear quite significant, especially in terms of the impact on the evil soccer mom's SUV.

One of the conclusions is that buying patterns aren't significantly altered by a feebate. Instead, it mostly acts to facilitate the purchase of more technologically advanced models. Overall unit sales decrease slightly, but the total monetary value of car sales increases with the higher capital costs. Of course, this favours Japanese automakers over the technologically laggard domestic manufacturers.

Of course, any results from a study that tries to mathematically model consumer buying habits should be taken with a grain of salt. Treat these numbers as approximate only. I think a key element of any feebate program is a truth-in-advertising provision: if advertisers are forced to include the feebate in their listed prices, I think the impact on vehicle choice would be more pronounced. The study cannot effectively model the state of mind of the consumer. As we saw during the Middle East oil crisis, if North Americans think oil is expensive, they will buy more fuel efficient cars. The actual price of oil is not so important.

As usual, I do not support specific subsidies for hybrids themselves. Otherwise we may see the absurdity of the Hummer Hybrid. Energy Outlook has a good discussion of this general issue here:

08 August 2005

The plug-in hybrid is a hybrid electric-gasoline powered automobile with a significantly larger battery capacity. Unlike the normal hybrid, it can be plugged into the utility grid, allowing its batteries to be charged with electricity rather than by the gasoline engine. Some people like to call the plug-in hybrid the GO-HEV (Gasoline Optional-Hybrid Electric Vehicle). Personally, I have a pathological hatred of acronyms.

Since most people only commute a short distance every day, the plug-in has the potential to shift the majority of the transportation load off oil and onto electricity. At the same time, the plug-in hybrid retains the range and acceleration of its simpler basic hybrid cousin.

According to the Office of Energy Efficiency (of Canada), the total annual energy consumption of the passenger transportation sector was 1,322.4 PJ in 2003. That number corresponds closely with the gasoline consumption of the nation. The freight transport sector consumed 945.8 PJ (mostly diesel). Total overall energy consumption was 8,457.3 PJ, so passenger transport accounts for about 15.6 % of the total energy use of the country. About half of the cars on the road are driven less than 30 km a day. Current plug-in hybrid technology is easily capable of meeting this demand. Thus, in a country of about 33 million people, the plug-in hybrid is realistically capable of shifting 500 PJ of load from oil to electricity. This works out to about 11.5 kWh per person, per day! That's a lot of electricity demand. I personally only consume about 7 kWh per day, and I live alone with electric baseboard heat and an electric range.
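The 11.5 kWh figure is straightforward to verify from the numbers above:

```python
# Checking the arithmetic: 500 PJ/year spread over ~33 million people.
shifted_pj = 500.0
population = 33e6
joules_per_kwh = 3.6e6  # 1 kWh = 3.6 MJ

kwh_per_person_per_day = (shifted_pj * 1e15 / joules_per_kwh
                          / population / 365)
print(f"{kwh_per_person_per_day:.1f} kWh per person per day")
```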

The major drawback of the plug-in hybrid is its higher capital cost from the extra battery capacity. The other issue of the plug-in is that it is hardly a better solution if the electricity comes from coal or some other polluting resource.

The solution is two-fold. The first solution is to give the utility companies charging control over the plug-ins. This necessitates not only a 220 V wall plug, but an internet connection for a plug-in hybrid. The other side of the plug-in hybrid coin is a feebate program to reduce the associated capital costs. I will have to talk about that in another post.

The advantage here is that it creates a large, flexible demand for the utilities to fill (obviously the utility will have to guarantee a given level of charge in the morning). Why does this matter? Let's take a look at Texas' daily ramping power demand:

We can see that the demand varies from about 30,000 MW in the early morning to a peak of 50,000 MW at mid-day. (Texas' grid is almost completely independent of the rest of the USA; there's probably less than 1000 MW of transmission connectivity between Texas and the outside world.) That's a big swing, as it means that during the course of a day almost 70 % more capacity has to be warmed up and brought on-line. Generally speaking, this doesn't happen; power produced in the morning is often simply wasted (known as power shedding). Giving the utilities control over charging hybrids will give them the ability to fill in that valley, operating more efficiently by smoothing the demand variance -- having demand follow supply rather than the other way around. Of further benefit, it will allow much more reliable forecasting, so power will not be wasted by overestimating demand.

Just how much flexible demand can the plug-in hybrid create? Consider that I claimed plug-in hybrids could supplant 500 PJ per year for Canada, and Canada consumed about 1900 PJ of electricity in 2003. Then roughly a quarter of total electricity demand could become 'flexible' through the wide-spread introduction of the plug-in.

Of course, plug-in owners need to be compensated for providing this service to the utilities. Collectively, transmission services such as load-following, voltage-regulation, spinning reserve, etc. are known as ancillary services in the grid world. With controllable demand, the plug-in can take over many of these functions when acting as a demand sink. How valuable are ancillary services? Typically they will run from $0.01 - 0.03 / kWh, a significant chunk of the cost of electricity. If plug-in owners are paid for the ancillary services they are de facto selling, and their rates are reduced appropriately, they could see a very significant reduction in the cost of running their vehicle.

This flexible demand is valuable now, with our current energy grid. If we introduce more and more intermittent sources (i.e. wind, solar, tidal, wave), that flexibility will become even more valuable. Hence renewables and the plug-in hybrid are complementary technologies.

This idea, of course, will go nowhere without government intervention. As hybrid vehicles become more common and the introduction of the plug-in appears on the horizon, it would be prudent for governments to establish rules regarding their use and interaction with the grid. Otherwise we risk extra strain on the grid, as demand skyrockets whenever rush hour ends and millions of plug-ins start eating electrons.

06 August 2005

The good news is there's a large increase in factory production capacity coming on-line in 2006. The bad news is that polycrystalline Si cells are made from the microprocessor industry's waste silicon. Once the PV industry runs out of waste silicon to buy on the cheap and has to zone-refine and grow its own crystal, the cost will increase permanently.

This is a good example of Germany "picking the winner", even though the technology might not be economical. The Danes have 'picked' wind power, for example. If they didn't have Norway's hydropower to tap into, they would be screwed right now; the rest of their electricity production is cogeneration coal, which can't load-follow worth a damn.

Government and politicians picking environmental power technologies is a good way to give environmentalism a black eye over credibility. In the future, this PV craze may hurt the economies of California and Germany through a poor return on investment. Photovoltaics could suffer the same credibility failure as nuclear, which was going to be "too cheap to meter."

Subsidies should be directly aimed at CO2 and other pollutant emissions, not particular technologies. Let engineering and the economy sort out the winners and the losers on their own.

EDIT (August 8th):

I don't think I made my point very clear in my original post.

What I want to say is that these cost-inefficient silicon solar cells have no long term potential for on-grid applications. They are way too expensive. Only novel photovoltaic and solar thermal projects have the potential to produce economical electricity.

These subsidies do nothing to encourage corporations to invest in solar power innovation. Instead, they only encourage companies like BP Solar and Sharp to over-invest in Si-photovoltaic production capacity. There is no long term value in this investment.

If you want an example of how screwy government subsidies can stifle innovation look at how Boeing and Lockmart operate in the US aerospace industry.

Once publicly traded corporations start feeding at the public trough they are loathe to stop.

I just spent 4 days of my life attending the Microscopy and Microanalysis 2005 conference in Waikiki. I think Waikiki in Hawaiian means 'tourist trap hell'. Contrary to expectations, Waikiki beach is not populated by Maxim girls.

I was fortunate enough to win a student travel award from the Microscopy Society of America to go there and do a platform presentation. I even got a plaque, with gold leaf embossment. I got really excited as I imagined the surface plasmons of the gilded plaque coupling with the fluorescent lighting to create that glorious and lustrous shine. The back side has a bevy of mounting options, so I can display it to my office mates at school. (This post has long been tongue in cheek if you haven't noticed.)

I learned a number of things by attending the conference.

Electron microscopes are phallus shaped -- everyone likes to compare.

Everyone uses their 'unit' to look at god, err... gold even. The more money you have, the better you can see god.

If you don't have the latest-greatest accessory for your unit, you can't do good research, unless you are a graduate student.

15 minutes is not enough time to say anything aside from, "clear as mud?"

Don't try and do wind sprints after being on an airplane or bus for 20 straight hours unless you want to pull both hip flexors.