
Zooming into the King Abdullah University of Science and Technology (KAUST) in Thuwal, Saudi Arabia

I’ve spent the past week in Saudi Arabia at the Solar Future 2015 Symposium, an annual conference on the basic science, devices, and systems of solar energy, hosted by the King Abdullah University of Science and Technology (KAUST). KAUST is one of the world’s newest universities. It was established in 2009, but its massive endowment—$20B today, a small step from Harvard/Yale/Stanford-land—has accelerated its progress toward its stated goal of becoming a global “destination for scientific and technological education and research” and its unstated goal of becoming a gushing geyser of Nature and Science papers.

Nestled in the welcoming desert of western Saudi Arabia, KAUST is modeled after Caltech, which is a small technical school in the welcoming desert of the western U.S. It even borrowed Caltech’s president in 2013.

KAUST only offers graduate education (900 master’s and PhD students today). Notably, 70% of its students are international (4% from the U.S.). Even more notably, 37% are women. All this in a country where alcohol is forbidden (for some reason, grape juice without preservatives is in high demand…); books and movies are censored (KAUST has the only movie theater in the entire country); and women can’t travel without a male chaperone, drive a car, try on clothes while shopping, or even interact with single men. That said, KAUST is far more liberal than the rest of the country, so much so that entry and exit must be controlled by 2 stages of security checkpoints—lest some un-abaya-ed female driver wander off campus and confuse the hell out of the Mutaween (religious police). KAUST is the very definition of a campus community: Everyone in the community lives and eats and works and plays on campus—there’s simply nothing else around except desert.

Care for some non-alcoholic grape drink?

Of all the things I observed in Saudi Arabia—admittedly not much besides KAUST—I was most impressed and dismayed by the campus itself, which seems to be a collection of ultra-modern facilities with no one around to use them. Everything is new and shiny, from the rec center to the research labs to the central walkway (“the Spine”) to the campus library. Yet the whole place feels deserted; even at noon on a Tuesday, our tiny conference delegation far outnumbered the few KAUST community members in sight.

The research facilities are incredibly lavish; it’s almost as if the funding flows straight out of the Arabian desert. I can’t think of a single piece of equipment for nanoscale fabrication and characterization that they DIDN’T have. Here’s a quick snapshot of the shared labs at KAUST and the Solar & Photovoltaics Engineering Research Center (SPERC):

Take my word for it: These capabilities are enough to make materials and device researchers drool all over their bunny suits (and this is only what I saw in person!).

Hella gloveboxes.

The symposium itself was fascinating, with talks by academics and industryfolk on the full spectrum of topics relevant to solar energy, from the photophysics of bulk heterojunctions to the newest developments in crystalline silicon, from power electronics to energy storage. It was a small single-track conference (similar to a Gordon conference)—meaning there was only one talk, poster session, or event happening at once, and everyone stayed together the whole time, including at every meal. I really like this type of conference: It leads to more continuity, in the form of deeper conversations that bleed over from hour to hour and from day to day.

Among the many topics of the week, I had interesting chats about silicon feedstock and material scaling limits with Bruno Ceccaroli, whether quantum dot solar cells have a future with Victor Klimov (Los Alamos), solar’s role in combatting climate change with Greg Wilson (NREL), why fracture energy matters for solar cells with Reinhold Dauskardt (Stanford), and the future potential of perovskites with pretty much everyone, including my esteemed friends and colleagues at MIT (Sam Stranks) and at Stanford (Tomas Leijtens and Nick Rolston).

All in all, I’d call it a successful trip—especially since Saudi Arabia doesn’t issue tourist visas, which means I probably won’t get another opportunity to visit unless I take a job delivering new TEMs to KAUST.

I’ll leave you with a few selfies.

KAUST’s Hoover Tower

The Grand Mosque

Caught in C60

With Sam, Nick, and Tomas

Selfie with Sam.


I just got back from a 10-day trip to northern Italy, a combination of work and play. The work part was for a research workshop with Eni, a gigantic company you’ve probably never heard of unless you have a thing for 6-legged dogs.

The 6 legs represent the 4 wheels of a car and the 2 legs of its driver.

As a token of goodwill and everlasting research funding (haha NOT), Eni shipped us a human-sized stuffed dog-dragon-thing. It has many uses, including taking up office space and scaring undergrads.

Eni is the Italian national oil and gas company, known by some as “the state within the state” for its outsized influence in Italian politics. The company is trying to reinvent itself as an “energy company,” drilling for crude oil in North Africa with one hand while funding solar research internally and at MIT with the other.

It’s a tough balancing act. Oil supermajors (Exhibit 1: Exxon) aren’t particularly well known for believing in human-caused climate change, much less supporting a wholesale shift away from their hydrocarbon lifeblood***. It’s not clear to me whether their support of renewable energy research is merely a good PR move or reflects a genuine desire to save us all (OR perhaps just a hedge in case the world actually decides to do something about climate change). The real answer is probably (d) all of the above.

***Although it’s noteworthy (but not altogether surprising) that a few small oil companies—BP, Shell, Eni, Total, and Statoil—recently urged the U.N. to place a price on carbon.

In any case, Eni seems to be taking one small step in the right direction. During our visit to Eni’s headquarters in San Donato (just south of Milan), CTO Roberto Casula at least used all the right words in talking about climate change: that we need to start the low-carbon transition today, that an investment in renewables is an investment in the future, that Eni needs to become not just an oil company but an energy company, etc. And Eni has its own team of 20 researchers working on solar cells. They do great work on polymer PV, but I can’t help but laugh/cry when an “energy” company with 85,000 employees dedicates just 0.02% of its workforce to developing a key part of the future energy system. Maybe I’m too sentimental.

—

After the workshop in San Donato, I did some solo traveling, visiting the World Expo in Milan and posing as an Italian in Turin (~1.5 hours west of Milan by train) with Francesca, an MIT friend and colleague who grew up there and was kind enough to show me around her hometown and introduce me to sambuca.

Here are a few highlights from the trip:

With MIT colleagues in San Donato

Duomo di Milano. Right at the center of Milan, the Duomo is the largest cathedral in Italy and far too ornate to comprehend.

At the Duomo with Vladimir and Pat Doyle. (Selfie photo credit: Vladimir Bulović)

Papa Francesco visited Torino while I was there. Unfortunately I missed his call…

Oddly enough, Turin is the home of one of the biggest collections of Egyptian artifacts in the world. Here’s the Gallery of the Kings at the Museo Egizio di Torino.

Focaccia at Perino Vesco. Don’t miss this bakery in Torino.

Taking a ride in the Turin Eye, the world’s largest tethered hot-air balloon.

TIL 1 kilogram of apricots = ~25 apricots. Clearly I didn’t know what a kilogram was. I wanted a snack; I got a stomachache.

Sardinian cuisine with Francesca and friends, captured in full Polaroid glory.

Politecnico di Milano, the largest technical university in Italy and sworn enemy of its Torino counterpart.

Navigli: Ancient canals and home of Milanese nightlife.

The Brera Gallery is a very cool art museum in the heart of Milan.

The Kiss (Hayez 1859). So Italian.

Kicking it at the World Expo. The theme was “Feeding the Planet, Energy for Life”, aka FOOD!

Main street of the Expo.

Food.

20,000 LEDs form a room-sized floor display in the China pavilion.

With Francesca at the China pavilion.

Nutella restaurant? Count me in.

Vertical farm at the Israel pavilion.

Aquaponics = Aquaculture (raising fish in tanks) + Hydroponics (growing plants in water). The fish poop, bacteria break down the poop into nitrates, and the plants use the nitrates as fertilizer. Unfortunately you still have to feed the fish.

What is this thing, and why is it here?

The UK beehive pavilion. The pavilion is connected to an actual beehive in Nottingham: In the pavilion, speakers and LEDs generate noise to reflect the real-time activity of bees in the actual hive.

Good ole USA.

Supermarket of the Future: This was very cool. It’s an operational supermarket with a bunch of high-tech stuff. Point at food and the screens above show you the nutritional information, price, etc. Robot arms pick up and package fruit. And after you check out, you get to carry your grocery bags around the Expo all day. Yay!


With my advisor Vladimir and former U.S. Secretary of State/Treasury/everything George Shultz

I gave a talk last week that made me a bit nervous.

The audience? The External Advisory Board of the MIT Energy Initiative (MITEI). The EAB deeply influences MIT’s energy research direction and represents a large fraction of research funding on campus. It also happens to have on it some of the more distinguished people and fancy titles in the world of energy and climate, including former members of Congress, ex-Secretaries of State/Energy/etc., heads of national organizations, VCs, oil and gas executives, and a couple Nobel laureates for good measure.

Yep. I was a bit nervous.

This was an unusual opportunity. I was invited to give a 15-minute talk on solar photovoltaic technology as a member of the MIT Future of Solar Energy Study panel. The Solar Study is the latest of a series of MITEI-sponsored “Future of ____” reports meant to inform the public and guide policymakers in D.C. on the current status and future trajectory of leading energy technologies. The report won’t be ready until the end of the year (fingers crossed), but naturally the EAB wanted to hear all the juicy details firsthand.

Given the audience, I was expecting to be regularly interrupted and thoroughly questioned, especially by our friends from Shell and Saudi Aramco. I was mistaken. My talk went smoothly, no one interrupted, only a couple people fell asleep, and several questions during the panel sparked interesting conversations. No sweat.

That evening, the EAB and all of the speakers ended up at President Reif’s house for dinner. A few highlights: (1) Kerry Emanuel gave a talk about the science of climate change. (2) My advisor Vladimir introduced me to George Shultz, former U.S. Secretary of State (during the Reagan administration) and current Hoover Institution fellow at Stanford—93 years old and still going strong. (3) Dinner was delicious.

What a day.

***After the panel, I was chatting with one of the board members, Frances Beinecke (NRDC president), and found out that she marched in the People’s Climate March! Awesome.


Never would I have thought that something as distant and mundane as the climate could upset me so.

But I’ve been pondering climate change more and more with every passing week, and more and more it’s becoming clear that we (humans) have two options if we want to avoid the icky consequences of climate change:

1. Stop burning fossil fuels for energy.

2. Find a new planet to live on.

Tongue-in-cheekiness aside, I truly do believe in Option 2—space exploration—as a vital long-term goal for humanity. Unfortunately, the time for climate action is limited, and new worlds aren’t built in a day. So today I thought it would be a good idea to link climate change with my own interests and background in energy, to summarize and share some of the unadulterated facts. I hope to inspire you to take a moment to think, to personalize and internalize the numbers, to dedicate your own prodigious intellectual resources to finding a solution to our shared little problem.

I draw numbers and inspiration (and a few direct quotes) from a 2009 review paper by Stanford’s Mark Jacobson, whose work I highly recommend to anyone interested in energy and the environment.

Climate Change and Air Pollution

Key point: Climate change and air pollution have a host of deleterious effects on human and environmental health.

Climate change (or global warming) is caused by an increase in the greenhouse effect, induced by human emissions of greenhouse gases and particles into the atmosphere, with CO2 the most significant contributor.

Air pollution is the 6th leading cause of death worldwide; it has been implicated by the WHO in 2.4 million deaths worldwide each year. Indoor and outdoor pollutants can increase ER visits, reduce worker productivity, and induce all of the following health conditions: asthma, respiratory illness, cancer, and cardiovascular disease.

Slightly troubling note: Human-created aerosol particles in the atmosphere—sulfates, nitrates, chlorides, ammonium, potassium, carbon, H2O—mask ~50% of the current global warming potential; if we reduce air pollution, much of that warming potential will be unveiled.

Energy Production and Use

Key point: Both climate change and air pollution are fundamentally linked to energy production and use; both arise from the burning of solid, liquid, and gaseous fuels. Solving these problems will thus require fundamental changes to the energy sector.

A few important numbers on energy and emissions:

Global energy production = 133 PWh/yr = 15.2 TW (2005)

Global electricity production = 18.2 PWh/yr = 2.1 TW (2005)

25% of total US CO2 emissions are directly exhausted from vehicles on the road; another 8% are due to production and transport of fuel. Operation of fossil-fuel-burning vehicles thus accounts for over 33% of total US CO2 emissions, much of which could be eliminated by powering vehicles with non-emitting or lower-emitting technologies.

In the US, each person uses an average of ~14,000 kWh of electricity per year.
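To make that number personal, I like converting annual energy use into continuous average power. A quick sketch (the 8766 hours/year is just 365.25 × 24):

```python
# Convert annual per-capita electricity use (kWh/yr) to average power (kW).
HOURS_PER_YEAR = 8766  # 365.25 days x 24 hours

def avg_power_kw(kwh_per_year):
    """Continuous average power (kW) equivalent to an annual energy total."""
    return kwh_per_year / HOURS_PER_YEAR

us_per_capita_kw = avg_power_kw(14_000)  # ~1.6 kW per person, around the clock
```

In other words, the average American's electricity use is like running a hair dryer on low, all day, every day.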

When evaluating alternative energy technologies, we need to think about several key metrics and consequences:

Resource availability: How much raw energy is available in a given energy resource? A renewable technology that’s extremely efficient and always available may have very little impact on climate change or pollution if the available potential energy is not enough to satisfy a major fraction of total energy demand.

Total CO2 emissions: Emissions can be measured in grams of CO2 equivalent per kWh (g CO2e/kWh): the mass of CO2 that would produce a global warming effect equal to that of whatever a particular technology actually emits, per unit of energy produced (see clarification here: http://www.skepticalscience.com/Carbon-dioxide-equivalents.html).

Lifecycle emissions: How much CO2 (equivalent) is emitted during the production, operation, and disposal of a given technology for each unit of energy generated? Higher lifecycle emissions make a technology less attractive for mitigating climate change and air pollution.

The energy payback time is the amount of time for which a power plant must operate to produce the same amount of energy that was required to build it.

A longer technology lifetime amortizes the fixed CO2 emissions over a longer period of energy generation, reducing the emission per unit of energy produced.

Opportunity-cost emissions: If a renewable generation source takes a long time to plan and deploy, significant emissions may occur during that period; those emissions could have been eliminated if a more quickly deployable renewable source were used instead.

The planning-to-operation time encompasses the time to site, finance, permit, insure, construct, license, and connect the technology to the grid.

Such delays allow existing, higher-emitting power generation plants to operate longer, and limit our ability to respond to changing energy needs and the need for immediate action against climate change.

To minimize opportunity-cost emissions, we want generation sources with a long lifetime and fast planning and installation.

Land use and effects on wildlife: Renewable energy sources tend to require more land area than conventional power plants, due to the low energy density of renewable resources. Appropriation of land for energy generation may encroach on or damage wildlife habitats, as well as reducing the total amount of land available for alternative human use.

Vulnerability to disruption: Disruptions in the energy supply affect all human activities that are tied to the electrical grid. Centralized energy sources (e.g., nuclear plants) increase the risk of disruption compared to distributed sources (e.g., wind turbines, solar cells), since a single localized catastrophe can wipe out a large portion of local generation capacity.

Intermittency: Many renewable resources are by nature dependent on the caprices of complex and unpredictable environmental systems. Wind turbines produce energy only when and where the wind blows. Solar cells and solar thermal converters operate efficiently only when the sun shines on them directly, with no clouds, haze, or human obstructions in between. An energy technology that suffers from high intermittency may not be able to reach high penetration—i.e., to satisfy a large proportion of electricity needs—because it would be unable to fulfill electricity demand consistently. Other technologies must then compensate for the shortfall.

Intermittency makes grid-planning more difficult and increases the stress (from ramping) on other generation technologies.

The capacity factor is the ratio of the average power output of an energy source to its peak rated power output, over a given period (often the source lifetime).

The capacity factor can be calculated by dividing the total energy actually produced in a given time period by the total energy that would be produced if the source ran at 100% of its peak output for that entire period: If a 100 Wp solar cell produces 50 W for 6 hours a day and 0 W for the remaining 18 hours, the numerator would be 50 W × 6 h = 300 Wh, the denominator would be 100 W × 24 h = 2400 Wh, and the capacity factor would be 300/2400 = 0.125.
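The worked example above can be written as a one-liner:

```python
def capacity_factor(actual_energy_wh, peak_power_w, hours):
    """Energy actually produced divided by energy at continuous peak output."""
    return actual_energy_wh / (peak_power_w * hours)

# The 100 Wp solar cell example: 50 W for 6 h/day, 0 W for the other 18 h.
cf = capacity_factor(50 * 6, peak_power_w=100, hours=24)  # 300/2400 = 0.125
```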

High intermittency reduces the capacity factor of a given technology.

Several strategies may reduce the impact of intermittency:

Average out variations by combining generation sources from wide geographical areas. This requires a significant investment in grid infrastructure for long-distance electrical transmission.

Use smart meters to charge and discharge electric vehicles at appropriate times to match electricity demand to supply.

Deploy an efficient utility-scale energy storage technology: Charge a battery with a solar panel when the sun is shining, and discharge the battery at night to charge an electric vehicle or iPhone. Aside from pumped hydro (see below), no such technology exists at utility scale today.

Forecast the availability of renewable sources more precisely (e.g., when the sun will shine and when the wind will blow) and ramp up other power plants to offset any excess demand.

Alternative Energy Technologies

Note on energy units: Although terawatts (TW = 10^12 watts (W)) are units of power (energy per unit time), I will use them here to describe the total energy available for each renewable technology. To facilitate comparison with our global time-averaged energy production and demand (~15 TW total, of which ~2 TW is in the form of electricity), I’ve converted the total available energy (given by Jacobson in petawatt-hours (PWh) = 10^15 watt-hours (Wh)) to the continuous available power (in W) by dividing by the number of hours in a year (8766 ≈ 9000 ≈ 10^4 for order-of-magnitude estimates). I think the lack of consistent and personally relevant units cripples clear discussion and understanding of energy issues (what the hell is a tonne of oil equivalent (toe) anyway?), so I like to stick with two units: watts (or kW, MW, GW, and TW) for power, and watt-hours (primarily kWh) for energy.
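The conversion I used for the resource numbers below, sketched in code (the two inputs are the 2005 figures quoted above):

```python
HOURS_PER_YEAR = 8766  # 365.25 days x 24 hours

def pwh_per_year_to_tw(pwh_per_year):
    """Convert an annual energy total (PWh/yr) to continuous average power (TW)."""
    return pwh_per_year * 1000 / HOURS_PER_YEAR  # 1 PWh = 1000 TWh

total_power_tw = pwh_per_year_to_tw(133)  # ~15.2 TW global energy production
electric_tw = pwh_per_year_to_tw(18.2)    # ~2.1 TW global electricity
```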

Concentrated solar power (CSP)

Essence of CSP: Use mirrors to concentrate sunlight onto a receiver, heating a working fluid that drives a steam turbine.

One primary benefit of CSP is the built-in storage capacity: By heating up and storing a material with sufficient heat capacity (e.g., molten nitrate salts (NaNO3, KNO3), steam, synthetic oil, graphite), we can counteract the intermittency of solar energy.

Total available energy: ~1000 TW = ~2/3 that of solar PVs (the area required per MW is ~1/3 larger for CSP than for PVs)

Capacity factor: 0.13-0.25 (without storage)

Lifetime: ~40 years

Energy payback time: 5-7 months

Planning-to-operation time: 2-5 years

Total emissions: 9-11 g CO2e/kWh

Solar photovoltaics (PV)

Essence of PVs: Use an energetically asymmetric semiconductor device to absorb light and generate electricity.

Solar cells can be deployed in large farms (usually 10-60 MWp, proposed up to 150 MWp) or on rooftops (90% of installed PV capacity is currently on rooftops; one estimate puts the future share at 30%). Even the largest solar farms are smaller (in peak capacity) than average fossil-fuel (~600 MWp) or nuclear plants (~1 GWp), reducing the risk of disruption of the energy supply caused by centralization.

Total available energy: ~1700 TW (~20% of this is realistically harvestable, given the low insolation at high latitudes and competing land uses)

Total emissions: 19-59 g CO2e/kWh (depends on insolation and amount of energy used in production)

Wind energy

Essence of wind: Convert the kinetic energy of moving air (wind) into rotational energy in a turbine to generate electricity.

Wind physics

The instantaneous power density of wind (usually 100-1000 W/m2) is proportional to the cube of the instantaneous wind speed. Finding locations with high wind speeds is thus crucial for successful wind power generation.

Wind speeds at a given height follow a Weibull distribution, which for wind can be approximated quite accurately by the Rayleigh distribution (a special case of the Weibull with shape parameter k=2).

Due to shear forces (the friction of air flowing across rough land), wind speeds increase roughly logarithmically with height: Taller wind turbines are thus more efficient. Modern turbines typically have a tower height of ~80 m (the hub or axis of rotation is at that height).

Wind energy requires the smallest footprint of all renewable technologies, although the total land area required is larger due to the array spacing between turbines needed for efficient operation (i.e., each turbine decreases the wind speed behind it, reducing the power density available to nearby turbines).

The area required for each turbine is approximately A = 4D x 7D, where D is the rotor diameter.
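These wind relationships are easy to play with numerically. A sketch (the sea-level air density and the surface roughness length z0 are assumed values of mine, not from Jacobson):

```python
import math

AIR_DENSITY = 1.225  # kg/m^3 at sea level (assumed)

def wind_power_density(speed_m_s):
    """Instantaneous wind power density (W/m^2): 0.5 * rho * v^3."""
    return 0.5 * AIR_DENSITY * speed_m_s ** 3

def speed_at_height(v_ref, h_ref, h, z0=0.03):
    """Logarithmic wind profile; z0 is an assumed surface roughness length (m)."""
    return v_ref * math.log(h / z0) / math.log(h_ref / z0)

def land_area_per_turbine(rotor_diameter_m):
    """Approximate array spacing per turbine: A = 4D x 7D."""
    return 4 * rotor_diameter_m * 7 * rotor_diameter_m

# Doubling the wind speed gives 2^3 = 8x the power density:
# wind_power_density(14) / wind_power_density(7) == 8
```

The cubic dependence is why siting matters so much: a site at 8 m/s yields roughly 50% more power density than one at 7 m/s, even though the speeds differ by only 14%.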

Total available energy: ~72 TW (~6 TW in US)

Average wind speeds of over 7 m/s are required for wind turbines to be economically competitive with other energy technologies. Such speeds exist over ~13% of global land area.

Capacity factor: 0.33-0.35

Lifetime: 20-30 years

Energy payback time: 2-4 months

Planning-to-operation time: 2-5 years

Total emissions: 3-7 g CO2e/kWh (lowest of all technologies surveyed)

Hydroelectric energy

Essence of hydro: Release the gravitational potential energy of water in an elevated reservoir, allowing it to flow through a turbine.

Hydroelectric plants are built on rivers, either by damming (to flood large areas of land and create a reservoir with water storage) or by introducing turbines that do not significantly alter the course of water flow.

Pumped hydro (pumping water uphill, storing it, and releasing it through a hydroelectric plant when electricity is needed) is the only energy storage technology that has been successfully demonstrated at the utility scale (~100 GW, compared to hundreds of MW for all other storage technologies), with storage efficiencies of 70-90%.

Hydropower is the largest installed renewable energy source globally, accounting for 17.4% (~365 GW average) of total electricity production in 2005. Hydroelectric plants are often used as peaking plants since they can start producing power quickly (15-30 s when in spinning-reserve mode) to match demand variations and smooth the intermittency of other generation sources; hydropower thus complements other renewables like solar and wind.

Total available energy: ~2 TW (5% has been tapped)

Capacity factor: 0.42

Lifetime: 50-100 years

Energy payback time: ~1 year

Planning-to-operation time: 8-16 years

Total emissions: 16-61 g CO2e/kWh

Ocean energy: Wave and tidal

Essence of wave energy: Convert the kinetic energy of wind-driven waves on the surface of the ocean to mechanical motion in a floating generator.

The power of a wave is proportional to the density of water (more dense => larger mass (per unit volume) => more kinetic energy), the group velocity of the wave (which is proportional to the wave period), and the height of the wave squared (the intensity or power of any wave is proportional to the amplitude squared).

Total available energy: ~480 GW, given that 2% of the world’s ~8×10^5 km of coastline has wave power density greater than 30 kW/m
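That 480 GW figure follows directly from the coastline numbers (all inputs from the text above):

```python
# Rough global wave resource from the quoted numbers.
coastline_m = 8e5 * 1000       # ~8 x 10^5 km of world coastline, in meters
usable_fraction = 0.02         # share with power density > 30 kW/m
power_density_kw_per_m = 30

wave_resource_gw = coastline_m * usable_fraction * power_density_kw_per_m / 1e6
# 1.6e7 m x 30 kW/m = 4.8e8 kW = ~480 GW
```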

Capacity factor: 0.21-0.25

Energy payback time: 1 year

Planning-to-operation time: 2-5 years

Total emissions: 40-60 g CO2e/kWh

Essence of tidal energy: Use undersea turbines to harness the energy of oscillatory undersea currents caused by the gravitational attraction of the moon and the sun.

The ocean is continuously transitioning between high tide and low tide, with 4 such transitions (2 in each direction) every day. This predictability (6 hours in each direction) makes tidal energy a potential baseload generation source.

Tidal currents must have a speed greater than ~2 m/s for tidal power to be economical—lower than the 7 m/s required for wind energy, since water is more dense than air.

Tidal turbines are usually mounted on the sea floor, with the rotors either directly exposed or preceded by a narrowing duct that directs water toward them.

Total available energy: ~800 GW (~20 GW practical)

Capacity factor: 0.2-0.35

Energy payback time: 3-5 months

Planning-to-operation time: 2-5 years

Total emissions: 34-55 g CO2e/kWh

Geothermal energy

Essence of geothermal: Drive water deep into the ground, allowing the thermal energy in the Earth’s crust to heat it up, and use the hot water to drive a steam turbine.

Dry and flash steam systems are used when geothermal reservoir temperatures are 180-370ºC or higher; both require the drilling of two boreholes, one for steam (dry) and/or liquid (flash) flowing upward and one for condensed water flowing downward.

The flash steam approach (most common today) converts hot liquid water into steam in a low-pressure flash tank, which then drives a turbine.

Neither system (currently) condenses the reservoir gases (CO2, NO, SO2, H2S) released from underground during extraction, instead releasing them to the atmosphere and exacerbating climate change.

Binary cycle systems (~15% of current systems) are used when reservoir temperatures are 120-180°C.

Hot water rising from a borehole is used to heat and evaporate a low-boiling point “binary” or “working” organic fluid (e.g., isobutene, isopentane); the resulting vapor is used to drive a turbine.

Since they are closed-loop systems, binary cycle plants do not emit any greenhouse gases into the environment.

Because they can be operated at intermediate temperatures, binary systems may be the most promising and flexible of the geothermal approaches.

Total available energy: ~170 TW (but only ~100 GW at reasonable cost)

Capacity factor: 0.73 (good)

Lifetime: ~30 years

Energy payback time: 10+ years

Planning-to-operation time: 3-6 years

Total emissions: 16-60 g CO2e/kWh

Nuclear energy

Essence of nuclear (fission): Use slow neutrons to split the nuclei of heavy elements (e.g., U-235, Pu-239) into high-energy products (e.g., Kr-92, Ba-141, neutrons, gamma rays), which collide with and boil water to drive a steam turbine.

The most common nuclear fuels are uranium-235 and plutonium-239.

Uranium is typically stored as small ceramic pellets in metal fuel rods, which are used in a reactor for 3-6 years before being replaced.

The radioactive waste products resulting from nuclear fission must be stored in proper containment to isolate them from the biosphere for many thousands to millions of years (i.e., until the radioactive elements have decayed sufficiently).

The US has more nuclear power plants than any other country (~25% of the world’s total), followed by France, Japan, and Russia.

Total available energy: 0.4-14 TW (lower number for once-through thermal reactors, higher number for light-water and fast-neutron reactors)

Capacity factor: 0.81 (really good)

Lifetime: ~40 years

Energy payback time: 1-2.5 years

Planning-to-operation time: 10-19 years

Total emissions: 68-180 g CO2e/kWh

Fossil energy with carbon capture and storage (CCS)

Essence of CCS: Burn fossil fuels (coal in particular, since it’s abundant) but capture and store the resulting carbon emissions to reduce the impact on climate and air quality.

Worldwide, geological formations have the potential to store up to ~2000 Gt CO2, while we emit ~30 Gt CO2 every year. Clearly CCS is not a permanent solution.
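The arithmetic behind that "not permanent" claim, assuming emissions stay flat:

```python
# How long could geological CO2 storage last at current emission rates?
storage_capacity_gt = 2000  # estimated geological storage potential (Gt CO2)
annual_emissions_gt = 30    # current global emissions (Gt CO2/yr)

storage_lifetime_years = storage_capacity_gt / annual_emissions_gt  # ~67 years
```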

Deep ocean sequestration of CO2 makes the ocean more acidic due to the formation of carbonic acid (H2CO3), and the sequestered CO2 eventually equilibrates with surface levels and is re-emitted into the atmosphere.

This option has been dismissed by most credible sources.

An alternative CCS approach is to combine CO2 with common metal oxides (e.g., quicklime (CaO), MgO, Na2O) to form solid carbonate minerals (e.g., CaCO3, MgCO3, Na2CO3) that can be easily sequestered. This approach requires a lot of raw oxide material and large energy inputs to speed up the reaction (via high temperatures and pressures).

Carbon capture technologies can reduce CO2 emissions by up to 85-90%.

But the energy required to capture and store CO2 will significantly reduce the power output (by 10-40%) of a power plant equipped with CCS technology.

And non-CO2 pollutants are still emitted, at even higher rates than before (more fuel must be burned to generate the same power output).

No major power plant today has successfully implemented carbon capture and storage.

Total available energy: ~1 TW for 200 years (~60% of annual electricity production)

Biofuels

Biofuels can be solids, liquids, or gases derived from organic matter: wood, grass, straw, manure, corn, sugarcane, wheat, sugarbeet, or molasses. These materials can be burned directly (e.g., for heating, cooking, and transportation) or converted into liquid ethanol for transportation.

The sugars and starches in crops are fermented by microorganisms to create ethanol. Cellulose from switchgrass, wood, or miscanthus (a fast-growing grass) can also be converted into ethanol through fermentation, although the process is far slower than for the crops mentioned previously.

E85 fuel consists of 85% ethanol and 15% gasoline.

The amount of land required for bioethanol production is massive: We would need to triple the US land area dedicated to corn farming (to ~10% of the 50 states) just to power all US vehicles with corn ethanol.

The demands on freshwater for irrigation would be similarly immense.

The conclusion?

Wind and solar energy, coupled with baseload hydro and a bit of geothermal, can turn the climate change equation upside down, slashing emissions while improving the long-term sustainability of our energy supply. Biofuels and CCS do more harm than good. Existing technologies can make a big difference already; technology advances will make renewable technologies more and more attractive by both economic and climate-change metrics.

I went to a talk earlier this week by Robert Jaffe, an MIT physics professor. Jaffe does research on particle physics and quantum field theory, but this talk was on energy-critical elements (ECEs)—chemical elements that (a) are critical for one or more new energy-related technologies and (b) don’t currently have established markets and hence might not be available in large quantities in the future.

For renewable energy technologies, materials availability is particularly crucial because renewable resources tend to be diffuse. Consider solar energy: In a typically sunny place like the Bay Area, light from the sun reaches the Earth with an average power density of ~200 W per square meter. With a record-efficiency 1-square-meter solar panel (and high-efficiency storage), you might be able to power one incandescent light bulb around the clock. Not exactly awe-inspiring. (I recommend LED lighting anyway.) The amount of power we can get from a renewable resource is proportional to the area we can cover with solar panels or wind turbines or tidal energy harvesters, which in turn is proportional to the amount of material we need. To scale up renewable generation is to scale up production of the materials used in renewable generation technology—hence the importance of energy-critical elements.
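The light-bulb arithmetic can be made concrete. The ~200 W/m² insolation figure comes from the paragraph above; the panel and storage efficiencies are my own illustrative guesses:

```python
# Diffuse-resource arithmetic: average insolation from the text, efficiencies assumed.
insolation = 200      # W/m^2, time-averaged for a sunny location
panel_area = 1.0      # m^2
panel_eff = 0.25      # record-ish module efficiency (assumption)
storage_eff = 0.80    # round-trip storage efficiency (assumption)

continuous_watts = insolation * panel_area * panel_eff * storage_eff
print(continuous_watts)  # 40.0 -- about one dim incandescent bulb, 24/7
```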

Here are a few examples of current energy-critical elements and why we care about them:

(1) Tellurium (Te) – Used in advanced solar photovoltaics

Tellurium constitutes ~0.0000001% (one part per billion) of the Earth’s crust—it’s rarer than both gold and platinum.

Cadmium telluride (CdTe) currently dominates the thin-film solar cell industry (see First Solar). Suppose that this year we want to produce enough CdTe solar cells to generate 1 gigawatt-peak (GWp) of electricity—enough to power on the order of 500,000 homes (less than 1% of the US). We’ll need ~80 tons of tellurium, around 20% of global tellurium production. To get even 50GWp from CdTe solar cells, we need to increase tellurium production by an order of magnitude. As a point of reference, the world consumes around 2 TW of electrical power (average).
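A quick sanity check on these numbers, using only the figures quoted above (~80 tonnes of Te per GWp, which at ~20% of global output implies roughly 400 tonnes/yr of production):

```python
# Scaling check on the tellurium numbers quoted in the text.
te_per_gwp = 80          # tonnes of Te per GWp of CdTe cells (from the text)
global_production = 400  # tonnes/yr, implied by "80 t is ~20% of production"

def production_multiple(gwp):
    """How many times current annual Te production a deployment would consume."""
    return gwp * te_per_gwp / global_production

print(production_multiple(1))   # 0.2  -> one year's 1 GWp uses ~20% of supply
print(production_multiple(50))  # 10.0 -> 50 GWp needs a 10x production increase
```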

(2) Neodymium (Nd) and dysprosium (Dy) – Used in permanent magnets for wind turbines

These rare-earth elements (REEs) are used to make powerful permanent magnets for wind turbine generators. Offshore wind turbines—which are often difficult to repair, for obvious reasons—benefit most from the use of reliable permanent magnets rather than more-complex induction-driven electromagnets.

(3) Terbium (Tb) and europium (Eu) – Used for lighting and displays

These rare-earth metals form oxides that serve as red (Eu2O3) and green (Tb2O3) light-emitting phosphors. Such phosphors are used in color TV displays to form subpixels and in standard white lights (e.g., CFLs) to approximate the warm color of incandescent bulbs.

The price of Tb and Eu has fluctuated in recent years due to China’s restrictions on exports of rare-earth elements.

(4) Rhenium (Re) – Used in advanced high-performance gas turbines

70% of the world’s rhenium production is used in jet engines (alloyed with nickel) to reduce mechanical deformation under high thermal and structural stresses.

(5) Helium (He) – Used in cryogenics and many research applications

Helium is pretty cool: It liquefies at a lower temperature than any other element and doesn’t solidify at any temperature (at atmospheric pressure, at least). It’s also the least chemically active element and the only element that can’t be made radioactive when exposed to radiation.

Helium is used mostly for energy research, cryogenics—including for MRI scanners—and as a purge gas to flush out liquid hydrogen and liquid oxygen fuel tanks for rockets, since it doesn’t solidify even at ridiculously low temperatures.

But we’re running low: Helium is a byproduct of natural gas production, but since it’s lighter than air, it tends to float off into space once it’s released. And we can’t realistically recover helium from the atmosphere, where it resides at concentrations of ~5ppm.

We need to efficiently collect the helium released as we extract natural gas; otherwise we’ll quickly run out of a critical element with no known substitutes (at least until nuclear fusion becomes feasible, i.e., perpetually 50 years from now).

(6) Indium (In) – Used as a transparent conductor in touchscreens, TV displays, and modern thin-film solar cells

(7) Lithium (Li) – Used in high-energy-density batteries

(8) Platinum (Pt), palladium (Pd), and other platinum group elements – Used as catalysts in fuel cells for transportation

Keep in mind that the “energy-critical” label doesn’t reflect any fundamental difference in the nature of elements within and without this classification, and the subset of elements designated as ECEs will change as energy technologies and our knowledge of materials availability evolve.

Side note: Many “rare-earth elements” are energy-critical elements as well, but are actually more common in the Earth’s crust than most of the above elements; REEs are “rare” simply because they don’t exist as free elements or in concentrated ores that are easily accessible.

Side note 2: Some unlisted elements (e.g., Cu, Al, Si) are also critical to energy technologies, but they have developed markets, exist around the world—i.e., geopolitical issues have less impact on their supply—and are used in many other applications, such that substitutes could be found for those applications and additional supply made available for energy applications if necessary.

Energy-critical elements might not be available because they just aren’t very abundant in the Earth’s crust—the only part of our planet we can reach right now. The crust is mostly made of oxygen, silicon, and aluminum; all other elements exist in very low concentrations and are often hard to isolate and extract. That said, the absolute availability of elemental resources probably shouldn’t be our primary concern. A more insidious barrier to the development of new energy technologies is short-term disruption in the supply of ECEs. Supply volatility causes prices to fluctuate, which in turn disrupts long-term extraction efforts and hinders large-scale deployment of technologies that depend on ECEs.

What constraints could disrupt ECE supply?

One primary peril is geopolitics. When ECE production is concentrated in only a few places in the world, international politics and trade restrictions may dictate the market, which is bad. Take platinum: The vast majority of global reserves of platinum-group metals are concentrated in the Bushveld Complex of South Africa. Technical, social, and political instabilities in South Africa could thus disrupt the availability of platinum, palladium, and other critical elements. Another example is China’s 2010 decision to restrict exports of rare-earth elements, a market in which China enjoys a near-monopoly thanks to its natural geological advantages. Rare-earth element prices spiked briefly before the market readjusted to the current and future possibility of limited supply.

But despite all the political talk about energy independence, keep in mind that the US currently imports over 90% of the energy-critical elements it consumes, and that’s not a bad thing. Different regions have different comparative advantages in the production of ECEs, and only through trade are efficient markets achieved. Complete ECE independence is simply not possible—e.g., we have no viable source of platinum—and even partial independence is not possible without sacrificing many modern technologies. Consider food markets: Can you imagine life in a food-independent US? We can’t grow nearly enough bananas, mangoes, cashews, coffee, or cacao to satisfy our massive national appetite, nor can we survive without them. Why then do we expect full ECE independence?

Another potential risk for disruption lies in the joint production of energy-critical elements—particularly In, Ga, and Te—with conventional ores. Nearly all ECEs are extracted as byproducts of the mining and refining of major metals (e.g., Ni, Fe, Cu) with much higher production volumes and more established markets. The problem then is that the demand for ECEs does not drive production: Their availability is thus constrained by how much of the ECE is contained in the ore of the primary product, and supply is dictated by economic decisions based on the primary product rather than the ECE. This lack of market control renders ECE prices subject to the whims and fancies of major metal markets.

Adding to the uncertainty in ECE availability are the artificially low prices made possible by joint production: Since ECEs piggyback on the mining infrastructure already in place for major metal production, their prices don’t reflect many of the fixed costs of mining and refining. ECE prices will remain artificially low until by-production saturates—i.e., when enough demand exists that byproduct production can’t keep up, making independent mining of the ECE profitable. At that inflection point, however, new energy technologies developed and assessed using current ECE prices may not be able to afford the much-higher true price and will thus fail. One example is tellurium, which currently costs around $150 per kilogram and exists with ~1ppb abundance in the crust. For comparison, consider platinum, which costs tens of thousands of dollars per kilogram despite its ~4ppb crustal abundance. Why is tellurium so cheap? It turns out tellurium is a byproduct of the electrolytic refining of copper, and the large market for copper keeps the supply of the tellurium byproduct sufficient to meet current global demand.

Other risk factors for ECE availability include environmental and social concerns—the refinement of ECEs (e.g., rare-earths) is often a highly destructive process involving unpleasant chemicals, which could make ECE availability subject to environmental policy—and long response times for extraction—it typically takes 5-15 years to bring new mines online, which may be too slow to keep up with the deployment of novel energy technologies.

So what can we do about it?

Large-scale coordination by the government is needed to attack so complex a problem as energy-critical element availability. Providing reliable and up-to-date information on the availability of ECEs to researchers and investors will go a long way toward improving the current situation: With sufficient information, we can shift research efforts toward energy technologies with ECE needs that coincide with ECE availability.

Another potential response is to increase efforts to recycle ECEs. Recycling all ECE-containing products could reduce our dependence on new resources. Consider cell phones: Modern mobile devices contain 40 or more chemical elements—the majority of the stable, non-radioactive elements—and most end up in the back of desk drawers at the end of each 2-year contract cycle. But recycling isn’t a feasible option when considering any growth in the market size, much less exponential growth. Assuming the same efficiency of use over time—e.g., the same amount of Te will be needed to produce a CdTe solar cell with a fixed power output now and 20 years from now—recycling can never keep up with increasing material demands, even with 100% recycling efficiency.
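A toy model makes the recycling argument concrete. This is my own sketch, assuming demand grows by a constant factor each year and products come back for recycling after a fixed lifetime:

```python
# Toy recycling model: demand grows by (1 + g) each year; products return for
# recycling after `lifetime` years. Even with 100% recovery, recycled material
# only covers the (smaller) demand from `lifetime` years ago.
def recycled_fraction(g, lifetime):
    """Fraction of this year's demand that perfect recycling can supply."""
    return 1.0 / (1.0 + g) ** lifetime

# Assumed: 20% annual demand growth, 25-year product (solar panel) lifetime
print(round(recycled_fraction(0.20, 25), 3))  # 0.01 -> recycling covers ~1%
```

The faster the market grows, the smaller the dent recycling can make; only in a flat (zero-growth) market does perfect recycling approach full coverage.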

The take-home message: Many new energy technologies rely heavily on a subset of chemical elements (e.g., He, Li, Te, rare-earths). These “energy-critical elements” (ECEs) are not currently produced in large quantities, and thus their future availability is highly unpredictable and dependent on complex economic, environmental, and geopolitical factors. A shortage of these elements could inhibit the large-scale deployment of promising solutions to the world’s energy needs. We need more people and more money dedicated to identifying potential substitutes, informing researchers and the public about ECE issues, and improving the efficiency with which we extract, use, and reclaim these elements.

I was asked to answer a question on Quora about grad school and preparing for a career in photovoltaics and device engineering—presumably because I’m going to grad school and preparing for a career in photovoltaics and device engineering—and I thought the question and answer might be helpful for those considering going to grad school in engineering.

Here’s the question and context:

How do I choose a graduate program and prepare for a career in solid-state device engineering?

I have a B. Sc. in Electrical Engineering and I would like to work with photovoltaics / solid state device physics. My undergraduate degree is not quite enough to let me work in that field outright. So I’m looking to do a graduate degree.

I applied for a 2-year M. Sc. in Physics program and I was assessed for 2 years’ worth of bridging subjects, for a total of 4 years of study. I think that 4 years is quite a long time. The good thing is that I’ve been talking to a professor who does condensed matter physics and photovoltaics and he’s willing to let me join his group.

On the other hand, I have an option to do a 2-year M. Sc. in EE in the field of Microelectronics or Power Electronics. Which one will be a good way to bridge into photovoltaics?

At this university, the Physics department is the more prolific publisher of research output, both locally and internationally. Not that I’m super rich (or else I wouldn’t be asking this question), let’s take the issue of finances out of the equation. Let’s focus on the time investment (I’m 25) and academic learning benefits.

Time-wise, I’m inclined towards EE; but personally, Physics is more appealing to me. Short term, I’d like to know (with an M. Sc. in Physics) if I can compete with microelectronics engineers for solid state device engineering jobs. Long term, I’d like to do a PhD (for which I’ll need publications to get into a program) in photovoltaics. My professional outlook right after finishing my M. Sc. is that I’ll need to work for a while first before I can proceed to do my PhD. An industry job is preferable since it usually pays more. On the subject of publications, I will have achieved that during my stint in the M. Sc. program.

Conversely, I think that doing the Microelectronics track would let me focus with just the necessary training for solid state device physics and do away with the unnecessary physics topics. I would also have a wider range of career choices, not just in photovoltaics.

What are your thought processes when faced with a dilemma like this? What other factors do you consider?

And here’s my answer:

Simple answer: Go with EE.

Let me explain.

Consider these questions:

“Do I want to go to grad school?”

For you, the answer is clearly “Yes.” But if it’s not 100% clear, stop now and think hard.

“Masters or PhD?”

It sounds like you want to pursue a masters degree now and a PhD eventually. Keep that in mind.

“Do I want to go into industry or academia?”

When you’re deciding whether and where to go to grad school, pondering the industry vs. academia fork in the road will guide your decision and give you a lot of insight into your own ambitions. If you want to go the academic route, I strongly suggest pursuing a PhD as soon as possible—jointly with or immediately after your MSc. But from your question, it sounds like you’re preparing for an industry career in device engineering rather than academic research.

“Where do I want to be in 10 years?”

Suppose in a decade from now you want to be doing innovative engineering work in the photovoltaics or microelectronics industry.

“How do I get there?”

Work backwards.

How many years of industry experience do I need before I can reach my goal? As many as possible. It can take the better part of a year to get acclimated and truly integrated in a new work environment, be it company or school, and it’s hard to innovate before you know the existing system and the current state of the art.

What academic background do I need? At least a couple terms of related engineering coursework beyond the BSc level. Preferably the experience with cutting-edge research that accompanies PhD-level work in any science or engineering discipline.

How long will it take to get a PhD? Around four years (after the MSc).

How long will it take to get a MSc? Two to four years, in your case.

Simple math gives you 10 – 4 – (2 to 4) = AMAP (as many as possible).

Simple math tells you to choose the 2-year masters program in EE.

“Am I committed to getting a PhD?”

If there’s a chance that you might stop after the masters and forgo a PhD—and that’s quite likely if you enter a 4-year MS-only program—go for a masters in engineering, not physics. A masters degree alone in physics is often considered to be impractical at best and useless at worst. Although physical intuition is extremely valuable, you’ll end up taking a lot of required classes that would be useful for academic research but not-so-useful for engineering in industry. The key realization is that if your ultimate goal is to work in engineering, you should work in engineering environments (e.g., academic or industry research labs) as much as possible. Sure, classes are invaluable preparation, but extra classes often yield diminishing returns while extra engineering experience yields increasing returns, at least at these time scales. Given a fixed amount of time in grad school, then, minimize the length of your MSc program in favor of the PhD.

This line of reasoning suggests that if you’re committed to following through with the PhD, it might be logical to pursue a MSc in physics first. But in your case, however committed you may be, that still may not be true. Those two extra years of “bridging subjects”—and tuition payments—are a deal-breaker.

***Caveat: If you can stretch that MSc in physics into a PhD with the same group (i.e., overlap the 4 years of MSc classes with the ~4 extra years for the PhD, for a total of ~6-7 years)—AND you’re committed to working in photovoltaics—go for it and don’t look back.

“Did I choose the right field?”

If you’re going to do research and work in photovoltaics eventually anyway, does it matter? The only difference this makes in a grad student’s life is where you turn in your forms and where you get your free food. And in practice, there’s very little difference between solid-state physics and EE semiconductor device physics. In either case, you can and will take classes in quantum physics, statistical mechanics, and solid-state, and as long as you find a research advisor working in photovoltaics or a related area, you’ll get the experience you need to be successful in the field. Research groups in solid-state devices are often highly interdisciplinary anyway: My group in the MIT EECS department has students and researchers from EE, physics, materials science, chemical engineering, chemistry, and mechanical engineering.

“Which area will best prepare me for a career in photovoltaics: Microelectronics or Power Electronics?”

Microelectronics. Like photovoltaics, micro/nanoelectronics is deeply rooted in semiconductor device physics, and you’ll find that many processing technologies and techniques are shared between the two fields. That said, if you want to work on developing utility-scale photovoltaic systems, taking some power electronics classes would be very useful.

***Here are a couple other things to keep in mind as you decide your future:

1) I don’t believe that you need to work in industry after your MSc before you can start on your PhD.

I went straight into a MS/PhD program in EE immediately after graduating from undergrad. Many grad programs in EE and other engineering disciplines have combined MSc/PhD programs—less so in physics—so pursuing both at once would save you a round of applications and up to a year of total time to graduation. But if getting admitted to PhD programs directly is a concern, consider applying to a MSc program that offers the possibility of continuing on for the PhD (e.g., by taking qualifying exams or petitioning). At many schools, it’s easier to stay in than to get in.

If you don’t apply to grad school while you’re still in school, it will be difficult to get the required recommendation letters from professors—note that letters from professors are the most important part of your application and carry much more weight than letters from engineers or managers in industry. Besides, you can often do internships if you want industry experience.

Many engineers in industry have told me that it’s very difficult to go back to school (for a PhD) after working for a while—you get used to a certain lifestyle (e.g., predictable work schedule, weekends off, no classes, a solid paycheck) that you won’t be able to maintain as a grad student. And once you get married and have a kid or two running around the house, it will become even more difficult to go back to school.

2) I think it’s incredibly valuable for anyone involved in science and engineering—both in industry and in academia—to be exposed to the microelectronics industry and Moore’s Law (the self-fulfilling prophecy driving transistor density in integrated circuits to double every two years). The former touches nearly every aspect of our lives today, and the latter represents a historical upper limit on the time derivative of innovation—pure exponential growth for 4 decades. And although very few (if any) other sectors have growth potential anywhere near that afforded by transistor scaling, I can think of no industry that would not benefit from the relentless driving force of a Moore-esque imperative.
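For a sense of what "pure exponential growth for 4 decades" means, here's the compound-growth arithmetic spelled out:

```python
# Moore's Law: transistor density doubles every 2 years, sustained for 40 years.
doublings = 40 / 2
growth_factor = 2 ** doublings
print(f"{growth_factor:.0f}x")  # 1048576x -- about a million-fold increase
```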

OK, not quite. But I did contribute a tiny bit to my research group’s efforts to develop a new type of solar energy converter that could make a big difference in the way we create and consume energy.

I spent most of this summer working in a multidisciplinary research group under the Stanford EE Department’s Research Experience for Undergrads (REU) program. Our work focused on a new solar energy harvesting concept called Photon Enhanced Thermionic Emission (PETE), dreamed up by Nick Melosh, a MatSci professor at Stanford. I can’t go too much into the details now, since the seminal paper has yet to be published, but PETE holds a lot of potential as a novel source of low-cost renewable energy because, unlike traditional PV (solar) cells, which quickly lose efficiency at high temperatures, PETE actually gains efficiency with increasing temperature, feeding off the heightened thermal energy to aid photoemission. As a result, we can combine the PETE device with a solar thermal converter––which, as a heat engine, can only run efficiently at elevated temperatures––and realize some absurdly high theoretical conversion efficiencies. For those familiar with solar cell operation, PETE can beat the Shockley-Queisser limit by taking advantage of below-bandgap photons and heat energy from hot-carrier thermalization.

Anyway, it turns out PETE, as well as many other optoelectronic devices, can get a pretty significant photoemission efficiency boost from the use of semiconductor nanostructures, like nanowires. For that reason, I spent 10 weeks this summer building a Monte Carlo simulation to characterize electron dynamics in nanowires, to help us better understand how electrons behave under various material conditions at nanoscale dimensions. My post-doc mentor, Igor, created the basic framework and helped me build and test the simulation. I ended up with some pretty cool results. I reproduced the negative differential resistance phenomenon in GaAs and matched the experimental scattering rate data surprisingly accurately. The graphic below is a visualization (created in Mathematica) of a single electron trajectory in a GaAs nanowire.

The lucky electron is injected at the solid black ball and bounces around for a while under the influence of probabilistic scattering mechanisms, gaining kinetic energy (shown as a black-to-red gradient), and finally escapes into free space at the solid red ball.
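For readers curious what such a simulation looks like in miniature, here's a deliberately simplified single-electron Monte Carlo sketch of my own. It bears only a family resemblance to the real thing: I assume a single parabolic valley, a constant scattering rate, and full thermalization at every scattering event, whereas the actual GaAs simulation used energy-dependent rates for multiple scattering mechanisms and valleys (which is what produces effects like negative differential resistance).

```python
import math
import random

Q = 1.602e-19              # electron charge (C)
M = 0.067 * 9.109e-31      # GaAs Gamma-valley effective mass (kg)
KT = 1.381e-23 * 300       # thermal energy at 300 K (J)
GAMMA = 1e13               # total scattering rate (1/s), assumed constant
E_FIELD = 1e5              # electric field along x (V/m)

def drift_velocity(flights=50000, seed=1):
    """Estimate the average drift velocity (m/s) of a single electron."""
    rng = random.Random(seed)
    sigma = math.sqrt(KT / M)     # thermal velocity spread per component
    accel = Q * E_FIELD / M       # acceleration along the field
    x = t = 0.0
    vx = rng.gauss(0.0, sigma)
    for _ in range(flights):
        # Free-flight time drawn from an exponential distribution
        dt = -math.log(1.0 - rng.random()) / GAMMA
        x += vx * dt + 0.5 * accel * dt * dt   # ballistic flight in the field
        t += dt
        # Scattering event: assume full thermalization (velocity re-drawn from
        # a thermal distribution) -- a crude stand-in for the real
        # energy-dependent phonon and impurity scattering mechanisms.
        vx = rng.gauss(0.0, sigma)
    return x / t

# Analytic drift for this toy model is qE / (m * GAMMA) ~ 2.6e4 m/s;
# the Monte Carlo estimate should land close to that, with statistical noise.
print(f"{drift_velocity():.2e} m/s")
```

The real simulation tracked full 3D trajectories (hence the visualization above), boundary scattering at the nanowire surface, and escape into vacuum, but the skeleton—free flight, random scattering time, velocity update, repeat—is the same.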

I got really lucky this summer, with a great mentor who wanted me to learn and a meaningful project in a high-potential field that might have shifted my entire academic and career trajectory toward grad school and solar energy research. That said, I’m still exploring other interests, and entrepreneurship still holds a fundamental appeal to me, so who knows where that combination will lead me? At the end of the summer, I got to give a couple presentations, one to my lab group and one to the entire REU program, advisors, and guests. I had a good time with both, and I’m excited to keep working on the PETE project as the new school year starts.

One of the greatest things about research, especially engineering research, is the flexibility that you often have with your work environment. Maybe it’s because they didn’t want to waste precious desk space in Allen on me, but I ended up working from my dorm, from the library, and from just about anywhere else on campus with an internet connection (and at Stanford, that’s pretty much everywhere). I could, and often did, wake up at 10PM and still get more done than a 9-to-5er by working on my own schedule, at times when I was most efficient, including sometimes late into the night. The 8-hour workday and Monday-to-Friday workweek simply didn’t exist––I might work 13 hours one day, 6 the next, a few hours here and there on a Saturday––but when something needed to be done, I got it done. If a friend needed a 4th man to fill out a beach volleyball team, I was there. And I still found time to read a couple books, go to the beach with friends, keep up my running, and have the summer of a lifetime. And although the task may be harder, the prospect of starting my own company holds a similar allure. After all, when you truly care about and believe in the meaning of your work, why wouldn’t you want to spend as much time with it as it takes to succeed?