Electricity is a high quality form of energy. It can carry vast amounts of energy across continents or allow the fine-grained control of information. The high quality nature of electricity makes it highly substitutable. Electricity allows the decarbonization of transport and heating. Low carbon electricity can power our cars or heat our homes.

Unfortunately, low carbon electricity resources are themselves not easily substitutable. Different low carbon resources offer fundamentally different value to the grid. To use one resource in place of another is possible, but ends up being more expensive the more substitution you do.

Jenkins reports that progress in variable renewables is roughly on track, but progress in flexible base resources is behind. Mature technologies such as nuclear or CCS are largely stagnant. Newer technologies such as underground storage or electricity to gas are unproven on the scales required to make a real difference.

The troubles of nuclear in the West are well documented. Today large scale nuclear plants are only being built in planned economies (e.g. China & the Middle East). Jenkins notes that investors struggle to handle the large absolute amounts of capital required to build nuclear in market-based economies, even if the $/GW of a project is attractive.

Today’s low interest rates should make investing in nuclear more attractive, yet nuclear capacity in the West is shrinking. The West has largely forgotten how to build nuclear on time and on budget, with South Korea becoming the world leader in nuclear builds.

Smaller nuclear plants can help make the investment easier to swallow. Smaller plants also reduce construction and financing risks, and allow manufacturing to be done in factories (rather than be assembled on site).

Variable renewables peak

Jenkins shows the optimal relative mixture of low carbon resources as a function of carbon limits. Figure 1 below shows that initially it is optimal to focus on increasing the relative share of variable renewables on the grid.

It also shows that later on the grid requires other types of resources to deliver the lowest cost electricity supply. Eventually the costs of operating a variable renewable grid become so large that more expensive but dispatchable low carbon generation (i.e. nuclear) becomes the optimal economic decision.

Figure 1 – It’s not a straight line to zero

This means that we must continue to develop, support and improve fast-burst and flexible low carbon resources alongside variable renewables. Not because we need them today but because we need them tomorrow.

Value versus cost

Jenkins explains the historical mental model for supporting clean technology. Subsidies allow a technology to reduce its costs and move up its experience curve. The path often includes developing economies of scale (i.e. in manufacturing or supply chains) and the accumulation of iterative ‘learning by doing’ improvements. Once the cost of the technology is low enough, the technology can stand on its own two feet without subsidy.

The new mental model is that as renewable penetration increases, we see both a decrease in cost and a concurrent decrease in value. Because renewables all tend to generate at the same time, the oversupply during these periods drives down electricity prices. This driving down of prices is a signal that the grid isn’t valuing generation during these time periods. If the value of the electricity generated by renewables is very low, then even very cheap plants won’t get built.

Diversification offers the lowest cost decarbonization

In the context of the type of optimization models Jenkins develops this makes complete sense. Introducing additional constraints into a linear program can only ever produce the same or a worse result.

Not allowing nuclear leaves us in an equal or worse position than allowing nuclear. If a nuclear-free pathway were optimal, then both models would find the same solution. The fact is that they don’t. Jenkins finds a strong consensus in the literature that a diversified mix of low carbon resources offers the cheapest deep decarbonization. Dispatchable low carbon resources significantly reduce the cost and technical resource requirements of deep decarbonization.

Not using nuclear would require a massive (roughly double) build out of variable renewables capacity. Not only do we need more capacity, but the global capacity factor [MWh / MWh maximum] will be low. The capacity will also have a very high cost per unit of utilised output [$/MWh]. The low energy density of variable renewables also means greater land use impacts for a variable renewables only solution.

A variable renewables only solution also requires long duration energy storage. In a review of the literature, Jenkins finds a seasonal storage requirement of 8-16 weeks’ worth of US electricity consumption. To put this in context – the ten largest pumped hydro plants in the US currently have around 43 minutes of storage.

All is not well with the clean energy transition. Positive commentary on the progress of the transition is frustrating. 2017 saw a 2% rise in global carbon emissions, and the concentration of CO2 has surpassed 400 ppm for the first time in several million years.

The motivation for the clean energy transition could not be stronger – preventing dangerous climate change. Yet even the potential violence of climate change is not counteracting historical realities.

This post highlights four reasons why the clean energy transition is failing. All four reflect the experience of previous transitions. All four are unwelcome.

I’m not arguing against the need for the clean energy transition. It’s something that has to happen a lot faster. I’m showing some of the truths behind why this transition is (and will continue to be) difficult. Only by understanding these historical realities can we take action to counteract them.

The primary source for these ideas is Vaclav Smil’s excellent work on energy transitions (book, lecture and another lecture). Smil is my favorite energy writer – prolific, confidently numeric and intelligently contrarian. Smil’s work is, for me, the best writing on energy. I’ve also previously written about Smil’s work on carbon capture and storage.

The Four Inconvenient Truths of Energy Transitions

Energy transitions are key to the development of civilization. Muscle and wood powered our early days – today we burn coal and gas to drive turbines, oil to drive cars, and can harness the energy that powers the stars.

Now we are moving toward clean and smart technologies – wind turbines, solar panels, energy storage and intelligent operation. This transition is both very similar and very different to past transitions.

Three of the four inconvenient truths are ways in which this transition is like the past:
one – energy transitions are slow (and getting slower)
two – energy transitions are additive (old fuels don’t go away)
three – energy transitions are sequential and high variance (especially on small scales)

The fourth truth is one in which the clean energy transition departs from previous transitions:
four – energy transitions enable new utility

The First Inconvenient Truth – energy transitions are slow

There are two reasons for the slowing down. First, as the absolute size of our energy consumption increases, the relative effect of adding more is smaller. The massive growth in global energy consumption means that effort today has a smaller relative effect than it would have had in the past.

Second, the technical challenges of using the new energy source increase. Moving from wood to coal was a reasonably easy transition – both are solid fuels that can be transported, handled and burnt using similar techniques. Using oil required building a massive global upstream and downstream infrastructure, cars and roads to drive on. Using gas required the development of gas turbines – one of the most complex machines humanity has ever created.

The clean energy transition is full of technical challenges. Clean energy generation is low power density (W/m2) – meaning we need to build wind & solar across vast areas of land. It also requires transmission lines, energy storage and intelligent operation, to counteract the disadvantages of geographically dispersed, low capacity factor and intermittent renewables.

What this means for the clean energy transition – it’s going to take a long time.

The Second Inconvenient Truth – energy transitions are additive

Of the four truths, this is the most inconvenient. As civilization progresses we increase both the amount and quality of energy we use. But each time we transition we don’t replace old energy sources – we add new energy sources on top. Older technologies take a long time to go away. We are still building coal-fired generators today and will likely continue to for a long time.

Figure 1 below shows the history of US primary fuel consumption. Note how US coal consumption continued to rise all the way through to the start of the 21st century. Each energy transition has not displaced coal – instead, newer fuels have added to existing coal consumption.

As global energy demand increases, renewables will be a significant part of the marginal increase. But it’s the older fossil fuel generation that needs to go – history shows us that this doesn’t happen quickly. One reason for this is that the economics of a technology improve as it matures.

Improvements in core technology, building of supply chains and know-how mean that older technologies are often efficient, cheap to build and cheap to maintain. A technology having a track record of performance and lifecycle cost also makes it more attractive for investors.

Diesel generators (an 1890s technology) are a great example of this. Diesel generators are reasonably efficient, quick and cheap to build, with a well-understood maintenance schedule.

The Third Inconvenient Truth – energy transitions are sequential and high variance

The dependence on doing the right things in the right order means progress is not guaranteed. Coal dominated China, nuclear powered France and hydro blessed New Zealand show that energy systems evolve very differently.

When we have specific requirements about where our energy system needs to go, getting what we want requires getting things right all across the board.

The inevitability of technological progress in the economics of wind & solar is sometimes confused with the inevitability of deploying wind & solar. The reality is that even as clean technology improves, there is no guarantee that our energy system will decarbonize.

There are a multitude of other, equally important things that need to happen. For example, without the correct alignment of incentives through rate structures even very cheap batteries won’t have an impact. We cannot only rely on technology improving. Without everything else in the right place at the right time we won’t get where we need to be.

What this means for the clean energy transition – there is no guarantee things will move in the correct way.

The Fourth Inconvenient Truth – energy transitions enable new utility

The first three truths are ways in which the clean transition will be like the past. The final truth is a reality of the clean energy transition that moves against past trends.

Why do we spend the time and money to transition to new energy sources? New sources of energy allow us to do things we couldn’t do before.

Coal enabled a revolution in manufacturing, oil & gas enabled revolutions in transportation. Energy transitions enable new utility by using higher quality fuels. One measurement of energy quality is energy density – how much energy we can squeeze into a given mass or volume.

It is interesting that historical transitions have taken us from solids to liquids to gases. Volumetric energy density (MJ/m3) can be as important as energy density on a mass basis (MJ/kg). It’s difficult to compare renewables with fossil fuels on an energy density basis as renewables don’t consume fuel. Yet it is evident that water, sunlight and wind are less dense forms of energy than burning oil or gas.

Even if we ignore that clean technologies reverse the energy density trend, the electricity generated by clean technologies is still the same as what a gas turbine generates today. The clean energy transition lacks a killer app.

We aren’t getting any major new form of utility – only a cleaner version of what we already have. The cleaner nature of wind & solar is still worth working and paying for. But we are without a key driving force that helped to power previous transitions – the driving force of people wanting to heat their homes, power factories and fly around the globe.

The only thing that comes to mind is the role that inverter-based renewables & storage can play in providing grid services such as fast frequency response. It’s becoming clear that inverters are actually superior to synchronous generators in providing these services. However this is a minor advantage compared to the coal-fired revolution in manufacturing and the oil-fired revolution in mobility.

What this means for the clean energy transition – a key driving force that powered previous transitions won’t be helping this time.

I’m not arguing against the need for the clean energy transition. The need is urgent. The purpose of this post is to shine light on reality.

Only by acknowledging reality can we overcome the inconvenient and unwanted realities of energy transitions.

Part of this is driven by misaligned incentives. Even with aligned incentives dispatching a battery is still challenging. Getting batteries supporting the grid requires progress in multiple areas:
– decreasing total system installation costs
– aligning incentives with prices that reflect the value of the battery to the grid
– intelligent operation

This work supports operating battery storage with intelligence.

reinforcement learning in energy

Reinforcement learning can be used to intelligently operate energy systems. In this post I show how a reinforcement learning agent based on the Deep Q-Network (DQN) algorithm can learn to control a battery.

It’s a simplified problem. The agent is given a perfect forecast of the electricity price, and the electricity price itself is a repetitive profile. It’s still very exciting to see the agent learn!

In reality an agent is unlikely to receive a perfect forecast. I expect learning a more realistic and complex problem would require a combination of:
– tuning hyper parameters
– a higher capacity or different structure neural network to approximate value functions and policies
– more steps of experience
– a different algorithm (A3C, PPO, TRPO, C51 etc.)
– learning an environment model

The agent and environment I used to generate these results are part of an open source Python library. energy_py is a collection of reinforcement learning agents, energy environments and tools to run experiments.

the agent – DQN

DeepMind’s early work with Q-Learning and Atari games is foundational in modern reinforcement learning. The use of a deep convolution neural network allowed the agent to learn from raw pixels (known as end to end deep learning). The use of experience replay and target networks improved learning stability, and produced agents that could generalize across a variety of different games.

The initial 2013 paper (Mnih et al. 2013) was so significant that in 2014 DeepMind was purchased by Google for around £400M. This was for a company with no product, no revenue, no customers and only a few employees.

The DQN algorithm used in the second DeepMind Atari paper (Mnih et al. 2015) is shown below.

In Q-Learning the agent learns to approximate the expected discounted return for each action. The optimal action is then selected by argmaxing over Q(s,a) for each possible action. This argmax operation allows Q-Learning to learn off-policy – to learn from experience generated by other policies.
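A tabular sketch makes the argmax operation concrete (the variable names here are my own; the agent in this post approximates Q(s, a) with a neural network rather than a table):

```python
import numpy as np

# a sketch of tabular Q-Learning - the agent in this post replaces
# the table with a neural network
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # the max over next-state action values makes Q-Learning off-policy -
    # the target ignores how the behaviour policy actually acted
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

def select_action(Q, s):
    # greedy action selection by argmaxing over Q(s, a)
    return int(np.argmax(Q[s]))
```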

Experience replay makes the training data closer to independent and identically distributed by sampling randomly from the experience of previous policies. It is also possible to use human-generated experience with experience replay. Experience replay can be used because Q-Learning is an off-policy learning algorithm.
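A minimal replay buffer might look like the following (a sketch, not the energy_py implementation):

```python
import random
from collections import deque

class ReplayBuffer:
    """A minimal experience replay buffer."""

    def __init__(self, size):
        # old experience is evicted automatically once the buffer is full
        self.experiences = deque(maxlen=size)

    def add(self, state, action, reward, next_state, done):
        self.experiences.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # uniform random sampling breaks the temporal correlation between
        # consecutive transitions, making batches closer to i.i.d.
        return random.sample(self.experiences, batch_size)
```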

A target network is used to improve learning stability by creating training Bellman targets from an older copy of the online Q(s,a) network. You can either copy the weights over every n steps or use a weighted average of previous parameters.
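Both target network update schemes can be sketched with plain numpy arrays standing in for network weights (the function names are my own, not energy_py or TensorFlow API):

```python
import numpy as np

def hard_update(online_params):
    # copy the online network weights over every n steps
    return [p.copy() for p in online_params]

def soft_update(online_params, target_params, tau=0.001):
    # exponentially weighted average of previous parameters
    return [tau * o + (1 - tau) * t
            for o, t in zip(online_params, target_params)]
```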

One of the issues with Q-Learning is the requirement of a discrete action space. In this example I discretize the action space into 100 actions. The balance with discretization is:
– too low = control is coarse
– too high = computational expense
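The trade-off can be made concrete (a sketch assuming hypothetical battery power limits of ±2 MW):

```python
import numpy as np

# discretizing a continuous action space - the -2 MW to 2 MW limits
# here are assumed for illustration
def discretize(low, high, num_actions):
    return np.linspace(low, high, num_actions)

# coarser control with fewer actions; more network outputs
# (and compute) with more
actions = discretize(low=-2.0, high=2.0, num_actions=5)
```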

I use a neural network to approximate Q(s,a). I’m using TensorFlow as the library to provide the machinery for using and improving this simple two layer neural network. Even though I’m using the DQN algorithm I’m not using a particularly deep neural network.

I use ReLU activations between the layers and no batch normalization. I preprocess the inputs (removing the mean and scaling by the standard deviation) and targets (min-max normalization) using energy_py Processor objects. I use the Adam optimizer with a learning rate of 0.0025.
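The two preprocessing schemes can be sketched as follows (the energy_py Processor objects wrap logic like this; the function names here are my own):

```python
import numpy as np

def standardize(x):
    # remove the mean and scale by the standard deviation
    return (x - x.mean()) / x.std()

def min_max_normalize(x):
    # squash values into the range [0, 1]
    return (x - x.min()) / (x.max() - x.min())
```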

The network has one output node per discrete action – since I discretize each of the two action dimensions (charge and discharge) into 5 discrete actions, there are 10 total discrete actions and 10 output nodes in the neural network.

There are a number of other hyperparameters to tune such as the rate of decay of epsilon for exploration and how frequently to update the target network to keep learning stable. I set these using similar ratios to the 2015 DeepMind Atari paper (adjusting the ratios for the total number of steps I train for each experiment).

the environment – battery storage

The battery storage environment I’ve built is the application of storing cheap electricity and discharging it when it’s expensive (price arbitrage). This isn’t the only application of battery storage – Tesla’s 100 MW, 129 MWh battery in South Australia is being used for fast frequency response with impressive results.

I’ve tried to make the environment as Markov as possible – given a perfect forecast enough steps ahead, I think the battery storage problem is close to Markov. The challenge in practice comes from having to use imperfect price forecasts.

The state space for the environment is the true price of electricity and the charge of the battery at the start of the step. The electricity price follows a fixed profile defined in state.csv.

The observation space is a perfect forecast of the electricity price five steps ahead. The number of steps ahead required for the Markov property will depend on the profile and the discount rate.

The action space is a one dimensional array with two elements – the first being the charge and the second the discharge. The net effect of the action on the battery is the difference between the two.

The reward is the net rate of charge or discharge multiplied by the current price of electricity. The rate is net of an efficiency penalty applied to charging electricity. At a 90% efficiency a charge rate of 1 MW for one hour would result in only 0.9 MWh of electricity stored in the battery.
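The reward calculation can be sketched as follows (a simplified sketch assuming hourly steps, not the exact energy_py implementation):

```python
def battery_step(charge_mw, discharge_mw, price_per_mwh, efficiency=0.9):
    # the net effect of the two-element action on the battery
    net_mw = charge_mw - discharge_mw
    # electricity exchanged with the grid over one hourly step
    grid_mwh = net_mw
    if net_mw > 0:
        # the efficiency penalty applies only to charging - at 90%
        # efficiency, charging at 1 MW for an hour stores only 0.9 MWh
        stored_mwh = net_mw * efficiency
    else:
        stored_mwh = net_mw
    # negative reward when buying electricity, positive when selling
    reward = -grid_mwh * price_per_mwh
    return stored_mwh, reward
```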

results

The optimal operating strategy for energy storage is very application dependent. Given the large number of potential applications of storage this means a large number of optimal operating patterns are likely to exist.

The great thing about using reinforcement learning to learn these patterns is that we can use the same algorithm to learn any pattern. Building virtual environments for all these different applications is the first step in proving this.

further work

Building the energy_py library is the most rewarding project in my career so far. I’ve been working on it for around one year, taking inspiration from other open source reinforcement learning libraries and improving my Python & reinforcement learning understanding along the way. My TODO list for energy_py is massive!

There is lots of work to do to make the DQN code run faster. No doubt I’m making silly mistakes! I’m using two separate Python classes for the online and target networks – it might be more efficient to have both networks be part of the same object. I also need to think about combining graph operations to reduce the number of sess.run() calls. Prioritized experience replay is another option to improve sample efficiency.

Less Markov & more realistic state and observation spaces – giving the agent imperfect forecasts. Multiple experiments across different random seeds.

Test ability to generalize to unseen profiles. This is the most important one. The current agent has the ability to memorize what to do (rather than understand the dynamics of the MDP).

In 2017 the US deployed around 700 MWh of battery storage. This corresponds to around 300 MW. This is a useful ratio to understand – the ratio of capacity to rate of charge/discharge. I think a useful rule of thumb could be around 2:1 (capacity : rate).
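A quick check of that ratio:

```python
# capacity-to-rate ratio of 2017 US battery deployments
capacity_mwh = 700
rate_mw = 300
hours_of_storage = capacity_mwh / rate_mw
# roughly 2.3 hours at full rate - close to the 2:1 rule of thumb
```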

Smart meter deployment is sitting at around 50% in the US. Unlocking value of these smart meters requires smarter pricing (i.e. time of use).

Battery storage penetration depends both on technology and regulation. Regulation needs to support cost reflective and variable pricing.

Disappearance of gas peaking plants hurts gas turbine suppliers more than gas suppliers. Peaking units don’t consume a large amount of gas, but they do consume a large amount of capital.

Importance of considering total project costs for batteries. Cost of battery storage technology is decreasing faster than balance of plant costs.

I’ve given this course to three batches at Data Science Retreat in Berlin and once to a group of startups from Entrepreneur First in London. Each time I’ve had great questions, kind feedback and improved my own understanding.

I also meet great people – it’s the kind of high-quality networking that is making a difference in my career. I struggle with ‘cold networking’ (i.e. drinks after a Meetup). Teaching and blogging are much better at creating meaningful professional connections.

I’m not an expert in reinforcement learning – I’ve only been studying the topic for a year. I try to use this to my advantage – I can remember what I struggled to understand, which helps design the course to get others up to speed quicker.

Below I model the economics of a worker displacement project that works for everyone. To combat inequality, the business and employee both share the project saving. I will show that by sharing the project saving both the business and former employee can end up with acceptable outcomes.

By worker displacement projects I mean any project where the employee loses his job. Automation and artificial intelligence projects are both worker displacement projects.

The model below is simple – this is a good thing. A model like this is designed to provoke thought and hope in the reader.

I first assume an annual savings breakdown of the worker displacement project. We are displacing an employee that costs the business $40k. This includes tax that is not paid to the employee and any marginal expenses associated with employment.

I then assume a small maintenance cost increase for the business and a saving from efficiency improvements. All three of these net out at an annual saving of $50k for the business from this project.

If we decide to share this saving 50/50, the business ends up only saving $25k. This will double the project payback period. As we expect automation or AI projects to have decent paybacks (i.e. 2 years or less) we would expect the new payback period to be at most 4 years. This is still likely to be an acceptable use of capital. It depends on variables such as interest rates and alternative projects the company could finance.
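The arithmetic above can be sketched as a simple model (the maintenance and efficiency figures, and the $100k capital cost, are my own assumptions chosen to match the $50k saving and a two year base payback):

```python
# annual saving breakdown - maintenance and efficiency figures are
# assumptions chosen to net out at the $50k in the text
wage_saving = 40_000        # fully loaded cost of the displaced employee
maintenance_cost = -10_000  # assumed increase in machine maintenance
efficiency_saving = 20_000  # assumed saving from efficiency improvements
annual_saving = wage_saving + maintenance_cost + efficiency_saving

def payback_years(capital_cost, annual_saving, share_kept=1.0):
    # sharing the saving 50/50 halves what the business keeps,
    # doubling the simple payback period
    return capital_cost / (annual_saving * share_kept)

capital_cost = 100_000  # assumed project capital cost
base_payback = payback_years(capital_cost, annual_saving)         # 2.0 years
shared_payback = payback_years(capital_cost, annual_saving, 0.5)  # 4.0 years
```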

The net financial impact for the employee is more complex than just the lost wages. We would also expect a small decrease in work-related expenses. Our employee also receives a share of the saving from his old employer.

The net result is no financial impact for the employee from being displaced by a machine. The business is left with a project that, while not as attractive as it could be, is still acceptable for many businesses as a use of capital. Both sides end up with acceptable outcomes.

A key assumption here is the breakdown of the project savings. Technology will improve the ratio of maintenance costs to efficiency improvements. Efficiency improvements should increase as the projects enable more machine intelligence (rather than pure automation based on human heuristics).

We could also see reductions in machine maintenance costs. AlphaGo Zero showed an impressive decrease in computation costs over its previous iteration. It would be reasonable to expect that machine O&M costs will decrease over time.

The point of this analysis is not to show exactly zero net impact. It could be possible that the employee would need to accept a small decrease in net income. Any impact needs to be offset against the non-financially quantifiable benefits and drawbacks that also occur when a worker is displaced.

It’s not clear whether the non-financial impacts would be net positive or negative. Having more choice over how you spend your time might be offset by the lack of intellectual or social stimulation we get from our work today.

The specific mechanism for value sharing requires thought. The real mechanism for sharing the saving will be complex to implement in the real world. One mechanism would be a universal basic income funded by taxes on projects that displace workers. This would most likely be a tax on the capital cost, as quantifying savings would be more challenging.

What I am trying to show is that it is possible to share value, rather than default to the business taking all of the value of the project and leaving the employee without any significant source of income.

The capitalist default of today is not acceptable due to the inequality it creates.

We must share the benefit of automation and machine intelligence throughout society. The key to doing this is to balance an acceptable return on capital for the business with the quality of life of society.

This post looks at Peter Diamandis’ talk Demonetizing Everything: A Post Capitalism World. The central premise of the talk is demonetization – technology is making utility cheaper.

Diamandis highlights dematerialization as one driving force behind demonetization. Put simply – technology allows us to use less stuff to deliver more utility, making that utility cheaper.

Diamandis gives a great example of this dematerialization-leads-to-demonetization trend using the smartphone. Diamandis estimates that the functionality of a $50 smartphone today would have cost millions 20 years ago. This is a direct result of the dematerialization of functionality from hardware to software.

Yet when Diamandis got to his section on energy I was left quite frustrated. I agree with the central premise that dematerialization leads to demonetization. What I disagree with is that dematerialization is occurring in our transition to renewables.

Because the energy density of renewable resources is so much lower than that of fossil fuels, we actually require more steel, concrete and plastic per unit of energy generated from wind & solar.

I also found the talk too positive – Diamandis makes it sound like everything is OK, coal has been defeated and it’s all smooth sailing from here to a clean & decarbonized world. While we are making progress, it is too slow – and carbon emissions are still rising.

Below I have a look at a few of the points Diamandis raised in more detail.

2016: Renewables Cheaper Than Coal

World Economic Forum Reports: Solar and wind now the same price or cheaper than new fossil fuel capacity in over 30 countries. Energy experts think coal will not recover.

You only need to look at one of the figures from that exact report (Figure 1 below) to see that it’s still happy days for coal. Coal is still by far the dominant fuel globally – does it really look like wind & solar have dealt a killing blow from which coal will never recover?

It also doesn’t matter if wind & solar are cheap if we aren’t installing more capacity. The same WEF report shows that total investment ($USD billion per annum) in renewables has levelled off since 2011.

Due to the price decrease we will still be installing more renewables at the same level of investment, but it’s the combination of price and investment that gives us what we really care about – annual capacity installed.

I also find the use of ’30 countries’ a misleading use of statistics. Are these 30 small, sunny countries which are perfect for solar? If the lessons learnt in these 30 countries don’t transfer to China, India and the USA then it doesn’t really matter in terms of fighting climate change.

It would be more relevant to look at how many countries solar was cheaper than coal in, and then to weight each country by population or total energy consumption. This would give a more accurate picture of how solar is doing in displacing coal. More accurate still would be to look at Figure 1.

The Global Status Report for Renewables states that renewable energy now accounts for 25% of the world’s power

The problem here is including all renewables together. The distortion comes from including hydropower with all other renewables.

The exact numbers here aren’t important – what’s important is that the viewer of the presentation is left thinking that renewables are doing fantastic when in fact wind & solar still make up a very small portion of global generation. The fight is not over – in fact we aren’t even winning.

Costa Rica operating on 100% renewables for over 300 days

I actually already addressed this misconception in an earlier post (Composition, not consumption). Costa Rica is lucky to have a very high penetration of hydroelectricity (around 80%). Hydroelectric dams have free energy storage built in – this allows the grid to easily deal with the intermittency of renewables.

Most countries do not have the luxury of a large hydro resource, so using Costa Rica as an example of how close we are to going 100% renewable globally is misleading. We require different techniques and technologies to decarbonize the rest of the world.

Efficient use of energy must be the logical first step for anyone trying to slow climate change. The benefits of not wasting energy are so evident that it should be a high priority for our civilization. Unfortunately it’s not quite that simple.

The Coal Question (1865) introduced what we now call the Jevons paradox – that technological progress in the efficiency of using a resource leads to increases in resource consumption.

The Jevons paradox is an inconvenient truth for energy efficiency. It’s not that efficiency doesn’t work – we do use less primary energy per unit of utility. It’s what happens afterwards, where the gains in efficiency are cancelled out by more global effects.

Let’s look at some of the possible first, second and third order effects (thanks to Ray Dalio for this mental model). We will use gas-fired heating as the example.

The first order effect of improving heating efficiency is that less gas is required to supply the same amount of heat. This effect is positive – we don’t burn as much gas to provide the same utility.

A secondary effect of improving heating efficiency could be that we now get more heat for the same amount of money. We spend the same amount, we get more heat – but no carbon saving. We can afford to heat bigger homes for the same amount of gas.

A third order effect could be that increased efficiency leads to less gas consumption – meaning saved carbon and money. The question is then what does the economy do with the saved money?

If the saving is spent on taking a long haul holiday, we could actually see an increase in global carbon emissions. We improve the efficiency of supplying heat but overall as a civilization we burn more carbon. Alternatively if the saving is spent on building cleaner energy generation then even increases in utility could lead to a carbon saving.
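A toy carbon accounting of the heating example shows how the direction of the net effect depends entirely on the assumptions (every number below is an illustration, not data):

```python
# a toy model of the heating example - all figures are assumptions
GAS_KGCO2_PER_MWH = 200   # assumed emissions factor for gas heat
GAS_PRICE_PER_MWH = 30    # assumed gas price in $/MWh
baseline_heat_mwh = 100

# first order: 20% better efficiency, same heat, less gas burned
gas_saved_mwh = baseline_heat_mwh * 0.2
first_order_saving_kgco2 = gas_saved_mwh * GAS_KGCO2_PER_MWH

# third order: the money saved is spent elsewhere in the economy -
# the net effect depends on the carbon intensity of that re-spending
money_saved = gas_saved_mwh * GAS_PRICE_PER_MWH
respend_kgco2_per_dollar = 8  # assumed - e.g. spent on long haul flights
rebound_kgco2 = money_saved * respend_kgco2_per_dollar
net_saving_kgco2 = first_order_saving_kgco2 - rebound_kgco2
# negative under these assumptions: total emissions increase
```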

It’s very difficult to generalize about the effect the Jevons paradox has across different consumers, economies and technologies. Measuring the first order effects of energy efficiency projects is notoriously difficult – let alone any second or third order effects.

It’s important to note that energy efficiency is still worthwhile. It allows economic progress – this alone is worth doing. Yet for someone purely concerned with decarbonization, energy efficiency may not be the correct first option.

The Jevons paradox is not guaranteed to occur. Any negative second or third order effects of energy efficiency can be smaller than the efficiency saving. It does suggest, however, that making sure any energy we use comes from as clean a primary source as possible is a safer bet than trying to use less dirty energy.

Perhaps you’ve had critics of the energy transition shout “inertia” at you. Perhaps it’s keeping you up at night. Is our dream of a clean energy future impossible? This article will reassure you that losing inertia is something clean energy technologies can deal with.

We are transitioning to a very different electricity system. We are building small-scale, clean and asynchronous generators. Wind turbines spin at variable speeds, much slower than synchronous generators. Photovoltaic solar panels and batteries have no moving parts at all.

A key difference between these two systems is the inertia of the generators. Fossil fuel generators possess a lot of inertia due to the heavy, rapidly spinning turbine connected to the alternator. Once the turbine is spinning it’s hard to get it to stop – in the same way that it’s hard to stop a truck traveling at speed.

The speed at which the shaft & alternator spins is directly proportional to the grid frequency. In fact the grid frequency is the result of the speed at which all these synchronous generators spin. The frequency of electricity generated by a synchronous generator is given by f = (rpm × number of poles) / 120.
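This relationship (frequency = rpm × poles / 120, the standard synchronous machine formula) can be checked against a couple of familiar cases:

```python
def sync_frequency_hz(rpm, poles):
    """Frequency of a synchronous generator: f = rpm * poles / 120."""
    return rpm * poles / 120

# A 2-pole generator at 3,000 rpm produces 50 Hz (Europe)
print(sync_frequency_hz(3000, 2))  # 50.0
# A 4-pole generator at 1,800 rpm produces 60 Hz (North America)
print(sync_frequency_hz(1800, 4))  # 60.0
```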

The grid is an interconnected system – changing grid frequency requires changing the speed of every generator connected to the grid. This interrelationship becomes useful during times of supply & demand mismatches. Any imbalance needs to work to change the speed at which every generator on the grid spins. If these generators possess a lot of inertia, then the imbalance has to work harder to change the grid frequency.

This is the value of inertia to the grid – it buys the grid operator time to take other actions such as load shedding or calling upon backup plant. These other actions are still needed – inertia won’t save the grid, just buy time for other actions to save the grid.
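A rough sketch of why inertia buys time, using the standard swing equation approximation for the initial rate of change of frequency (all numbers are illustrative, not from any real grid):

```python
def initial_rocof_hz_per_s(imbalance_mw, f0_hz, inertia_h_s, system_mva):
    """Initial rate of change of frequency after a sudden power imbalance,
    from the swing equation approximation: df/dt = f0 * dP / (2 * H * S)."""
    return f0_hz * imbalance_mw / (2 * inertia_h_s * system_mva)

# Losing a 1,000 MW plant on a 50 Hz grid of 100,000 MVA (illustrative)
high_inertia = initial_rocof_hz_per_s(1000, 50, 5.0, 100_000)  # 0.05 Hz/s
low_inertia = initial_rocof_hz_per_s(1000, 50, 2.0, 100_000)   # 0.125 Hz/s
# Lower inertia -> frequency falls faster -> less time for the operator to act
print(high_inertia, low_inertia)
```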

So now we understand that fossil fuel generators have inertia and how it is valuable to the grid (it buys the system operator time during emergency events). What does this mean for our energy transition? Do we need to keep around some fossil fuel generators to provide inertia in case something goes wrong? The answer is no.

Modern wind turbines can draw upon kinetic energy stored in the generator and blades to provide a boost during a grid stress. This ‘synthetic inertia’ has been used successfully in Canada, where wind turbines were able to supply a similar level of inertia to conventional synchronous generators.

Photovoltaic solar and batteries also have a role to play. Both operate with inverters that convert DC into AC electricity. The solid-state nature of these devices means that they operate without any inertia. Yet this same solid-state nature gives inverters the ability to change operation quickly and in a highly controllable way. Inverters can react fast to deliver whatever kind of support the grid needs during stress events.

Clean technologies are ready to create a new electricity system. Now we need to make sure we incentivize the technology that our grid needs. Market incentives should support technologies that can supply inertia on our cleaner grid. The level of support could be set so that the inertia on the grid remains at the same level as on our old fossil fuel based grid. That way, no one can complain.

1 – Setup

Regarding Python 2 vs Python 3 – if you are starting out now it makes sense to learn Python 3. It’s worth knowing what the differences are between the two – once you’ve made some progress with Python 3.

The installation process is pretty straightforward – you can check that Anaconda installed correctly by typing ‘python’ into Terminal or Command Prompt. You should get something like the following:

2 – pip

pip is a way to manage packages in Python. pip is run from a Terminal. Below are the pip commands I use the most.

To install a package (Note that the -U argument forces pip to install the upgraded version of the package)
pip install pandas -U

To remove a package
pip uninstall pandas

To print all installed packages
pip freeze

3 – Virtual environments

Virtual environments are best practice for managing Python on your machine. Ideally you should have one virtual environment for each project you work on.

This gives you the ability to work with different versions of packages in different projects and to understand the package dependencies of your project.

There are two main tools for managing virtual environments – virtualenv and conda. Personally I use conda (as I always use the Anaconda distribution of Python).
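As a sketch of a typical conda workflow (the environment name here is hypothetical, and older conda versions use `source activate` instead of `conda activate`):

```shell
# Create an environment named 'my_project' with a specific Python version
conda create --name my_project python=3.6
# Activate it - pip and conda installs now go into this environment
conda activate my_project
# Install packages into the active environment
pip install pandas
# Deactivate when done
conda deactivate
```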

One cool trick is that once you activate your environment, you can start programs such as Atom or Jupyter and they will use your environment.

For example if you use a terminal plugin within Atom, starting Atom this way will mean the terminal uses your environment Python – not your system Python.

4 – Running Python scripts interactively

Running a script interactively can be very useful when you are learning Python – both for debugging and for getting an understanding of what is going on!
cd folder_where_script_lives
python -i script_name.py

After the script has run you will be left with an interactive console. If Python encounters an error in the script then you will still end up in interactive mode (at the point where the script broke).

5 – enumerate

Often you want to loop over a list and keep information about the index of the current item in the list.

This can naively be done by

idx = 0
for item in a_list:
    other_list[idx] = item
    idx += 1

Python offers a cleaner way to implement this

for idx, item in enumerate(a_list):
    other_list[idx] = item

We can also start the index at a value other than zero

for idx, item in enumerate(a_list, 2):
    other_list[idx] = item

6 – zip

Often we want to iterate over two lists together. A naive approach is to track an index and look up each list manually – zip gives us a cleaner way to pair the items.
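As a sketch, using two hypothetical lists:

```python
names = ['solar', 'wind', 'hydro']
capacities = [5, 10, 2]

# Naive approach - index into both lists manually
for idx in range(len(names)):
    print(names[idx], capacities[idx])

# zip pairs the items directly, with no index bookkeeping
for name, capacity in zip(names, capacities):
    print(name, capacity)
```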

7 – List comprehensions

List comprehensions are baffling at first. They offer a much cleaner way to implement list creation.

A naive approach to making a list would be

new_list = []
for item in old_list:
    new_list.append(2 * item)

List comprehensions offer a way to do this in a single line

new_list = [item * 2 for item in old_list]

You can also create other iterables such as tuples or dictionaries using similar notation.
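For example (the variable names here are just illustrative):

```python
old_list = [1, 2, 3, 4]

# Dictionary comprehension - map each item to its square
squares = {item: item ** 2 for item in old_list}

# Tuples have no comprehension literal - wrap a generator expression in tuple()
doubled = tuple(2 * item for item in old_list)
```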

8 – Default values for functions

Often we create a function with inputs that only need to be changed rarely. We can set a default value for a function by

def my_function(input_1, input_2=10):
    return input_1 * input_2

We can run this function using

result = my_function(input_1=5)

Which will return result = 50.

If we wanted to change the value of the second input we could

result_2 = my_function(input_1=5, input_2=5)

Which will return result_2 = 25.

9 – git

Git is a fantastic tool that I highly recommend using. As with Python, I’m no expert! A full write-up of how to use git is outside the scope of this article – these commands are useful to get started. Note that all of these commands should be entered in a Terminal that is inside the git repo.
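A minimal sketch of the day-to-day commands (the folder and file names are hypothetical):

```shell
# Work in a fresh folder and turn it into a git repository
mkdir my_project && cd my_project
git init
# Tell git who you are (only needed once per machine)
git config user.email "you@example.com"
git config user.name "Your Name"
# Create a file, then check which files are new or modified
echo "print('hello')" > script.py
git status
# Stage the file so it is included in the next commit
git add script.py
# Record the staged changes with a message
git commit -m "Add script"
# View the history of commits
git log
```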