The Living Smart Grid

guest post by Todd McKissick

Over the last few years, people in energy circles have begun publicly promoting what they call the smart grid. This is touted as providing better control, prediction and utilization of our nation’s electrical grid system. However, it provides few benefits to anyone except the utilities. It is expected to cost much more and to actually take away some of the convenience of having all the power you want, when you want it.

We can do better.

Let’s investigate the benefits of some changes to their so-called smart grid. If implemented, these changes will allow instant, indirect control and balance of all local grid sections while automatically keeping supply in check with demand. It can drastically cut the baseload utilization of existing transmission lines. It can provide early benefits from running in pseudo-parallel mode, with no changes at all, by simply publishing customer-specific real-time prices. Once that gains some traction, a full implementation only requires adding smart meters to make it work. Both of these stages can be adopted at any pace, and the system delivers benefits in proportion to its adoption. Since it allows demand reduction (DR) and distributed generation (DG) from any small source to compete fairly on price with the big boys, it encourages tremendous competition between both generators and consumers.

To initiate this process, the real-time price must be determined for each customer. This is easily done at the utility by breaking its costs and overhead into three categories. First, generation is monitored at its location. Second, transmission is monitored for its contribution. Both of these are being done already, so nothing new yet. Third, distribution needs to be monitored at all the nodes and end points in the last leg of the chain to the customer. Much of this is done, and the rest is being done or planned through various smart meter rollouts. Once all three of these prices are broken down, they can be applied to the various groups of customers and feeder segments. This yields a total price to each customer that varies in real time with all the dynamics built in. Simply publishing that price online signals the supply/demand imbalance that applies to each customer.
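To make the three-part breakdown concrete, here is a minimal sketch in Python. The component figures are invented examples, and `realtime_price` is a hypothetical helper, not anything a utility actually publishes:

```python
# Hypothetical sketch: a customer's real-time price as the sum of the three
# monitored components described above. All figures are invented examples.

def realtime_price(generation, transmission, distribution):
    """Total $/kWh price for one customer at one moment."""
    return generation + transmission + distribution

# A quiet hour versus a transmission-constrained hour (illustrative only):
offpeak = realtime_price(generation=0.04, transmission=0.01, distribution=0.02)
peak = realtime_price(generation=0.09, transmission=0.05, distribution=0.03)
print(f"off-peak: ${offpeak:.2f}/kWh  peak: ${peak:.2f}/kWh")
```

Publishing just this one number per customer, updated in real time, is the entire interface the rest of the system needs.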

This is where the self-correcting aspect of the system comes into play. If a transmission line goes down, the affected customers’ price will instantly spike, immediately causing loads to drop offline and storage and generation systems to boost their output. This is purely price driven, so no hard controls are sent to the customer equipment to make it happen. Should a specific load be designated critical, like a lifeline system for a person or business, it runs less risk of losing power completely but will pay an increased amount for the duration of the event. Even transmission rerouting decisions can be based on the price, allowing neighboring local grids to export their excess to aid a nearby shortfall. Should an area find its price trending higher or lower over time, the economics will easily point to what needs to be added to the system, and where. This makes forecasting the need for new equipment easier at both the utility and the customer level.
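A rough sketch of that price-driven response: every device compares the published price to its own setpoints and acts on its own, with no command from the utility. The thresholds and the `respond` helper are invented for illustration.

```python
# Hedged sketch of price-driven self-correction: each customer's equipment
# compares the published price to its own setpoints. Thresholds are invented.

def respond(price_per_kwh, shed_above=0.20, dispatch_above=0.25):
    """What one customer's equipment does at a given published price."""
    actions = []
    if price_per_kwh > shed_above:
        actions.append("shed deferrable loads")
    if price_per_kwh > dispatch_above:
        actions.append("dispatch storage/backup generation")
    return actions or ["normal operation"]

print(respond(0.08))  # quiet grid: ['normal operation']
print(respond(0.40))  # a line outage spikes the price: both responses fire
```

The point of the design is that the same one-number signal drives both sides: loads back off and generation steps up, each at its own threshold.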

If a CO2 or other emission charge were created, it could quickly be added to the cost of individual generators, allowing the rest of the system to re-balance around it automatically.

Once the price is published, people will begin tracking their homes and heavy-load appliances to calculate their exact electricity bill. When they learn they can adapt their usage profiles and save money, they will create systems to do so automatically. This will lead to intelligent, power-saving appliances, a new generation of smart thermostats, short-cycling algorithms in HVAC and even more home automation. The net result of these operations is to balance demand to supply.

When this process begins, the financial incentive becomes real for the customer, prompting them to request live billing. This can happen as granularly as one customer at a time, for anyone with a smart meter installed. Both customer and utility benefit from the switchover.

A truly intelligent system like this eliminates the need for the full grid replacement that some people are proposing. Instead, it focuses on making the existing grid more stable. Incrementally, and in proportion to adoption, grid stability and redundancy will naturally increase without further cost. Appliance manufacturers already have many load-predictive products waiting for the market to call for them, so the cost of advancing this whole system overlaps entirely with the cost of the meter replacements that are already happening or planned. We need only ensure that the new meters have live-rate capability.

This is the single biggest solution to our energy crisis. It will standardize grid interconnection, which will entice distributed generation (DG). As it stands now, most utilities view DG in a negative light with regard to grid stability, often citing issues such as voltage, frequency and phase regulation. In reality, however, current inverter standards ensure that output is appropriately synchronized. The same applies to power-factor issues. But while reducing power sent via the grid directly reduces the load, that’s only half of the picture.

DG with storage and vehicle-to-grid hybrids both give the customer an opportunity to save up their excess and sell it to the grid when it earns the most. By giving them the live prices, they will also be encouraged to grow their market. It is an obvious outgrowth for them to buy and store power from the grid in the middle of the night and sell it back for a profit during afternoon peaks. In fact this is already happening in some markets.
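The night-to-peak arbitrage can be sized with a few lines of arithmetic. The prices and round-trip efficiency below are illustrative assumptions, not market data:

```python
# Back-of-envelope sizing for night-charge / peak-discharge arbitrage.
# Prices and round-trip efficiency are illustrative assumptions.

def arbitrage_profit(kwh, buy_price, sell_price, efficiency=0.85):
    """Net profit from buying kwh off-peak and selling the recoverable
    fraction back at the peak price."""
    return kwh * efficiency * sell_price - kwh * buy_price

# 10 kWh bought at $0.05/kWh overnight, sold at $0.20/kWh in the afternoon:
print(f"${arbitrage_profit(10, 0.05, 0.20):.2f}")  # prints "$1.20"
```

The round-trip efficiency matters: at these example prices, the trade only pays if the battery returns more than a quarter of what went in.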

Demand reduction (DR), or load shedding, acts the same as onsite generation in that it reduces the power sent via the grid. It also acts similar to storage in that it can time shift loads to cheaper rate periods. To best take advantage of this, people will utilize increasingly better algorithms for price prediction. The net effect is thousands of individuals competing on prediction techniques to flatten out the peaks into the valleys of the grid’s daily profile. This competition will be in direct proportion to the local grid instability in a given area.
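As a toy example of such a prediction technique, a shiftable load might compare the current price to an exponential moving average of recent prices and run only when power is relatively cheap. The price history, the smoothing factor and the `run_now` helper are all invented for illustration:

```python
# Toy price-prediction sketch: a shiftable load runs only when the current
# price is below an exponential moving average of recent prices.

def ema(prices, alpha=0.3):
    """Exponentially weighted average; recent prices weighted heaviest."""
    estimate = prices[0]
    for price in prices[1:]:
        estimate = alpha * price + (1 - alpha) * estimate
    return estimate

def run_now(current_price, recent_prices):
    """Run the shiftable load if power is cheaper than it has been lately."""
    return current_price < ema(recent_prices)

history = [0.12, 0.15, 0.18, 0.14, 0.11]   # $/kWh over recent hours
print(run_now(0.09, history))  # a cheap hour: True
print(run_now(0.20, history))  # an expensive hour: False
```

Thousands of households each running some variant of this, with different histories and thresholds, is exactly the distributed peak-flattening competition described above.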

From a material-utilization perspective, significant hardware is manufactured and installed for this infrastructure, often to be used at less than 20-40% of its operational capacity for most of its lifetime. These inefficiencies lead engineers to require additional grid support and conventional generation capacity whenever renewable technologies (such as solar and wind) and electric vehicles are added to the utility demand/supply mix. Analysis using actual data from PJM [PJM 2009] shows that consumer load management, real-time price signals, sensors and intelligent demand/supply control offer a compelling path toward increasing the efficient utilization of the world’s grids and reducing their carbon footprint. Utilization figures from many distribution companies indicate that distribution feeders reach 70-80% of their peak capacity for only a few hours per year, and on average are loaded to less than 30-40% of their capability.

At this time, utilities are limiting adoption rates to a couple of percent. A well-known standard could replace that cap with a call for much more. Instead of discouraging participation, it would encourage innovation and enhance forecasting, and do so without giving away control over how we wish to use our power. Best of all, it is paid for by upgrades that are already being planned. How’s that for a low-cost SMART solution?


17 Responses to The Living Smart Grid

I don’t know much about this stuff so I can’t evaluate Todd’s proposal, but it seemed worth discussing so I invited him to post it here. I hope some experts start discussing this, so the rest of us can learn something!

You make it sound so easy, but there are many aspects in your proposal that need careful thinking.

Realtime prices based on realtime measurements… did you know you can’t always trust a direct measurement? It can take several months for all parties involved (generators, transmission operators, distributors) to agree on final measurements for settlement purposes. Sure, this can be improved, but it won’t be easy or cheap.

Different realtime individual prices based on where you live? Not a very fair system for charging customers. Those lucky enough to live near a well-interconnected distribution/transport network will have much lower prices than those who don’t. The solution is simple: build more lines. But who will pay for them? The unfortunate customers who happen to live there? Bringing individual costs down to individual users may seem like a good idea, but it can’t be done easily (or fairly) once you start to think about it.

Price spikes can happen from one hour to the next (or from one minute to the next, for that matter). No one will be watching their meters 24/7. In the end you will have a gadget at home with a price cap, and once that is reached your home will be disconnected (if you are willing to pay whatever the price, there is no point in all the proposed changes). So basically you are taking part in a load-shedding scheme in which you set a price limit for disconnection, instead of the transport/distribution operator using a load-control mechanism. I’m not convinced this is a much better solution.

I haven’t done any numbers, so your proposal could actually be better than the current smart-grid proposals; I was just concerned about some of the ideas you presented, and how easy you made it look.

It actually is just that easy, but it’s not an instant switchover process. Implementation in any given area would go through stages. The price needs to be set up, then published, then customers need to see the trends and install systems that can benefit them and then finally they need to acquire an appropriate smart meter and sign up. Each customer would make this decision on their own schedule, so adoption rates would likely follow the traditional bell curve which could take months to years to complete, depending on many factors.

The beauty of this system is coincident with the reason your suggested problems wouldn’t be quite as bad as you make them sound. By aggregating many variable loads, generators and storage systems together all synchronized to a single price, you dramatically increase the resolution of any ‘fixes’ needed for a given event. Instead of price swings of 20 cents (or dollars) by the minute, it will most likely settle down to swings of tenths/hundredths of a cent to cause minor load changes.

Only when major outages occur (that would have taken down the local grid today) will the price spike high enough to tell appliances to disconnect. It would be less than productive to have an entire house switched on or off based on this price so most people wouldn’t sign up until they had some large load or a generation system that could follow it.

This slow, but growing user base would allow the utility plenty of time to dial their measurement-to-price process in. In the same way that localized adoption slowly increases to large numbers of customers using the price, the utilities have similarly large room for errors in the beginning. As both sides grow, they will increase their accuracy at the same rate as their impact.

Regarding the fairness of prices in areas with different resource endowments, a system like this could actually make things more fair. If Bob’s area had no resources while Jim’s neighboring area was flush with excess, the connecting transmission lines would become the most economical path to get energy to Bob. This is where he gets his power now, except that he pays a transmission fee on top of a flat market price. In the proposed system, Jim’s local prices (for just the generation portion) would be very low, and the under-utilized transmission lines wouldn’t add much to them. This means Bob’s aggregated price wouldn’t be that much higher than Jim’s, but he can now benefit more from load shedding/shifting. This also provides a larger market for DG in Jim’s area, further increasing supply and reducing the ultimate price to Bob.

“One important question is whether consumers will act in response to market signals. In the UK, where consumers have had a choice of supply company from which to purchase electricity since 1998, almost half have stayed with their existing supplier, despite the fact that there are significant differences in the prices offered by a given electricity supplier. Where consumers switch an estimated 27-38% of consumers are worse off as a result.”

Please, I have enough things to worry about without starting to worry about the price of electricity every time I plug something in. I already hate plane tickets because their price isn’t constant, unlike train or bus fares.

Joan, you nailed the problem perfectly in your last line. People don’t want hassles, and they don’t want to switch to something that could just as easily be some marketing gimmick that ends up costing them more.

The whole idea behind this is to make it a one-time choice and offer less hassle and proven savings should they choose to sign up. As our little group figured out how it would work, we kept these things in mind. Initially, there would be two likely reasons for someone to do so.

The first is if they were about to make a large appliance purchase, say a refrigerator, clothes dryer or home HVAC equipment. Given the choice between more of the same and a highly efficient model, many people will choose the latter. If that model could also autonomously adapt its operating habits to price, even more savings would be on the table.

If a family were about to purchase a renewable energy system and were presented with the same options, again there would be extra savings to be gained.

The idea is that with nothing more than a WiFi chip ($1 in bulk now) and the address of the local online price feed, an appliance, a distributed generation system, or both could self-optimize based on user habits and price trends.

An example of this would be an air conditioner with three settings: economy, mix and performance. In mix mode, it would short-cycle or simply not run much during the daily peak price periods. This means it would have to make up for it afterwards, and it would probably even anticipate the peak by pre-cooling. In economy mode, it would be even more stingy. In performance mode, it would operate more on demand, with less importance given to price.

As you can see, the hard work is all done by the algorithm programmed into the appliances. From there, numerous home automation capabilities become nothing more than a little more code.
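A minimal sketch of that three-mode logic, assuming invented price thresholds and a comfort limit; a real appliance would tune these from user habits:

```python
# Sketch of the three-mode air conditioner described above. The price
# threshold per mode and the comfort limit are made-up illustrations.

MODE_THRESHOLDS = {"economy": 0.10, "mix": 0.15, "performance": 0.30}

def should_cool(mode, price, indoor_temp, setpoint=72, max_drift=4):
    """Run the compressor if power is cheap enough for this mode, or if
    the temperature has drifted too far regardless of price."""
    if indoor_temp >= setpoint + max_drift:
        return True  # comfort floor: always recover
    return price <= MODE_THRESHOLDS[mode] and indoor_temp > setpoint

print(should_cool("economy", 0.20, 74))      # pricey hour, coast: False
print(should_cool("performance", 0.20, 74))  # comfort mode, run: True
print(should_cool("economy", 0.20, 77))      # drifted too far, run: True
```

Note the comfort floor: even the stingiest mode never lets price override basic livability, which is the convenience argument made throughout the post.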

At the heart of this is a systems control problem. As you’ve mentioned in the last line, Todd, it’s easily possible to install an automatic algorithm on a device these days. What I don’t know is whether the state of the art in control algorithms can robustly keep things stable, particularly if there are other, unknown-to-it algorithms out there.

An over-simplified example would be an oversubscribed coal-fired power station: it starts to make power expensive, so huge numbers of clients turn themselves off. But there’s a start-up/shutdown time to a coal-fired power station, so it’s going to take, say, 30 minutes to shut itself off. During the shutdown phase it makes the cost attractively low to get people to use the generated power, and suddenly lots of agents think it’s time to start using power again. In an ideal scenario new users “sense” when to stop, just as the station reaches standard operating capacity; it’s easy to imagine enough of a lag, or a misunderstood price signal, for it to become oversubscribed again, and the cycle begins anew.

This is oversimplified, but my vague sense of the discipline is that stability is still tricky in an “open world” where there are competing algorithms.

Thanks, Dave. As before, your last line identifies the solution to the problem you mention. In a world of open algorithms, the response time and feedback quantity will vary tremendously across customers. This provides the needed damping effect to smooth things out in both directions. Add to this the potential for higher-resolution feedback, so predictions can be better tuned.

In my former automation life, I used to tie feedbacks to very slow processes like a water treatment tank filling up and the chemical makeup reaching some limit. This is done with a PID loop using Proportional, Integral and Derivative feedbacks which all happen incredibly fast. The normal result was huge overshoots as you describe, but after tweaking those individual factors, I found that you can tune just about any loop for both speed and magnitude. I have to assume that since this would maximize customer profit (when also considering startup/shutdown costs), this is where the algorithms will migrate toward. After all, what manufacturer would make their appliance wear out prematurely due to rapid cycling if it’s under warranty?
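For readers unfamiliar with PID control, here is a bare-bones loop of the kind described. The gains are illustrative, and tuning them is exactly the speed-versus-overshoot trade-off mentioned above:

```python
# A bare-bones PID loop: output combines Proportional, Integral and
# Derivative terms on the error. Gains below are illustrative only.

class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement, dt=1.0):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a simple process (controller output feeds straight back into the
# level) toward a setpoint of 50; with these gains it converges to the
# setpoint after a modest overshoot.
pid = PID(kp=0.5, ki=0.05, kd=0.1, setpoint=50.0)
level = 0.0
for _ in range(100):
    level += pid.update(level)
print(round(level, 1))
```

Cranking `kp` up buys speed at the cost of overshoot; the integral term removes the steady-state offset; the derivative term brakes the approach. That tuning trade-off is what the appliance algorithms would converge toward, since overshoot costs the customer money.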

There are two control loops at work here. The first is the control of the grid’s balance (supply = demand). That one is controlled by a single factor: price. The current method has no control over demand and forces supply to always match a wildly variable demand. Any level of feedback can only improve that scenario, not worsen it, and a wide range of such feedbacks will certainly stabilize it tremendously. The only factor not accounted for is the speed and resolution of this loop. Given that the two are interdependent, with faster being better, only the response speed of the customers’ appliances hinders this process.

The second form of control is that appliance (or DG/storage system) responding to the published price. At this level, better response results in more savings to the customer, but also better prediction algorithms do too.

It’s this prediction capacity which could cause overshoot or over-dampening. In magnitude, however, this will be very small compared to the radical swings of today. Even so, with this also reflected in price, there’s yet another factor for the appliances to evaluate in their analysis. The competition model suggests that if there was money to be gained from incorporating this small extra code, it would become ubiquitous.

About 5 years ago, I built a spreadsheet that simulates all players in this game. It had blocks for loads, generators and storage as well as transmission and large scale generation (no local distribution costs). The hardest part was setting up randomized patterns of user load profiles. What I found was that as the number of customers increased, the effect of any major change became increasingly insignificant and the price and grid load stayed quite smooth.
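That smoothing effect is easy to reproduce: aggregate many independent random loads and the relative hourly swing shrinks roughly as 1/√N. This toy simulation uses uniform random loads as an assumption, not the original spreadsheet’s profiles:

```python
# Toy recreation of the aggregation experiment: many independent random
# customer loads, sampled over 24 hours. The load model (uniform 0-2 kW)
# is an invented stand-in for the original spreadsheet's profiles.

import random
import statistics

def aggregate_swing(num_customers, hours=24, seed=42):
    """Relative hour-to-hour variation of the total load."""
    random.seed(seed)
    totals = [
        sum(random.uniform(0, 2) for _ in range(num_customers))
        for _ in range(hours)
    ]
    return statistics.pstdev(totals) / statistics.mean(totals)

print(f"10 customers:     {aggregate_swing(10):.3f}")
print(f"10,000 customers: {aggregate_swing(10_000):.3f}")
```

With 10 customers the total swings by roughly 20% hour to hour; with 10,000 it is well under 1%, which matches the observation that individual changes become increasingly insignificant as the customer base grows.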

“ETSO underlines that TSOs are responsible to define and procure the adequate level of reserves able to face given technical criteria (such as maximum risk accepted to face sudden loss of infeed), agreed by the stakeholders and governments. On the contrary, there exists a risk associated to insufficient position hedging by BRPs, which can for example result from serious under-evaluation of demand to serve within their portfolio, or from strategic behaviours consisting in taking positions on some high price markets while being short on others. This kind of behaviour can request very important and unpredictable reserve levels, which TSOs can not permanently procure. ”

and about some approaches to imbalance settlements:

“ETSO believes that categorical use of this approach could encourage gaming and provide disincentives to minimize imbalances, and thus, requests more analysis to assess its properties.”

I don’t know any of the rules applicable to Europe, but I can say one thing. Their concern is whether enough spare capacity remains available after accounting for the financial gaming that results from hedging activities.

Related to that, this system would be resilient to such problems for two reasons. First, there are no financial layers riding on top of the price to be gamed. The only way to manipulate the price is to actually buy or sell power, or to change your demand, and those activities are no different from real usage, so you would always pay (or gain) exactly what you’re using.

The other reason is that this layout acts to smooth or completely flatten the hourly swing in grid load. How much this happens depends on the aggregate capabilities of everyone involved but as that user base increases, the daily profile will settle to a flat line. Even major changes, such as a large generator or transmission line failure, would have less and less impact.

Given these two factors, planning for both emergencies and future expansion becomes easier to predict. Gone would be the days when a spare 20-40% capacity just sits there waiting for a once-in-a-decade event.

Thanks for a well-written post that takes a skeptical look at the hyped smart grid area.

I’ve written my thesis about smart grid implementation in Sweden. The grid here, for example, is more stable and has more capacity than in the USA, which creates fewer incentives for a smart grid. But we’re caught up in the SG hype in some sense, since, for example, we’re in the process of implementing hour-based prices for small customers around 2014-15, with help from smart metering. An issue I’ve come across is that heavily automating, say, heating in order to shift it to low-price hours creates a problem with “returning load.” For example, all the radiators run during the night, so only a small price difference remains between night and day; and when all the automated load turns on and off at the same time, even the Swedish grid will run into capacity problems at those moments.

Todd’s article is excellent, and I strongly recommend it. But its first paragraph has a tone that’s quite at odds with the meat and potatoes that follow:

This is touted as providing better control, prediction and utilization of our nation’s electrical grid system. However, it provides few benefits to anyone except the utilities. It is expected to cost much more and to actually take away some of the convenience of having all the power you want, when you want it.

Cost more to whom? It’s certainly possible that utilities could use their political clout to game the system (though I think here in California they’ll get no more than minor advantages). But in the mid and long term, building less and encouraging distributed production — which is already beginning in California — will produce substantial savings and avoided costs.

The biggest benefit as I see it, though, is as part of a process of making everything about power more transparent, with fewer ‘externalized’ costs — real costs which those who use and those who profit avoid far, far too much in our current economy. Want convenience — a 70 degree house on a 110 degree day? Pay for it. Want to dump carbon, or particulates or a hundred other pollutants into the air we all breathe? Have it publicly accounted, and pay for it.

To clarify, Bill, I’ll offer the two cases for comparison. My apologies for not making it clearer.

If we do nothing and allow the currently planned system to move forward, this is what is expected to result.

The utility companies will increase their rates to cover massive transmission line upgrades. They will sell regulators on the need for their ‘smart grid’ meters and raise rates further to cover that cost. The current spare-capacity requirements will remain in place and be applied to much higher grid usage, resulting in even more wasted spare capacity. To combat this while still not being required to change it, they will argue for more control via their smart meters. This has already begun in the form of agreements that give the utility shut-down control of targeted appliances during high-demand periods.

The sum of benefits for the customer: increased rates, decreased convenience and a false promise of easier/cheaper integration of renewables.

If, instead, we move toward the proposed plan, the following benefits could easily result.

Immediate usage knowledge will be available to the people, creating a new awareness of the situation. Rules allowing fair participation via load shedding, onsite generation and storage buffering will need to be addressed as a result. This is a discussion that’s been needed for more than a decade. On that step’s completion, the market will offer tools ranging from benign to powerful as demand requests them. Those tools, from spreadsheets, power displays and controller boxes to, eventually, integrated features on new appliances, create the opportunity to switch to the new billing system and save the customer money.

As they migrate over, the grid’s hourly swing smooths out, lowering the necessary spare-capacity requirement of the related transmission lines. As this becomes officially recognized, the mandated requirements can be proportionally relaxed. If this happens before the current requirements mandate new lines, those lines can be avoided. The utility company will have less capital to manage and charge overhead on. In addition to fewer transmission lines, they will need less generation spare capacity (such as ‘spinning reserve’: pre-warmed generators just waiting to be connected). This will save them money and burn less fuel, but not help their bottom line. By crying wolf on this last issue, they have massaged the system so that even the cheap power gets paid on margins, so it’s in their interest to keep this a problem. The last thing they want is direct competition from home generation at their scale. This is why we will never hear them admit that renewables have any serious capacity.
