Grid Stability and Renewable Power

Thanks. The above document seems to suggest that it's only retiring one standard (BAL-004-0) in favour of newer and improved ones (the BAL-003-x family). Without reading the standards, I can't tell whether TEC (time error correction) is still being performed under a new and improved TEC procedure or standard. I said "seems to suggest" because I'm having difficulty decoding the legalese.

Quoting: NERC explains that since Reliability Standard BAL-004-0 became effective, improvements have been made to mandatory Reliability Standards (such as the development of Reliability Standards BAL-003-1.1 and BAL-001-2 and the Interconnection Reliability Operations and Coordination (IRO) Standards) that help ensure continued adherence to frequency approximating 60 Hertz over long-term averages and make Reliability Standard BAL-004-0 redundant.

Note that "continued adherence to frequency..." indicates that something is being done to maintain a "long-term average" 60Hz. If synchronous clocks were not an issue, there would be no need for this "long-term average". Also, just because it is no longer a regulatory requirement to maintain clock sync doesn't prevent the utilities from doing it anyway possibly because no utility wants to be first to be identified as causing a problem with some forgotten device due to an oversight. Also note that compliance to this reliability standard is voluntary, not mandatory:

formatting link
"Time Monitor is a voluntary service and, therefore, should not be penalized for non compliance." Note that this was done in 2008.

Reply to
Jeff Liebermann

snipped-for-privacy@gmail.com wrote: ==================

** That is not what really happens.

The supply frequency is in constant, slow oscillation around the nominal 50 or 60Hz. Anyone with a period counter or a scope in X-Y mode can verify this.

Excursions are limited to about +/- 0.1 Hz and take several minutes per cycle. There are web pages that show this in real time too.

In any case, the millions of mostly electronic clocks that rely on this frequency cannot be trusted due to the high probability of local outages and tripping circuit breakers.

...... Phil

Reply to
Phil Allison

So, mismatches in generation and load do not result in a change in frequency? That's an interesting idea. So where does rotational inertia come in?

Reply to
Ricky

But they are critical to understanding one of the key modes of failure that took down so much of the UK grid. Roughly 2% of roofs have solar panels, each producing 4 kW in good sunlight, at a time when average household load is about 200 W, so per thousand homes you have:

200 kW of load and 80 kW of local solar PV. The system tried to stabilise itself by shedding 1 MW of load, but at the same time it lost 400 kW of local generation as well, and so had to keep on dropping chunks off the supply. It was behind the curve at every step of the way. The algorithm expected to overshoot and then be able to reconnect. It didn't happen.
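To make the arithmetic behind that explicit, here is a rough back-of-the-envelope sketch in Python. It uses the figures from the post (200 W per home, 4 kW per install, 2% penetration); the 1 MW step and the helper names are purely illustrative.

# Back-of-the-envelope sketch of the effect described above: shedding a block
# of "load" also disconnects the rooftop solar embedded in that block, so the
# net relief is smaller than the gross figure the algorithm expects.
# Figures (200 W/home, 4 kW/install, 2% penetration) are from the post; the
# step size is illustrative only.

LOAD_PER_HOME_W = 200
PV_PER_INSTALL_W = 4000
PV_PENETRATION = 0.02

def net_relief_w(gross_shed_w: float) -> float:
    """Net reduction in demand after losing the embedded PV in the shed block."""
    homes = gross_shed_w / LOAD_PER_HOME_W
    lost_pv = homes * PV_PENETRATION * PV_PER_INSTALL_W
    return gross_shed_w - lost_pv

if __name__ == "__main__":
    step = 1_000_000.0            # shed 1 MW (gross) per step, as in the post
    print(f"net relief per 1 MW shed: {net_relief_w(step)/1e3:.0f} kW")

    # To actually remove 1 MW of net demand you must shed more gross load:
    target = 1_000_000.0
    gross_needed = target / (1 - PV_PENETRATION * PV_PER_INSTALL_W / LOAD_PER_HOME_W)
    print(f"gross shed needed for 1 MW net: {gross_needed/1e3:.0f} kW")

With those figures, each 1 MW gross shed only relieves about 600 kW of net demand, which is why the algorithm stayed behind the curve.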

As the number of homes with solar PV increases it becomes harder and harder to ignore this effect: at >5% penetration they become net exporters, at least when the sun is shining.

There is some rotating inertia in the spinning wind turbine blades, nothing like as much as there is in a big mechanical steam turbine, but enough to keep going provided that you allow the frequency to drift.

One way to build some resilience is to have local battery storage that is immediately available to boost output when there is a sudden change in load. The US and Australia each have fair-sized ones to control peak loading.

formatting link
The UK has one but it is a complete toy and wasn't in the right place to do any good last time. The UK has a structural problem in that most power is generated in the north and shipped down to the south to be used. The upshot of this is that if they lose either of the big N-S EHT supergrid lines, the south is very short of electricity and something has to give.

Pumped storage reservoirs are our most effective load-balancing tool for immediate generation of more power. Routine balancing is done by adjusting the power delivered to the ultimate sink loads (on very favourable interruptible tariffs). Unfortunately, if you have already asked them to power down you don't have that option (as has occurred some winters).

You can still simulate inertia by allowing the inverter to drift further off frequency than the standard rules would normally allow. Something like this tweak has been done to avoid quite so much chaos next time.
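For illustration only, here is a minimal sketch of what relaxed frequency ride-through trip logic might look like on an inverter. The thresholds, delay and class name below are made-up values, not actual grid-code settings.

# Minimal sketch of inverter frequency ride-through trip logic. The idea in
# the post is that widening the trip band (and adding a time delay) stops a
# frequency dip from instantly disconnecting large amounts of distributed PV.
# The thresholds below are illustrative only, not actual grid-code settings.

from dataclasses import dataclass

@dataclass
class RideThrough:
    trip_low: float = 47.5        # Hz, illustrative lower trip limit
    trip_high: float = 52.0       # Hz, illustrative upper trip limit
    delay_s: float = 0.5          # must stay outside band this long to trip
    _outside_for: float = 0.0

    def step(self, f_measured: float, dt: float) -> bool:
        """Return True if the inverter should disconnect."""
        if self.trip_low <= f_measured <= self.trip_high:
            self._outside_for = 0.0
            return False
        self._outside_for += dt
        return self._outside_for >= self.delay_s

rt = RideThrough()
for f in (49.9, 49.2, 47.3, 47.3, 47.3, 47.3, 47.3, 47.3):
    print(f, rt.step(f, dt=0.1))

A tighter band or zero delay would have every inverter dropping out at the first dip, which is exactly the mass-disconnection behaviour being avoided.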

I presume that they have fixed the assumptions that caused the load-shedding algorithms to misjudge how much *absolute* load they would have to drop to obtain a net saving of 1 MW in future. It was a pretty catastrophic cascade failure mode for what should have been a routine lightning strike with local cutout protection and recovery.

I don't think it would be such a problem in the USA, since peak solar PV output and peak domestic aircon demand more or less balance out. In the UK there is hardly any domestic aircon, so in sunny weather most of what is generated by domestic PV is exported to the grid (especially in the late afternoon).

A stupid feature of the UK's "green" feed-in tariff makes it cost-effective to take solar PV power and turn it into domestic hot water! Aftermarket gizmos abound to do this automatically. You are deemed to export half of what you generate irrespective of whether you use it or not.

Reply to
Martin Brown

In general it tends to run consistently slow when loads are at their peak (like evening meal time in the UK, or peak afternoon aircon load in the USA) and consistently fast in the middle of the night.

The latter used to be a nuisance for some old-school telescope drives that were based on mains synchronous motors. When you are tracking to arc-second precision the mains just isn't accurate enough. They moved to quartz crystal references or servos with autoguiders pretty much as soon as the technology became available. Much less work for the observer.

Quite a lot of traffic lights still rely on it too.

The advent of cheap VLF time modules has made it much less of a problem. Mains-powered synchronous-motor kit has all but died out now. But there are plenty of legacy traffic lights that need resetting after a long power cut (since they tend to resume from whatever time they were at when the power went down). A few minutes is fairly harmless, but a few hours and the rush-hour traffic flows end up in total chaos.

That isn't what he said.

They have always allowed the mains frequency to drift slightly with time to accommodate minor imbalances in the load at peak times. Heavy load means lower frequency, and lighter loads allow it to run a bit fast. It hunts slowly around the nominal frequency, since if they predict that load will increase they will bring more generation onstream.

To keep dead-reckoning mains-powered clocks based on synchronous motors accurate, they increase the frequency slightly when the loads are lowest in the middle of the night. The average mains frequency over 24 hours is held to very high precision, linked back to atomic time standards. (From what others have said, it seems the USA has relaxed this rule.)
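As a rough sketch of that time-error bookkeeping: a synchronous clock's error is the integral of the fractional frequency offset, so running slightly fast overnight can cancel the seconds lost during the evening peak. The offsets and durations below are illustrative, not any actual scheduling rule.

# Sketch of the time-error-correction idea described above: a synchronous
# clock counts cycles, so its error is the integral of the frequency offset.
# Running slightly fast overnight cancels the seconds lost under heavy load.
# Offsets and durations are illustrative, not any actual scheduling rule.

F_NOM = 50.0  # Hz

def time_error_s(freq_offset_hz: float, duration_s: float) -> float:
    """Seconds gained (+) or lost (-) by a synchronous clock."""
    return (freq_offset_hz / F_NOM) * duration_s

# Example: grid runs 0.02 Hz slow for the 4-hour evening peak...
lost = time_error_s(-0.02, 4 * 3600)
# ...then how long must it run 0.01 Hz fast overnight to cancel that?
catch_up_s = -lost / (0.01 / F_NOM)
print(f"clock error after peak: {lost:+.2f} s")
print(f"needs {catch_up_s/3600:.1f} h at +0.01 Hz to correct it")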

Network phase in the UK is relatively well defined, since 800 km << 6000 km (one wavelength at 50 Hz).

But in the USA, where network distances are much greater, the network phase must be locally determined. E.g. SF to NY is ~4000 km, which is a very non-negligible fraction of a 60 Hz wavelength of 5000 km.
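A quick sanity check of that fraction-of-a-cycle argument, using the post's free-space wavelength figures (real propagation on transmission lines is slower than c, so the actual electrical phase shift would be somewhat larger):

# Quick check of the fraction-of-a-cycle argument above, using the post's
# free-space wavelength figures. (Real propagation on transmission lines is
# slower than c, so the actual electrical phase shift is somewhat larger.)

C = 3.0e8  # m/s

def phase_deg(distance_km: float, f_hz: float) -> float:
    wavelength_km = C / f_hz / 1000.0
    return 360.0 * distance_km / wavelength_km

print(f"UK, 800 km at 50 Hz:  {phase_deg(800, 50):.0f} degrees")
print(f"US, 4000 km at 60 Hz: {phase_deg(4000, 60):.0f} degrees")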

Reply to
Martin Brown

In the early-mid '70s, I wanted a clock that wouldn't have to deal with the silly time changes, so I designed one around a 10 MHz TCXO that we happened to use in one of our products. I was chagrined to discover it was off by several seconds each month -- until I looked at the tolerance on the oscillator (a few PPM).

As I couldn't step *up* to an OCXO (too much power required to keep it operating through power outages), I looked to the mains as an alternate time source ("how can *regular* clocks keep such good time?"). Eventually, replacing all of the discrete logic with a processor I could "watch" the mains frequency vary (against the stable TCXO) over the course of a day. I.e., the mains-derived time had short-term stability problems but long-term accuracy.

So, I fell upon the idea of using the mains frequency to tweak the TCXO's notion of time. Then I realized the TCXO was essentially unnecessary; any time source of sufficient short-term stability would suffice -- if I could *measure* that frequency against the mains over VERY long intervals.

Subsequent clocks have been built around cheap watch crystals/RTCs and keep remarkably good time (much better than an undisciplined PC). I now discipline my NTP server with such a source (I don't care if my PCs are "off" by N seconds -- as long as they are ALWAYS off by N seconds -- as this lets me operate without a GPS signal *or* routing them!)
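As a minimal sketch of the disciplining idea being described (count mains cycles against local-oscillator ticks over a long window and use the ratio to correct the local clock's rate), assuming a 32.768 kHz watch crystal and made-up counts:

# Minimal sketch of the disciplining idea described above: count mains cycles
# (good long-term accuracy) against local-oscillator ticks (good short-term
# stability) over a long window, then use the ratio to correct the local
# clock's notion of its own frequency. Names and numbers here are
# illustrative, not the poster's actual implementation.

NOMINAL_MAINS_HZ = 60.0
NOMINAL_XTAL_HZ = 32768.0         # cheap watch crystal

def corrected_xtal_hz(mains_cycles: int, xtal_ticks: int) -> float:
    """Estimate the crystal's true frequency from a long joint count.

    Over a long enough window the mains cycle count is an accurate measure
    of elapsed time, so ticks / elapsed_time gives the crystal's real rate.
    """
    elapsed_s = mains_cycles / NOMINAL_MAINS_HZ
    return xtal_ticks / elapsed_s

# Example: over ~24 h we counted 5_184_000 mains cycles and 2_831_193_000
# crystal ticks (numbers made up for illustration).
f_est = corrected_xtal_hz(5_184_000, 2_831_193_000)
print(f"estimated crystal frequency: {f_est:.3f} Hz "
      f"({(f_est/NOMINAL_XTAL_HZ - 1)*1e6:+.1f} ppm)")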

I'd been told that was *supposed* to happen. But, haven't seen any practical consequences on any of my mains-disciplined clocks.

There is a group that (informally?) monitors this nationwide, here (US):

formatting link
I question how reliable their phase measurements are, given that they are looking at "consumer" distributions and not the "backbone" of the power network. (but, *frequency* observations should be dead to nuts)

Reply to
Don Y

Maybe; the 'drift slightly with time' sounds vague.

Not so. Rotating machinery generates that frequency, and heavy load on the generators does slow their rotation; regardless of control settings, the feedback gain cannot be infinite (that causes more problems than it solves).

Reply to
whit3rd

On the European mains network, the proportionality constant is about 20 GW/Hz. All sufficiently large generating facilities are supposed to adjust their power output to drive the long-term average frequency to its nominal value. A Swiss source provides the reference frequency.
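A one-liner showing what that ~20 GW/Hz figure implies, assuming the proportionality holds and nothing else reacts; the 1.5 GW trip is just an illustrative event:

# Quick illustration of the ~20 GW/Hz figure quoted above: the frequency
# deviation you get for a given sudden loss of generation, assuming the
# proportionality holds and nothing else reacts. Numbers are illustrative.

K_GW_PER_HZ = 20.0

def freq_dip_hz(lost_generation_gw: float) -> float:
    return lost_generation_gw / K_GW_PER_HZ

# e.g. tripping a 1.5 GW unit:
print(f"1.5 GW loss -> about {freq_dip_hz(1.5)*1000:.0f} mHz dip")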

Jeroen Belleman

Reply to
Jeroen Belleman

I don't care about the issues in the UK. You and others have stated many times that the UK grid is bollixed up beyond all redemption. So not much to learn from it for a real grid.

Yes, yes, yes, but not what we are discussing.

Yes, you keep telling us how bad the UK grid is. I agree.

Reply to
Ricky

No, that's not quite right. It is the balance between load and supply that causes the frequency drift. Too much energy feeding into the generators and the frequency increases as the excess is absorbed by the rotational inertia. Too little energy feeding into the generators and the frequency decreases as the inertia is drawn down to supply the excess load. The mismatch between source and load determines the derivative of the frequency.
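That relationship is essentially an aggregate swing equation: the power imbalance sets df/dt through the stored rotational energy. A toy sketch, with an illustrative inertia constant and system rating:

# Sketch of the relationship described above: the generation/load imbalance
# sets the *rate of change* of frequency via the stored rotational energy
# (a one-bus, aggregate form of the swing equation). H and the numbers
# below are illustrative values, not data for any real grid.

F0 = 50.0      # nominal frequency, Hz
H = 4.0        # aggregate inertia constant, s (illustrative)
S_BASE = 50e9  # total rated apparent power, VA (illustrative)

def df_dt(p_gen_w: float, p_load_w: float, f_hz: float) -> float:
    """df/dt = f * (P_gen - P_load) / (2 * H * S_base)."""
    return f_hz * (p_gen_w - p_load_w) / (2 * H * S_BASE)

# Lose 1 GW of generation against a flat 40 GW load and integrate for 5 s:
f, dt = F0, 0.1
for _ in range(50):
    f += df_dt(39e9, 40e9, f) * dt
print(f"frequency after 5 s with no governor response: {f:.3f} Hz")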

Reply to
Ricky

Right. In single-generator operation the governor will be set to constant-frequency mode and will hold a constant frequency at all loads. But in order to parallel generators and share the load in a stable manner, the governors must be placed in droop mode, where a percentage of the power level is subtracted from the governor's set point. When paralleled with a much larger bus, the governor cannot regulate generator speed (that is locked to the grid); it regulates real power output instead. Connect initially with no load, at matched phase and slightly higher speed, and the generator picks up a minute load; then, if droop is set at 5% for instance, cranking up the governor speed knob by 5% will give you full load.

So all rotating generators on the grid provide more power as frequency drops and less as it rises; more power delivered tends to raise frequency, less reduces it. Power delivery is set by the central grid operator for all large generating stations in order to match the load and thus keep frequency ~constant.
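A small sketch of that 5% droop arithmetic, with the grid holding the frequency so the speed setpoint maps directly onto real power output; the rating and numbers are illustrative:

# Sketch of the droop arithmetic described above. With the generator locked
# to grid frequency, the governor can't change speed, so the droop law turns
# the speed setpoint into a real-power command instead. The 5% droop and the
# rating below are illustrative.

F_NOM = 60.0
DROOP = 0.05           # 5% droop
P_RATED_MW = 500.0

def power_output_mw(f_setpoint_hz: float, f_grid_hz: float) -> float:
    """Droop law: P = P_rated * (f_setpoint - f_grid) / (droop * f_nom)."""
    p = P_RATED_MW * (f_setpoint_hz - f_grid_hz) / (DROOP * F_NOM)
    return min(max(p, 0.0), P_RATED_MW)

# Synchronize at no load with the setpoint at grid frequency, then raise the
# setpoint 5% (3 Hz) to reach full load, exactly as described in the post:
print(power_output_mw(60.0, 60.0))   # ~0 MW just after paralleling
print(power_output_mw(61.5, 60.0))   # half load
print(power_output_mw(63.0, 60.0))   # full load
# And if grid frequency sags while the setpoint stays put, output rises:
print(power_output_mw(63.0, 59.9))   # above rated -> clipped to rating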

There is a similar situation with voltage regulation: in single-generator operation the field exciter regulates voltage. When paralleled with an "infinite bus" the exciter can no longer regulate voltage, so it automatically (via aux contacts on the paralleling breaker) goes into reactive-power-regulating mode, initially set at zero reactive power if voltages were matched when paralleling, and left there unless the utility pays for reactive power. Multiple generators at the same location are cross-compensated to share reactive power, and reactive power on the grid is balanced by the grid operator.

There are some interesting stability issues with reactive power, since it has twice the effect on rotating-generator voltage as real power, but this is well understood if not easy to remedy. As static synchronous generators grow as a percentage of total grid power and need to contribute reactive power, it will be necessary to turn some control of them over to the grid operator, as is already being done in some places, and as was done long ago to stop power sloshing between rotating-generator power plants on the same grid.

(Grid goes undervoltage, SSGs increase reactive power, grid voltage goes up, SSGs drop reactive power, grid goes undervoltage again, etc. A better control strategy will stop this.)
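A toy illustration of that hunting versus a proportional (droop-style) volt-var law; the voltage sensitivity and gains are made up purely to show the qualitative difference:

# Toy illustration of the oscillation described in the parenthetical above:
# if inverters (static synchronous generators) switch reactive power in full
# steps on a voltage threshold, the bus voltage can bounce back and forth,
# whereas a proportional volt-var droop settles. All gains and sensitivities
# are made up for illustration.

V_NOM = 1.00           # per unit
V_UNLOADED = 0.97      # bus voltage with no reactive support (pu)
DV_PER_Q = 0.05        # pu voltage rise per pu of injected reactive power

def bus_voltage(q_pu: float) -> float:
    return V_UNLOADED + DV_PER_Q * q_pu

def bang_bang(q_pu: float, v_pu: float) -> float:
    # all-or-nothing: full var injection when low, none when high
    return 1.0 if v_pu < V_NOM else 0.0

def droop(q_pu: float, v_pu: float, gain: float = 5.0) -> float:
    # proportional volt-var: adjust q in proportion to the voltage error
    return min(max(q_pu + gain * (V_NOM - v_pu), 0.0), 1.0)

for name, ctrl in (("bang-bang", bang_bang), ("droop", droop)):
    q = 0.0
    v = bus_voltage(q)
    trace = []
    for _ in range(8):
        q = ctrl(q, v)
        v = bus_voltage(q)
        trace.append(round(v, 3))
    print(name, trace)

The bang-bang trace bounces between over- and under-voltage every step, while the droop trace converges on nominal, which is the kind of coordination the grid operator imposes.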

Reply to
Glen Walpert

The part I'm not clear on is how this is combined with the financial side of things. There are various load accounts, and what you describe is generation essentially in bulk. No generator is outputting a specific amount as contracted for by customers. So how does the billing work? If users A, B and C have contracted with supplier K, but supplier K is having to output power according to central control, who is paying supplier K for all the MWh being pumped out if it doesn't match what its customers A, B and C are asking for?

Reply to
Ricky
