Is a Gray code counter more energy efficient?

In binary ripple counters, many bits can change on a clock tick (the lowest bit changes every time, bit #2 half the time, bit #3 every fourth clock...), so there's a log(N) scaling for an N-bit counter's average bit-change cost.

Capacitive energy loss thus favors a Gray code for minimal-energy counting, since each tick of the counter changes exactly one bit. There's some overhead, though, because determining the 'next' Gray code transition requires hidden internal logic behind the displayed bits.

What is the Gray counter scaling on transitions including the hidden logic as well as the output bits?

Reply to
whit3rd

The toggle rate ceiling in a binary counter is 2 per clock cycle. It's the series, 1 + 1/2 + 1/4 + ... asymptotically approaching 2.
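A quick way to check that series, assuming an N-bit wrapping binary counter (the function name is mine):

```python
# Average bits flipped per increment of an N-bit wrapping binary
# counter; the exact value is 2 - 2**(1 - nbits), approaching 2.
def avg_toggles(nbits):
    period = 1 << nbits
    flips = 0
    for i in range(period):
        # XOR of consecutive counts marks the bits that changed
        flips += bin(i ^ ((i + 1) % period)).count("1")
    return flips / period

print(avg_toggles(8))  # 1.9921875, i.e. 2 - 2**-7
```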

Not sure what you mean by the "hidden" logic. The logic for calculating each bit in a Gray counter is a bit more complex than a binary counter.

If you really mean a ripple counter, that has virtually no logic. The FFs in a ripple counter simply are clocked by the output of the next lower bit with the Q- output fed to the D input (essentially a toggle FF).

The power "cost" of the logic depends on the implementation. In a gate level design, the Gray counter will be significantly more expensive with the greater number of gates. Implemented in an FPGA, with pass transistors as gates, the power issue of the logic is not so large. They also have built in logic for the binary counter, which is rather low power.

Reply to
Ricky

I had to consult the web on Gray code to remember the sequence. The logic involved uses something similar to a carry chain, but running in the opposite direction. The lsb is simply the parity of the other bits, delayed by one clock cycle of course. Then each higher bit retains its value until the parity of all bits from the msb down to that position is even AND the bit immediately to its right is a 1, with 0s in all positions further to the right, if any. So...

0000 >>> 0001, 0001 >>> 0011, 0011 >>> 0010, 0010 >>> 0110, 0110 >>> 0111, ... 0100 >>> 1100, etc.

So you need a chain calculating parity from the left to the right, and a chain flagging, from the right, a run of zeros ended by a one (xxx1000, for example).
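For what it's worth, the sequence that chain produces matches the usual closed form gray(i) = i XOR (i >> 1); a quick Python sketch (names mine) confirming exactly one bit flips per tick, wraparound included:

```python
def gray(i):
    # Reflected binary Gray code: adjacent codes differ in one bit.
    return i ^ (i >> 1)

seq = [gray(i) for i in range(16)]
print([format(g, "04b") for g in seq[:6]])
# ['0000', '0001', '0011', '0010', '0110', '0111']

# every adjacent pair, including the wraparound, differs by one bit
assert all(bin(a ^ b).count("1") == 1
           for a, b in zip(seq, seq[1:] + seq[:1]))
```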

This does make the logic more complex for the Gray code and likely will use more power in most technologies. But, if the counter value is driven on I/O pins, the power of driving the output lines likely exceeds the power used internally to calculate the Gray code and the Gray code is likely more energy efficient than the binary code. I can't say much about a ripple counter since that is typically done in an SSI/MSI chip.

Reply to
Ricky

Yeah, I goofed; was thinking of a 1/N sequence instead of 1/(2^N). Internal decision-making for a Gray count seems complex enough to swamp any savings on the outputs, and pseudorandom sequences with shift registers can be expected to do better on silicon (because the internal logic is so simple), since CCD-like chains can implement the shift with near-zero energy input.

One would get clock division-by-2^M from ripple counters, or synchronous ones, but division-by-(2^M - 1) is the divisor for a maximal pseudorandom sequence. There'd be an M-wide gate to detect the count, but only that one gate is required. Clean output requires a clocked latch on that gate, lest false triggers happen.
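As a sketch of that division-by-(2^M - 1): a 15-bit maximal-length LFSR (taps 15 and 14, i.e. x^15 + x^14 + 1, a known maximal polynomial) cycles through all 2^15 - 1 nonzero states, so detecting any one fixed state with a single wide gate divides the clock by 32767. The function name is mine:

```python
# Fibonacci LFSR, 15 bits, shifting left; feedback is the XOR of
# taps 15 and 14 (polynomial x^15 + x^14 + 1, maximal length).
def lfsr15_period(seed=1):
    state = seed
    count = 0
    while True:
        bit = ((state >> 14) ^ (state >> 13)) & 1  # taps 15, 14
        state = ((state << 1) | bit) & 0x7FFF
        count += 1
        if state == seed:
            return count

print(lfsr15_period())  # 32767, i.e. 2**15 - 1
```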

Generating a one-Hz second-hand drive would work OK from a 32767 Hz quartz crystal with pseudorandom division, rather than the more common 32768 Hz value.

Reply to
whit3rd

I'm not sure what you want to do but the balanced Gray code might be more suited to your hardware approach for certain choices of length.

formatting link
They are widely used on precision absolute angular position sensors on telescopes.

Not sure why you want to do that. It would be ~30ppm off which is more than the typical tolerance of the crystal. You might be able to slug it with some extra capacitance load but 2^N divisor hardware is so cheap!

Things I have built with PICs using watch crystals last for a couple of years on a pair of AA cells - that is good enough for me.

Reply to
Martin Brown

This is why I mentioned the issue of driving outputs, compared to strictly internal logic. If you are trying to design your own clock drive, I don't think you will do better on power than using a standard CMOS clock chip. In any event, the focus should be on the oscillator, rather than the divider.

Perhaps we have reached the X/Y point. You asked about X, but what is Y? What are you really trying to do?

Reply to
Ricky

I was advised that one high-end watch uses an electrostatic motor to drive the (sweep?) second hand, but an electromagnetic (solenoid/ratchet?) drive for minutes. The claim was lower energy use, allowing energy-harvesting to suffice. I think it's this one:

formatting link

... and it got me thinking about the wasteful high-current-for-fast-slew clocking requirements of a flipflop.

Reply to
whit3rd

32768 Hz is not "fast" in any sense. The energy used in digital logic is due to charging and discharging the capacitance of the circuit elements.

P = 1/2 C F V^2, where P is power, C is capacitance, F is frequency and V is voltage.

This is why driving the I/O pins is very significant, the capacitance is very high compared to internal nodes. The voltage is significant, but can only be lowered so much. In FPGAs, this circuitry uses a tiny, tiny amount of power, until you get to an I/O pin.
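To put rough numbers on that (the capacitances are mine, purely illustrative): say a 5 pF internal node versus a 50 pF I/O load, both toggling at 32768 Hz from 3 V, using the formula above:

```python
def dynamic_power(c_farads, f_hz, v_volts):
    # P = 1/2 * C * F * V^2, per the formula in the post above
    return 0.5 * c_farads * f_hz * v_volts ** 2

print(dynamic_power(5e-12, 32768, 3.0))   # ~0.74 microwatts
print(dynamic_power(50e-12, 32768, 3.0))  # ~7.4 microwatts
```

Ten times the capacitance, ten times the power, with voltage entering as the square.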

Do you have any idea how much power is used in the digital circuitry compared to the actual clock drive or the oscillator?

The minute hand is driven by a 60:1 gear reduction, so uses 60 times less power than the second hand. Most digital clocks with analog readout run for a year or two on a single AA battery. What is the power requirement on the accutron-dna clock?

As an aside, I have one such analog readout clock. It would stop running when the battery could no longer provide enough force to turn the second hand past the 40 second point. I realized the second hand needed a counter weight and added one. It runs far longer on the single AA battery now. I usually use an old battery from a mouse. No point in using a fresh battery when there are lots of batteries with some 20% or 30% of their power remaining.

Reply to
Ricky

For that you don't need a synchronous counter, a ripple counter will do, where only the first flipflop is clocked at the full rate, and will use much less power than a 15-bit shift register where every stage is clocked at the input clock rate.

Reply to
Chris Jones

That power is useful, it changes the logic state. Part of the problem of a flip-flop is the necessary slew rate of the clock, and the I^2 * R heat in channel resistance is a waste, which lower current (and slower slew rates) would minimize. CCD style shift registers suffice for some kinds of counter, and work with sinewave drive devoid of those edges associated with high current.

Some drivers (like the ones that drive LCD displays) have less drive, which works well with lower gate capacitance. Electrostatic motors are probably a target that can benefit from such. For stingiest power use, too, 'clocked logic' was the rule, before ubiquitous CMOS. CCDs are a kind of clocked logic.

Not sure; the info I have on that Accutron DNA watch is rather light.

Reply to
whit3rd

The trick in building a precision timepiece from a low-speed clock is to be able to divide by 32768 +/- a few, so that the crystal doesn't have to be trimmed at all. You merely have the thing self-calibrate its divisor off an external precision 1 s reference pulse.
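A back-of-envelope sketch of why that calibration matters (the 12-count offset is an invented example, and the function name is mine):

```python
def seconds_error_per_day(true_hz, divisor):
    # residual timing error when dividing a true_hz crystal by 'divisor'
    return 86400 * (true_hz / divisor - 1)

# fixed divide-by-32768, crystal actually running 12 counts fast:
print(seconds_error_per_day(32780, 32768))  # ~31.6 s/day
# divisor self-calibrated to the measured count between reference pulses:
print(seconds_error_per_day(32780, 32780))  # 0.0
```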

That looks to me like 100% marketing BS. Anything that moves quickly will have air resistance and bearing friction to contend with.

Electrostatically powered devices are easy enough - I have a just-post-WWII Zamboni pile from a night-vision sight still going; it will move aluminium foil about until the foil fails by work hardening.

The world's longest-ringing electric bell, in Oxford, has been at it continuously since 1840 (also thought to be powered by a Zamboni pile).

formatting link

Reply to
Martin Brown

Not sure what you are talking about "useful" power. Of course it's useful. Logic that never changes state is called a rock.

The slew rate is not relevant to power dissipation. Check the formula. It says nothing about the channel resistance, which, with the capacitance, sets the slew rate. Unlike in a DC circuit, the power is not related to the resistance.

And yet, they still consume the same power. Look at the formula. That's why I presented it.

How would you use "CCD logic"??? Is this an imaginary design, a thought experiment?

So, you may be trying to optimize a part of the design that is already very low power compared to the rest of the design?

Reply to
Ricky

The "trick"? Adjusting the divisor is a common "trick", but with poor results. 1 part in 32,768 is about 30 ppm which gives an error of 2.6 secs per day and 1.3 minutes per month. Most consider this not acceptable. Digital trimming requires a different circuit than a simple divider, something that will use significantly more power, like an NCO.
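The arithmetic checks out; one count in 32768, assuming a 30-day month:

```python
ppm = 1e6 / 32768            # fractional error in parts per million
sec_per_day = 86400 / 32768  # seconds of drift per day
min_per_month = sec_per_day * 30 / 60

print(round(ppm, 1), round(sec_per_day, 2), round(min_per_month, 2))
# 30.5 2.64 1.32
```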

Air resistance??? You are making an assumption of relative power losses.

A bell that tolls for none.

Reply to
Ricky

C F V^2. Both edges burn power.

Reply to
John Larkin

A more pragmatic question might be: if you have a Boolean logic circuit with more outputs than inputs, but you're free to choose an arbitrary encoding for the inputs (in ROM, say), what encoding minimizes the amount of decode logic you need?

Like Boolean SAT, I expect it's an NP-complete problem; though for my own particular work with SPLCs, it seems Gray code tends to result in fewer decode gates.

Though I don't know if that's just particular to the problems I've picked, or if it reflects some property of Gray codes: whether the set of arbitrary outputs for which a Gray code yields a more efficient decode structure is somehow strictly larger than the set for which a binary encoding is optimal.

Reply to
bitrex

Two other fine points. C for a gate is proportional to its output current limit, so less current means you can design low-C transistors that burn less power. Clocking a flipflop (ripple or synchronous counter) has a strict dV/dt requirement (CD4013 max rise time 10 us at 5 V), so lower-current inputs don't suffice to drive it as a divider; watches use different CMOS. Also, this is about driving a capacitive load, i.e. an electrostatic motor, where there's torque because rotation raises the capacitance of the most-charged stator elements. That 'C' in the formula isn't constant in this case.

Reply to
whit3rd

C may not be literal capacitance, but shoot-through current equivalent.

Reply to
John Larkin

Don't confuse the details of using chips, vs. designing chips.

You still haven't told us what you are trying to do. Or are you trying to do anything, rather just thinking about the matter?

It's just hard to give this any real thought if you don't give us some details.

Reply to
Ricky

You are confused. 1/2 C V^2 is the energy to charge a capacitor from a power source. Discharging a capacitor draws no power from the source. So only the charging at a rate of F is used in the equation of power. 1/2 C F V^2

Reply to
Ricky

Not all logic has "shoot through" current. The LUTs in an FPGA are transmission gates. They do still have capacitance which must be charged and discharged, but most important is the load capacitance. Even inside a chip, the trace to the next "gate" and the input capacitance is the bulk of the capacitance.

Reply to
Ricky
