Uses of Gray code in digital design

Hello,

Most books on digital design discuss Gray codes. However, most of the focus is on generating these codes, rather than detailing their uses.

I read the Wikipedia article:

formatting link
but it doesn't provide enough in-depth information about the uses of Gray code in hardware.

I know Gray codes are used for:

1) Encoding state machine states. Why is it an advantage to use Gray codes here?
2) Async FIFO addressing between clock domains. Could anyone elaborate on this?
3) Error correction in digital communications. Again, I'd love to hear some more details about this.

In general, what are the other uses of these codes? When was the last time you needed Gray codes in your design, and for what purpose?

R
Reply to
richard.melikson

Binary-coded counter sequences often change multiple bits on one count transition. That can (will) lead to decoding glitches, especially when counter values are compared for identity. Gray-coded count sequences always change one, and only one, bit on each transition. Comparing two such counters for identity will thus never generate a decoding glitch. That is the reason for Gray-code counters in FIFO addressing, where FULL and EMPTY is established by comparing two asynchronously clocked counters for identity. Gray code addressing is a must for asynchronous FIFOs.
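(As a rough C sketch of the usual conversions involved, my own illustration rather than anything from Peter's post: the binary counter value is converted to Gray before it crosses the clock domain, and back to binary if arithmetic is needed on the other side.)

#include <stdio.h>

/* classic binary -> Gray: each Gray bit is the XOR of two adjacent binary bits */
static unsigned bin_to_gray(unsigned b)
{
    return b ^ (b >> 1);
}

/* Gray -> binary: each binary bit is the XOR of all higher-order Gray bits */
static unsigned gray_to_bin(unsigned g)
{
    unsigned b = 0;
    for (; g; g >>= 1)
        b ^= g;
    return b;
}

int main(void)
{
    /* successive Gray values differ in exactly one bit */
    for (unsigned i = 0; i < 8; i++)
        printf("bin %u -> gray %u -> back %u\n",
               i, bin_to_gray(i), gray_to_bin(bin_to_gray(i)));
    return 0;
}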

The "one bit change per transition" advantage occurs only in counters, or under other very restrictive circumstances. I do not see an advantage in general state machines, where sequences are not as predictable.

Gray-coded counters use on average half the number of transitions, compared to binary counters. That's a dynamic power advantage.

And yes, Gray is spelled with an a, since it is named for its inventor.

Anybody else have any comments?

Peter Alfke

Reply to
Peter Alfke

Gray codes were discovered by the French engineer Emile Baudot, but named after Frank Gray, a Bell Labs researcher, who patented their general application in 1953 (at a time when, I suspect, Bell Labs folks were busy patenting many other things they didn't invent).

Jon

Reply to
Jonathan Kirwan

It depends on the state machine. Ones with complex state-to-state jumps will find it hard to follow a Gray pattern, but if the state engine is Spin-Phase-Sync in nature (i.e. simple circular states, with 'waits' and decodes), then Gray can work well.

Gray codes are also common in absolute rotary encoders. You could also argue that quadrature encoders and stepper motor drives are special 2-bit cases of Gray counters.
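(A small C sketch of that 2-bit case, my own illustration rather than anything from Jim's post: the two quadrature channels step around the 2-bit Gray cycle 00, 01, 11, 10, and the direction of rotation falls out of whether the new sample is the next or the previous entry in that cycle.)

#include <stdio.h>

static const unsigned gray_cycle[4] = { 0u, 1u, 3u, 2u };  /* 00 01 11 10 */

/* +1 for one step clockwise, -1 for counterclockwise, 0 for no change
   or an illegal two-bit jump */
static int quad_step(unsigned prev, unsigned curr)
{
    for (int i = 0; i < 4; i++) {
        if (gray_cycle[i] != prev)
            continue;
        if (curr == gray_cycle[(i + 1) & 3]) return +1;
        if (curr == gray_cycle[(i + 3) & 3]) return -1;
        return 0;
    }
    return 0;
}

int main(void)
{
    printf("%d\n", quad_step(0u, 1u));   /* 00 -> 01 : +1       */
    printf("%d\n", quad_step(1u, 0u));   /* 01 -> 00 : -1       */
    printf("%d\n", quad_step(0u, 3u));   /* 00 -> 11 : illegal  */
    return 0;
}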

-jg

Reply to
Jim Granville

One typical use of this code is rotary encoders. The discs inside them are Gray-encoded, so they are more robust, and the encoder output is less glitchy.

Zara

Reply to
Zara

We had four output pins. My software was dying somewhere around its polling loop, so at each block entry I changed one of the four bits. That guaranteed the 'scope had the right picture.

Reply to
Bill Davy

I haven't read the wiki page, so I don't have the fuller context for the phrase you quote here. But its first patenting by Frank Gray in 1953 was for shaft encoders. Emile Baudot discovered the idea much earlier, and certainly used it in 1878 in telegraphy, but Gray's patent application (1947?) said that his patented code had "as yet no recognized name." So that was that.

In the case of shaft encoders (or actually anything trying to provide angular position information), it's a big help. Things wear and get old, bits of metal flake off or get cruddy, the contacts aren't evenly loaded, someone is turning the handle by hand, let alone any manufacturing tolerances in building the unit. By Gray coding, you can avoid the confluence of where several contact conditions change at once to represent what is really just a rather minor rotational change. The Gray code only has at most nearby contact change at any angular position. So it either changes, or it doesn't. And if it changes, it still only means a small angular difference, so if it's no longer perfect it's a smaller harm to what is going on than if this had been encoded other ways.

I don't know, not having worried about this. But I assume that, in saying 'async', this refers to the arrival times of different signals not necessarily being exactly the same, and if you have some device monitoring an encoded input and taking some action on that basis, it would be better that as each bit arrives the functional change is to a "nearby" function (more gradual-appearing in some way). But this is just a random guess from me.

Now, that is actually a very interesting area. Since I have your interest.....

Let's look at the simple case of just two binary bits:

Bin  Gray
00   00
01   01
10   11
11   10

As you know already, the Gray codes only have one bit changing between adjacent entries (or wrapping back to the top from the bottom.)

However, this isn't the only two bit Gray code. There are seven others. Here's another one just as "Gray" as the one above:

Bin  Gray
00   11
01   10
10   00
11   01

Same thing, really. I just inverted the bits.

But let's imagine we want to "visualize" the way to find all possible Gray coding sequences. To do that, imagine a 2D square. Place the "00" symbol at one vertex. Place "01" and "10" at the two vertices which are directly connected to that vertex. Now place "11" at the diagonally opposite corner. Now, that places all four possibilities, but does so in a particular fashion. From this square:

00 ----- 01
|         |
|         |
|         |
10 ----- 11

it's easy to see that to produce all eight possible Gray code sequences for two binary bits, you just select any starting vertex and then go either clockwise or counterclockwise around the square, listing the visited corners out, until you get back to the original corner. Since you have two directions to travel in and four possible starting points, you should see that there are exactly eight possible Gray code arrangements for two bits -- each equally valid.

For three bits, it gets a little more interesting. But by extension, it's just a cube we need. Now you put "000" on one of the corners. Then place "001", "010" and "100" on the vertices which follow directly along one of the three edges leaving that corner. You can finish up the rest by keeping in mind that moving along an edge can only allow one bit to change in the vertex you then arrive at, from the one you left to get there. The cube might look like:

    100 ------ 101
    / |        / |
   /  |       /  |
  /   |      /   |
000 ------ 001   |
 |    |     |    |
 |    |     |    |
 |   110 ---|-- 111
 |   /      |   /
 |  /       |  /
 | /        | /
010 ------ 011

Now if you place your mental finger at some starting vertex and then imagine trying to trace out all possible paths that touch every other vertex exactly once, you will find that the number of possible paths is 3*2*(1*2+1*1), or 18. And that's just for one vertex selected at random. Since there are 8 of them, the total is then 8*18, or 144 possible 3-bit Gray code sequences.

All these different paths are called Hamiltonian "walks" or "cycles." In general, any particular N-bit Gray code sequence corresponds to one of the many possible Hamiltonian cycles on the corresponding N-dimensional hypercube. If you have any object with (m) vertices in N-d space, the general problem of devising a path that visits every vertex once and only once is called the Hamiltonian circuit problem (after William Rowan Hamilton, who studied the problem on the vertices of a dodecahedron.) Of course, Gray codes are concerned with a special case, that of N-dimensional hypercubes, and not things like dodecahedrons. But the idea is similar and Gray codes are Hamiltonian cycles explicitly dealing with hypercubes.
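(A brute-force C sketch, mine rather than Jon's, that reproduces those counts by walking the n-cube. It uses the same counting convention as above: every Hamiltonian path is counted, with each starting vertex and each direction counted separately, so it prints 8 for 2 bits, 144 for 3 bits, and 91392 for 4 bits.)

#include <stdio.h>

static int n;              /* number of bits          */
static int nverts;         /* 2^n vertices            */
static long count;         /* Hamiltonian paths found */
static int visited[32];

static void extend(int v, int depth)
{
    visited[v] = 1;
    if (depth == nverts - 1) {
        count++;                       /* visited every vertex exactly once */
    } else {
        /* try every neighbour that differs in exactly one bit */
        for (int b = 0; b < n; b++) {
            int w = v ^ (1 << b);
            if (!visited[w])
                extend(w, depth + 1);
        }
    }
    visited[v] = 0;                    /* backtrack */
}

int main(void)
{
    for (n = 2; n <= 4; n++) {
        nverts = 1 << n;
        count = 0;
        for (int start = 0; start < nverts; start++)
            extend(start, 0);
        printf("%d bits: %ld Gray code sequences\n", n, count);
    }
    return 0;
}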

Since you asked about error correction and digital coding, this then brings me to the idea of a Hamming distance of "1." A Hamming distance of 1 would be the distance between the symbol "000" and symbol "001" or "010" or "100" on this cube. In other words, only requiring the traversal of one edge on a Gray coded n-dim hypercube.
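(In code terms, my own aside rather than Jon's: the Hamming distance is just the number of set bits in the XOR of the two words.)

#include <stdio.h>

static int hamming_distance(unsigned a, unsigned b)
{
    unsigned diff = a ^ b;   /* 1 wherever the two words disagree */
    int d = 0;
    for (; diff; diff >>= 1)
        d += diff & 1u;
    return d;
}

int main(void)
{
    printf("%d\n", hamming_distance(0u, 1u));   /* 000 vs 001 -> 1 */
    printf("%d\n", hamming_distance(0u, 7u));   /* 000 vs 111 -> 3 */
    return 0;
}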

I was going to go on a bit, but I decided to see if there was a Wiki on the subject of Hamming distance and there was! In fact, it includes the darned cube I cobbled up above (kind of.) So that is even better. See:

formatting link

Imagine the Hamming distance as defined there. For error detection, one possibility (when single-bit errors are pretty likely) is to have a "space" (one of these vast N-dim hypercube things) that is larger than your need of valid symbols. For example, you might need to represent 0-999 and decide to use 16 bits for this, so your hypercube is a 16-dimensional monstrosity, but basically a "Gray coded" hypercube. What you want then is to distribute your valid symbols throughout the vertices in such a way as to maximize the Hamming distance between each of them. That way, it's less likely that a single-bit error will land on another valid symbol. In fact, if the Hamming distance is at least 2 between any two of your valid symbols, a single-bit error cannot "reach" a valid symbol, so you will catch the error.
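(A toy C example of that last point, my own sketch: tack an even-parity bit onto each symbol and the minimum Hamming distance between valid code words becomes 2, so any single-bit error lands on an invalid word and gets caught.)

#include <stdio.h>

static unsigned parity(unsigned w)        /* 1 if an odd number of bits are set */
{
    unsigned p = 0;
    for (; w; w >>= 1)
        p ^= w & 1u;
    return p;
}

static unsigned add_parity(unsigned data) /* 8 data bits, even-parity bit on top */
{
    return (parity(data) << 8) | data;
}

static int looks_valid(unsigned word)     /* overall parity must come out even */
{
    return parity(word) == 0;
}

int main(void)
{
    unsigned sent = add_parity(0x5A);
    unsigned received = sent ^ (1u << 3);        /* single-bit error in transit */
    printf("%s\n", looks_valid(received) ? "accepted (bad!)" : "rejected (good)");
    return 0;
}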

There are many other ways this gets used, by the way. And it isn't all that hard to "see" this.

You may have heard of Hamming error correction codes. It's the idea behind ECC memory, for example, where it not only detects, but also corrects, all single-bit errors -- and it may detect many multi-bit errors. Anyway, let's take a very simple example of how they work.

The Hamming [4,7] code.

Remember the Venn diagram?

formatting link

The one with three overlapping circles, which overlap enough so that there is a small region in the center which is common to all three circles? (Check out the Wiki page, there above.) But here is my dumb ASCII of it:

[ASCII sketch of three overlapping circles 'a', 'c', and 'e': region 1 is the part of 'a' alone, region 2 the part of 'c' alone, region 3 the part of 'e' alone; regions 6 (a and c), 5 (c and e), and 4 (a and e) are the pairwise overlaps; region 7 is the center common to all three circles.]

The overlapping circles are lettered with 'a', 'c', and 'e'. In their overlapping arrangement, they create 7 regions. I've numbered those from 1 to 7, as you can see.

Now, we assign one bit of a seven-bit word to each of these regions. The data bits we want to send (or receive) will be placed into the four regions numbered 4, 5, 6, and 7. Four data bits, total. The error correction bits will be assigned to regions 1, 2, and 3. Those values will be calculated from the data bits.

How to calculate them? First place your data bits into the four regions, 4, 5, 6, and 7. Then consider the sum of those data bits which are in circle 'a' (regions 4, 6, and 7) and force the bit in region 1 to be a parity bit (even or odd, your choice.) So the bit placed into region 1 is determined. Now do the same for the bits in circle 'c' (regions 5, 6, and 7) and force the bit in region 2 to be a parity bit for those, as well. Then do the same for circle 'e' (regions 4, 5, and 7) and compute that parity for region 3. All 7 bits are now determined.

Keep in mind that regions 1, 2, and 3 each hold only a single parity bit, so the sums are taken modulo 2: if you get a carry in adding up the regions, just toss it away.

Now, you send these seven bits to a receiver somewhere. Imagine there is a 1-bit error in any of these.

If the one-bit error occurs in region 7 (a data bit), then all three parity checks will fail. You see that? All three included region 7 in calculating their parity value, so if region 7 gets hit and has an error, then all three of the parity bits will be wrong. So if the receiver sees all three parity values are wrong, it knows that the bit in region 7 was incorrectly received.

Imagine an error in the bit for region 4 happens. Now, the parity checks for regions 1 and 3 go wrong. So the receiver can tell that region 4 must have been received in error.

Imagine that an error is in the bit for region 2. (That's a parity bit and not data, at all.) In this case, both of the other parity regions, 1 and 3, are still correct. So that means it was just a parity bit that went wrong. The table looks like:

Bit error region   Parity 1   Parity 2   Parity 3
none               ok         ok         ok
1                  err        ok         ok
2                  ok         err        ok
3                  ok         ok         err
4                  err        ok         err
5                  ok         err        err
6                  err        err        ok
7                  err        err        err

As you can see, the receiver has enough information to detect AND correct any single-bit error that may take place.

This idea can easily be expanded into more Hamming codes, such as the [11,15], [26,31], or [57,63] codes. For example, 57 data bits can be sent using 63 bits total (only 6 correction/detection bits.)
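(Here is a compact C sketch of the [4,7] scheme just described; the function names and layout are my own, not from the post. Region r lives in bit r-1 of the 7-bit word, regions 4-7 hold the data, regions 1-3 hold the even-parity bits, and the syndrome-to-region lookup is exactly the table above.)

#include <stdio.h>

/* Circle 'a' covers regions 1,4,6,7; 'c' covers 2,5,6,7; 'e' covers 3,4,5,7. */
#define BIT(w, r)  (((w) >> ((r) - 1)) & 1u)

static unsigned encode(unsigned data)              /* 4 data bits -> 7-bit word */
{
    unsigned w  = (data & 0xFu) << 3;              /* data into regions 4..7    */
    unsigned p1 = BIT(w, 4) ^ BIT(w, 6) ^ BIT(w, 7);   /* even parity, circle a */
    unsigned p2 = BIT(w, 5) ^ BIT(w, 6) ^ BIT(w, 7);   /* circle c              */
    unsigned p3 = BIT(w, 4) ^ BIT(w, 5) ^ BIT(w, 7);   /* circle e              */
    return w | p1 | (p2 << 1) | (p3 << 2);
}

static unsigned correct(unsigned w)                /* repair any single-bit error */
{
    unsigned f1 = BIT(w, 1) ^ BIT(w, 4) ^ BIT(w, 6) ^ BIT(w, 7);  /* 1 = check fails */
    unsigned f2 = BIT(w, 2) ^ BIT(w, 5) ^ BIT(w, 6) ^ BIT(w, 7);
    unsigned f3 = BIT(w, 3) ^ BIT(w, 4) ^ BIT(w, 5) ^ BIT(w, 7);

    /* map the pattern of failed checks back to the damaged region,
       exactly as in the table above */
    static const int region[8] = { 0, 1, 2, 6, 3, 4, 5, 7 };
    int r = region[f1 | (f2 << 1) | (f3 << 2)];
    return r ? w ^ (1u << (r - 1)) : w;
}

int main(void)
{
    int ok = 1;
    for (unsigned d = 0; d < 16; d++) {
        unsigned sent = encode(d);
        for (int bit = 0; bit < 7; bit++)
            if (correct(sent ^ (1u << bit)) != sent)   /* inject one error */
                ok = 0;
    }
    printf(ok ? "all single-bit errors corrected\n" : "something is wrong\n");
    return 0;
}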

There's a great chapter on this in one of Martin Gardner's books. He writes: "A Gray code for three-digit binary numbers has 2^3 = 8 numbers that can be placed on the corners of a cube. Adjacent corners have binary triplets that differ in only one place. Any continuous path that visits every corner once only generates a Gray code. For example, the path shown by the dashed line starting at 000 produces 000, 001, 011, 010, 110, 111, 101, 100. [Note: his figure of the cube has these numbers on the corners.] This is a cyclic code because the path can return from 100 to 000 in one step. Such paths are called Hamiltonian paths after the Irish mathematician William Rowan Hamilton. As the reader has probably guessed, binary Gray codes correspond to Hamiltonian paths on the cubes of n dimensions."

...and he goes on to say later...

"Gray codes for other bases correspond to Hamiltonian paths on more complicated n-dimensional graphs. The number of Gray codes for any base increases explosively as the number of digits increases. The number of Gray codes, even for the binary system, is known for four or fewer digits."

Gardner had written this in 1976 in his *Scientific American* "Mathematical Games" column. For publication in this book he added several pages of further developments. He also provides an extensive bibliography on the Gray codes and related ideas. As of 1986 the 5-d cube Hamiltonian path numbers were known, and thus the number of 5-bit Gray codes.

A table for these is as follows:

 N   Vertices    Distinct paths    Total
---------------------------------------------------
 2   2^2 (4)     2                 8
 3   2^3 (8)     18                144
 4   2^4 (16)    5712              91392
 5   2^5 (32)    5,859,364,320     187,499,658,240

Gardner cites a reference for this data:

"At the 1980 IEEE International Conference on Circuits and Computers, at Port Chester, N.Y., a paper was presented titled 'Gray Codes: Improved Upper bounds and Statistical Estimates for n > 4 bits,' and published in 1983. The authors were Jerry Silverman, Virgil E. Vickers and John L. Sampson, electrical engineers at the Rome Air Development Center, Hanscom Air Force Base, Chicopee, Mass."

There are many other closely related studies, such as "sphere packing" in N dimensions. You might want to look up a paper by Marcel Golay, from 1949. His coding is particularly useful for noisy environments and places a smaller set of valid Golay-coded symbols in a much larger symbol space, with the Hamming distance between them relatively uniform and as large as possible.

I believe that Hamming and Shannon worked close by each other for some years. You might also want to read C. E. Shannon's "A Mathematical Theory of Communication," The Bell System Technical Journal, July and October of 1948. It's really fairly easy to read and gives you a nifty segue as well into Boltzmann's discussion of the relationship of temperature and energy and the concept of entropy. (Another guy, Weaver, later pressed Shannon to reach for a wider audience and together they did a book in 1949.)

That's a big question. Start with the above and see if that helps.

Jon

Reply to
Jonathan Kirwan

Nah, I type fast. ;) The words are mine except what I quoted from Gardner. A small portion of it does come from an earlier post I'd written, though.

Jon

Reply to
Jonathan Kirwan

I hope you copied-and-pasted a lot of that post rather than writing it all yourself!

I hadn't thought about using Gray code for communication, but it's a really smart idea. Suppose you want to send 10-bit data within a 16-bit message. All you need to do to send data "x" is to calculate:

y = toGray16(x)

Reply to
David Brown

I meant, "The Gray code only has at most _one_ nearby contact change at any angular position."

Jon

Reply to
Jonathan Kirwan

This technique is useful if you access RAM/ROM linearly as a FIFO, say for example a video frame buffer. Using Gray code instead of straight low-to-high addressing allows you to stream the data into or out of memory without using a latch. This saves one clock cycle per unit of data, but considering that a data read/write with a latch is two clock cycles (1. change address, 2. latch data), it achieves a 100% improvement in throughput.

Reply to
slebetman

Hi Peter, thanks for replying. My comments below...

What do you mean by "compared for identity" - do you mean equality of two multi-bit registers?

Also, what kind of glitches are you referring to here? Logic hazards?

Why can't the values just be synchronized (double FF) between the domains with a handshake, solving the problem of glitches in the first place?

So why do all synthesis tools propose "Gray code" as one of the encodings of state machines?

R
Reply to
richard.melikson

What do you mean by "can work well"? Won't normal encoding work just as well? I can see an advantage in one-hot encoding, but not Gray.

Could you elaborate on this?

Thanks for the information, btw

R
Reply to
richard.melikson

Gray is useful if you need to decode the states, and also if the state-next terms are async, as you have just ONE dependent FF active, so you cannot get a false state due to aperture effects. With one-hot, you must ensure the trigger signals are fully synchronized.

On what - the absolute encoder use, or the quadrature == 2-bit Gray? Just draw a 2-bit Gray counter, then make it up/down, and you will see the result.

-jg

Reply to
Jim Granville

Jonathan,

Wow - an amazing post, very informative. Personally I'm familiar with most of this material, but it will definitely serve as a great reference for searches on the subject. Google is great at indexing such information reservoirs.

Anyway, although the topic of error correction codes using Hamming distances is enlightening, I don't see how Gray codes help. Maybe it's just morning dense-ness, but it appears that these topics are tangential.

I didn't mean it as a big question. It's quite simple, really - when was the last time *you*, Jonathan (and other readers interested in sharing), used Gray codes in digital design, either in coding logic or software?

TIA R

Reply to
richard.melikson

yes,

They could be, but that would add latencies to the FIFO that are best avoided. I.e., why compromise the operation when a Gray code design will work better?

-jg

Reply to
Jim Granville

Could you elaborate on this, please - how does it avoid a latch, and where does the latch come from anyway in straight addressing?

R
Reply to
richard.melikson

2005, for an EEPROM counter.

Jon

Reply to
Jonathan Kirwan

It's more complicated than that. Running a multi-bit value through a synchronizer doesn't work if more than one bit changes. Some of the bits might get there in time when others don't.

If the data changes from 000 to 111, you might see 001, 010, 100, 011, 101, or 110 for a cycle before it settles. (That's assuming it was stable at 000 for several cycles and then stays at 111 for several cycles.)
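(A little C sketch of exactly that effect, my own illustration rather than Hal's: if each bit goes through its own synchronizer, any mix of old and new bits can be sampled for one cycle. For the binary step 000 -> 111 every 3-bit value is a possible sample; for a Gray-coded step like 010 -> 110 only the old or the new value can ever appear.)

#include <stdio.h>

static void print3(unsigned v)                     /* print a value as 3 bits */
{
    for (int b = 2; b >= 0; b--)
        putchar(((v >> b) & 1u) ? '1' : '0');
}

static void show_possible_samples(unsigned oldv, unsigned newv)
{
    unsigned changed = oldv ^ newv;                /* bits still "in flight" */
    print3(oldv); printf(" -> "); print3(newv); printf(" : possible samples:");
    /* every subset of the changing bits may already have arrived */
    for (unsigned sub = changed; ; sub = (sub - 1) & changed) {
        putchar(' ');
        print3(oldv ^ sub);
        if (sub == 0)
            break;
    }
    putchar('\n');
}

int main(void)
{
    show_possible_samples(0u, 7u);   /* binary 000 -> 111: anything can appear */
    show_possible_samples(2u, 6u);   /* Gray step 010 -> 110: only old or new  */
    return 0;
}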
--
These are my opinions, not necessarily my employer's.  I hate spam.
Reply to
Hal Murray

Ah - this is a most important point, I think. So, you imply that if I want to share a counter between two clock domains, I can do a simple double-FF synchronization on it if it's encoded in Gray, whilst for a normal counter I need a handshake protocol with a synchronized control signal? I think I understand how this can make things faster.

R
Reply to
richard.melikson
