FEC - Forward Error Correction on a PIC?

Would a PIC be a suitable device for converting asynchronous RS232 into an FEC-encoded bitstream, and at the other end, taking the dodgy bitstream, error-correcting it and converting it back to RS232?

Thanks in advance

Marky C

Reply to
MarkyC

That depends largely on the data rate, the maximum error burst length you want to correct, and how much (if any) interleave you want.

It's certainly possible to code some FEC algorithms for the PIC. At one time I wrote a Reed-Solomon encoder, but I haven't written a decoder. (The code belongs to a client so don't bother asking me for it.)

If you're only expecting an occasional single-bit error, you could use a simple Hamming code, but if you do that on individual bytes it will increase the necessary channel data rate by 50%.
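To make the 50% figure concrete, here is a rough sketch (my own construction, not from any PIC library) of a Hamming(12,8) encoder/decoder: 8 data bits plus 4 check bits per byte, and any single flipped bit in the 12-bit code word can be located from the syndrome and corrected.

#include <stdint.h>

/* Sketch only: Hamming(12,8) for one byte.  Code-word bit positions are
   1..12; parity bits sit at the power-of-two positions 1,2,4,8 and the
   data bits fill positions 3,5,6,7,9,10,11,12. */

static uint16_t hamming12_encode(uint8_t data)
{
    static const uint8_t data_pos[8] = {3, 5, 6, 7, 9, 10, 11, 12};
    uint16_t code = 0;
    int i, p;

    /* Place the 8 data bits in their code-word positions. */
    for (i = 0; i < 8; i++)
        if (data & (1u << i))
            code |= 1u << (data_pos[i] - 1);

    /* Each parity bit at position p covers every position whose index has
       bit p set; set it so the covered group has even parity. */
    for (p = 1; p <= 8; p <<= 1) {
        int parity = 0, pos;
        for (pos = 1; pos <= 12; pos++)
            if ((pos & p) && (code & (1u << (pos - 1))))
                parity ^= 1;
        if (parity)
            code |= 1u << (p - 1);
    }
    return code;                        /* 12-bit code word to transmit */
}

/* Returns the corrected data byte; flips at most one bit. */
static uint8_t hamming12_decode(uint16_t code)
{
    static const uint8_t data_pos[8] = {3, 5, 6, 7, 9, 10, 11, 12};
    int pos, i, syndrome = 0;
    uint8_t data = 0;

    /* The syndrome is the XOR of the indices of all set bits; for a valid
       code word it is zero, otherwise it names the flipped position. */
    for (pos = 1; pos <= 12; pos++)
        if (code & (1u << (pos - 1)))
            syndrome ^= pos;

    if (syndrome >= 1 && syndrome <= 12)
        code ^= 1u << (syndrome - 1);   /* correct the single-bit error */

    for (i = 0; i < 8; i++)
        if (code & (1u << (data_pos[i] - 1)))
            data |= 1u << i;
    return data;
}

So hamming12_encode() turns each byte into 12 channel bits (the 50% overhead), and hamming12_decode() gets the byte back even if any one of the 12 bits was flipped on the way.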

It's also possible to correct single-bit errors in blocks of data using a normal CRC. For instance, if you have a block of less than 8 Kbytes, you can correct a single bit error using CRC-16, but you will not necessarily be able to detect multi-bit errors.

Reply to
Eric Smith

Thanks for the reply.

My data rate is low at 1200 bps, and the majority of the errors are single bit. I come from the software side of life and therefore the PIC's capabilities are the mystery. My data originates as RS232 8N1, which is hugely susceptible to cataclysmic failure when a single bit error hits one of the start or stop bits. Synchronisation at the rx UART is lost for 1, 2 or 3 chars depending on how it feels.

So that's my problem: a single bit error is causing me to lose 16 bits or more. I wondered if a PIC could be used to take the RS232, convert it to a bitstream, tx it and correct the errors before presenting it to the UART.

I feel that correcting the odd bit is a lot easier to handle than chunks of my data going missing.

Does this still sound like a problem for a PIC? Is it possible for the rxing PIC to synchronise its clock in line with the incoming bitstream so that individual bits can be gleaned from the stream?

Thanks again Marky C

Reply to
MarkyC

... snip ...

If you are having problems with start/stop bits, I would think that sending with two stop bits, and receiving with one, would ensure good syncs and immunize against most data rate mismatches. If problems still show up I would examine the UART start bit detection routines. After that you might be able to use 8 bits + parity, together with a block parity byte, for a simple one bit error correction method.
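A minimal sketch of that per-byte parity plus block parity idea, assuming even parity on each byte and an XOR (longitudinal) parity byte over the block; the names and framing below are mine, not Chuck's. A single flipped bit shows up as one bad byte parity (the row) and one bad bit in the block parity (the column), which together pinpoint it.

#include <stdint.h>
#include <stddef.h>

/* Even parity of a byte: 1 if the byte has an odd number of ones, which
   is the value of the transmitted (even) parity bit. */
static uint8_t even_parity(uint8_t b)
{
    b ^= b >> 4;
    b ^= b >> 2;
    b ^= b >> 1;
    return b & 1u;
}

/* Correct at most one flipped bit in place.
   data[]      : received block of 'len' bytes
   rx_parity[] : received per-byte parity bits (0/1)
   col_parity  : received block parity byte (XOR of all data bytes)
   Returns 0 if the data is clean or was corrected, -1 if uncorrectable. */
static int correct_single_bit(uint8_t *data, size_t len,
                              const uint8_t *rx_parity, uint8_t col_parity)
{
    uint8_t col = col_parity;
    size_t i, bad_row = 0, bad_rows = 0;

    for (i = 0; i < len; i++) {
        col ^= data[i];                           /* recompute column parity  */
        if (even_parity(data[i]) != rx_parity[i]) {
            bad_row = i;                          /* this byte's parity fails */
            bad_rows++;
        }
    }

    if (bad_rows == 0 && col == 0)
        return 0;                                 /* nothing wrong            */
    if (bad_rows == 1 && col != 0 && (col & (col - 1u)) == 0) {
        data[bad_row] ^= col;                     /* bad row + bad column     */
        return 0;                                 /*   = the flipped data bit */
    }
    if (bad_rows == 1 && col == 0)
        return 0;                                 /* a parity bit was hit     */
    if (bad_rows == 0)
        return 0;                                 /* the parity byte was hit  */
    return -1;                                    /* more than one error      */
}

Note that a second flipped bit in the same block can only be detected, not repaired, so blocks should be kept short on a really noisy link.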

At 1200 bps you must have an incredibly noisy link to have serious difficulties.

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
   Available for consulting/temporary embedded and systems.
     USE worldnet address!
Reply to
CBFalconer

A CRC doesn't correct bit errors, it merely detects them. A CRC error then causes a retransmit of the packet.

Rene

--
Ing.Buero R.Tschaggelar - http://www.ibrtses.com
& commercial newsgroups - http://www.talkto.net
Reply to
Rene Tschaggelar

Snip

The other possibility is to look for a FEC that can handle a burst error of 24 bits.

Wim

Reply to
Wim Ton

With sufficient computational power (or low transmission speeds), if the CRC does not match, you could always try to invert a single bit in the message and recalculate the CRC. If the CRC still does not match, invert some other bit and repeat the CRC calculation, and so on. If the message size is 1 kB, the CRC has to be calculated about 8000 times.

If the CRC matches with only one permutation, you most likely have the correct combination. If no combination matches, there is more than a single bit error. However, if more than one combination matches, you have the problem of selecting the most likely one.

This approach is realistic if you have a quite low bit error rate (BER) and the cost of retransmission is huge (especially in space communication, where the delay can be hours).
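For what it's worth, here is a crude sketch of that brute-force search. The names are mine, and I'm assuming a plain CRC-16-CCITT (polynomial 0x1021, init 0, no reflection), which may not match whatever CRC the link actually uses.

#include <stdint.h>
#include <stddef.h>

static uint16_t crc16(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0;
    while (len--) {
        crc ^= (uint16_t)(*buf++) << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000u) ? (uint16_t)((crc << 1) ^ 0x1021u)
                                  : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Flip each bit in turn, recompute the CRC, and keep the flip only if
   exactly one candidate matches the received CRC.
   Returns the number of matches: 1 = corrected in place,
   0 = no single-bit fix, >1 = ambiguous (left untouched). */
static int brute_force_fix(uint8_t *buf, size_t len, uint16_t rx_crc)
{
    size_t pos, hit_pos = 0;
    int hits = 0;

    for (pos = 0; pos < len * 8u; pos++) {
        buf[pos / 8] ^= (uint8_t)(0x80u >> (pos % 8));   /* trial flip */
        if (crc16(buf, len) == rx_crc) {
            hits++;
            hit_pos = pos;
        }
        buf[pos / 8] ^= (uint8_t)(0x80u >> (pos % 8));   /* undo       */
    }
    if (hits == 1)
        buf[hit_pos / 8] ^= (uint8_t)(0x80u >> (hit_pos % 8));
    return hits;
}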

Paul

Reply to
Paul Keinanen

I know Eric well enough to know it would be unlikely for him to make this kind of error in his post. A quick Google search on "CRC error correction" turned this up as the second hit:

formatting link

-Zonn

--------------------------------------------------------
Zonn Moore
Zektor, LLC

formatting link

Remove the ".AOL" from the email address to reply.

Reply to
Zonn

No problem. You can certainly implement some relatively simple FEC on a PIC. Take a look at our web site for examples.

Vladimir Vassilevsky

DSP and Mixed Signal Design Consultant

formatting link

Reply to
Vladimir Vassilevsky

Rene Tschaggelar writes:

A CRC *can* be used to correct errors. It just normally isn't done that way.

That's typical, but it's certainly not required by anything in the fundamental nature of the CRC.

Reply to
Eric Smith

Hi Rene! You should read textbooks before posting nonsense! Sorry, but what you said is almost completely false.

- Henry


Reply to
Henry

I don't doubt it is possible.

How many bit faults would a larger hash, like SHA-1 or MD5, be able to 'correct'?

Anyway, why would you want to, when there are much better error-correcting codes that need no brute-force approach?

Wumpus

Reply to
Wumpus

There are *much* more efficient ways to do the single-bit correction Paul describes. Assuming that there is only a single-bit error, you can XOR the received CRC with the locally computed one to produce a syndrome value. The syndrome uniquely identifies the bit position in error. One approach is to have a table mapping the XOR result to the offset and bit number of the error, but this burns a lot of memory in the case of CRC-16.

There are slightly more complex approaches that do involve iteratively searching for the correction, but do not require recalculating the CRC over the entire block for each iteration. The basic idea is that you can trivially compute the CRC of a block with the first bit of the first byte set, and all other bits clear. If that doesn't match the syndrome, you can run one iteration of the CRC algorithm to get the CRC of a block that would have only the second bit of the first byte set. And so on. At most, this requires as much computation as doing a bit-serial CRC calculation of the data block.

Note that these approaches are also sometimes used with other cyclic error correction codes; there's nothing too special about CRC in this regard. But by exploiting the mathematical properties of a particular error correction code, it is often possible to come up with more efficient correction algorithms.
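Here is a rough sketch of that iterative approach, assuming a plain CRC-16-CCITT (poly 0x1021, init 0, no reflection) with the CRC received alongside the data; the function names are mine. This variant starts from the last data bit and walks toward the first, which avoids needing the block length to seed the first signature, but it is the same idea: one CRC iteration per candidate bit instead of a full recalculation.

#include <stdint.h>
#include <stddef.h>

#define POLY 0x1021u                     /* CRC-16-CCITT generator */

static uint16_t crc16(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0;
    while (len--) {
        crc ^= (uint16_t)(*buf++) << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000u) ? (uint16_t)((crc << 1) ^ POLY)
                                  : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Try to correct a single-bit error in buf[] given the CRC that was
   received with it.  Returns the corrected bit index (0 = MSB of buf[0])
   or -1 if no single-bit flip explains the mismatch. */
static long crc16_fix_single_bit(uint8_t *buf, size_t len, uint16_t rx_crc)
{
    static const uint8_t last_bit[1] = { 0x01 };
    uint16_t syndrome = crc16(buf, len) ^ rx_crc;
    uint16_t sig = crc16(last_bit, 1);   /* signature of an error in the
                                            very last data bit           */
    size_t pos, nbits = len * 8u;

    if (syndrome == 0)
        return -1;                       /* nothing to fix */

    /* Walk from the last bit toward the first.  Each step multiplies the
       error signature by x modulo the generator polynomial, i.e. one
       iteration of the bit-serial CRC engine with a zero input bit. */
    for (pos = nbits; pos-- > 0; ) {
        if (sig == syndrome) {
            buf[pos / 8] ^= (uint8_t)(0x80u >> (pos % 8));
            return (long)pos;
        }
        sig = (sig & 0x8000u) ? (uint16_t)((sig << 1) ^ POLY)
                              : (uint16_t)(sig << 1);
    }
    return -1;                           /* more than one bit in error */
}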

Eric

Reply to
Eric Smith

I am sorry to complain about a fine showing of replies, but my main worry is not the implementation of the FEC, but rather how you receive a bitstream. How do you synchronise the receiver so that its bit centres are aligned with the tx station? Oversampling seems a way to go: maybe oversample by 8 times and use those 8 samples to work out where the leading edges of the bits are. This could be constantly fed as a bias to the timer so that the rx & tx clocks stay aligned. This, I guess, would also allow for drift between the two frequencies.

Thanks again

Marky C

Reply to
MarkyC

... snip ...

The usual method is to initially sample at some submultiple of the bit period, ideally odd, but usually even. Say the multiple is 7. You can detect the start bit either by an interrupt on the bit, or by sampling on clock intervals. You now wait 3 units and resample to confirm the start bit. From there on you resample at 7-unit intervals, putting the samples at the nominal bit centers. The stop bit is detected at the center point, i.e. 1/2 bit time, and the system resets. If the stop bit is missing you have either a framing error or a break, depending on the previous bits, and must wait for a true stop before continuing.

The most common multiple used is 16, which is why UART chips normally require baud clocks at 16x.
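For what it's worth, here is a rough sketch of that sampling schedule as a tiny state machine driven from a timer interrupt. I've assumed 8x oversampling (Marky's own figure) rather than 7x or 16x, and all the names are placeholders.

#include <stdint.h>

#define RX_NONE    (-1)  /* no complete character yet */
#define RX_FRAMERR (-2)  /* stop bit missing          */

enum { S_IDLE, S_START, S_DATA, S_STOP };

static struct {
    uint8_t state;
    uint8_t ticks;   /* timer ticks until the next sample */
    uint8_t bit;     /* data bit counter, 0..7            */
    uint8_t shift;   /* character being assembled         */
} rx;

/* Call from a timer interrupt running at 8x the bit rate with the current
   line level (1 = idle/mark, 0 = space).  Returns the received character,
   RX_NONE, or RX_FRAMERR. */
int rx_tick(uint8_t line)
{
    switch (rx.state) {
    case S_IDLE:
        if (line == 0) {          /* leading edge of a start bit       */
            rx.ticks = 4;         /* half a bit later: confirm it      */
            rx.state = S_START;
        }
        break;

    case S_START:
        if (--rx.ticks)
            break;
        if (line != 0) {          /* it was just a glitch              */
            rx.state = S_IDLE;
            break;
        }
        rx.ticks = 8;             /* next sample: centre of data bit 0 */
        rx.bit = 0;
        rx.shift = 0;
        rx.state = S_DATA;
        break;

    case S_DATA:
        if (--rx.ticks)
            break;
        rx.shift >>= 1;
        if (line)
            rx.shift |= 0x80u;    /* async data arrives LSB first      */
        rx.ticks = 8;
        if (++rx.bit == 8)
            rx.state = S_STOP;
        break;

    case S_STOP:
        if (--rx.ticks)
            break;
        rx.state = S_IDLE;
        return line ? rx.shift : RX_FRAMERR;
    }
    return RX_NONE;
}

At 1200 bps the tick rate is only 9600 Hz, which should be comfortable even on a slow PIC.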

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
   Available for consulting/temporary embedded and systems.
     USE worldnet address!
Reply to
CBFalconer

Hi

First have a look at the hardware setup. Perhaps a simple switch to RS422 with twisted pair wiring and balanced signalling will solve the signal integrity problems, if you're not already using that. Otherwise I suggest you take a look at short haul modems (Google for it). Even if your product cost or other reasons may not allow you to buy modems off the shelf, the signalling techniques used are probably valuable. Use a simple FSK or PSK encoding together with a short convolutional (trellis) code and a Viterbi decoder. This is definitely possible to implement on a small PIC or AVR for both Tx and Rx.

/Pär

Reply to
Par

Async receiving is of course an essential task in this app, but it's not (too) complicated. The receiver 'clock' doesn't need to start until it sees the leading edge of the start bit. You shouldn't need to detect the edges of the other bits (depending on the data, those edges may not be there anyway). If the last data and stop bits don't happen in the 'window' you're looking at, then the incoming data is clocked at a frequency that's out of spec or at the wrong baud rate. The first data bit is sampled at 1.5 bit times after the start bit's leading edge, the second bit at 2.5 bit times, etc.

For higher reliability, using 8 samples per bit time, you can sample at three places in the middle of the bit time and decide that the received bit is whatever the majority of the samples say. I recall reading the data sheet for an old hardware UART that receives this way. Here's a diagram of the idea, where | is a bit edge and * is a sample time:

 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 ...
|      * * *    |      * * *      ...

The async transmitter should operate on its own independent 'clock' that gives the proper baud rate. It's much easier than the receiver.
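The vote itself is tiny; a sketch, assuming the three mid-bit samples (e.g. ticks 3, 4 and 5 of an 8x-oversampled bit) are already in hand:

#include <stdint.h>

/* Majority vote over the three mid-bit samples (each 0 or 1):
   the result is whichever value at least two samples agree on. */
static uint8_t majority3(uint8_t s0, uint8_t s1, uint8_t s2)
{
    return (uint8_t)((s0 & s1) | (s0 & s2) | (s1 & s2));
}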

Reply to
Ben Bradley

Well, thanks for all your replies. I now feel that a PIC is up to the job; I will have to get off my behind and write some code.

Thanks,

Marky C

Reply to
MarkyC

In the past I did precisely that with an AT90S2313 AVR.

Reply to
jetmarc

The problem is not with FEC, it's the UART synchronization. Say the 2nd byte of a 100 byte packet is 0xFF, and the start bit is missed due to a single bit error. You will miss the whole byte, and thus all remaining 98 bytes will be mis-aligned by 1 byte. These are 784 error bits, caused by one single bit error. You might even take the next packet's first byte for the last one of the earlier packet, and carry the error over to the next packet as well.

The only reliable way around this problem is to not use a UART. Use a synchronization frame instead (e.g. a pseudorandom "magic" token, possibly with a 01010101 preamble for the analog->digital threshold comparator). Compare the incoming data with the token using XOR, and accept the same number of bit errors per incoming bit as the FEC does. Then there remains no weak point; every bit is equally vulnerable (both in the "start" and "data" sections of the telegram). Either the complete packet goes through, or nothing.
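A minimal sketch of that bit-error-tolerant sync search; the 32-bit token, the threshold and the names are all placeholders of mine, not Marc's:

#include <stdint.h>

#define SYNC_WORD        0xF3A5C96Eu   /* hypothetical pseudorandom token */
#define MAX_SYNC_ERRORS  2             /* match the FEC's tolerance       */

static uint32_t sync_shift;

static int popcount32(uint32_t v)
{
    int n = 0;
    while (v) {
        v &= v - 1;                    /* clear the lowest set bit */
        n++;
    }
    return n;
}

/* Call once per received bit; returns 1 when the shift register differs
   from the magic token in at most MAX_SYNC_ERRORS bit positions, i.e.
   frame sync has been found. */
int sync_feed_bit(uint8_t bit)
{
    sync_shift = (sync_shift << 1) | (bit & 1u);
    return popcount32(sync_shift ^ SYNC_WORD) <= MAX_SYNC_ERRORS;
}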

Marc

Reply to
jetmarc
