Error detection rate with CRC-16 CCITT

Hi

We're using the 68302 micro with the DDCMP serial protocol over two-wire RS-485. According to the user manual, this uses CRC16-CCITT - X**16 + X**12 + X**5 + 1.

Does anyone have any idea what the chance of getting an undetected error is with this protocol? I know all single-bit errors are detected. Suppose we run a point-to-point connection slightly faster than it's really capable of, and 10% of messages arrive with more than a single-bit error. What percentage of these will go undetected by the CRC check?

Suppose we run the connection at a "normal" baud rate with almost no errors. What is the likelihood of getting undetected errors now?

Thanks for any help.

Reply to
Shane williams

The Wikipedia article on the "Mathematics of CRC" is short and a good place to start. The paper it references has the analysis you are looking for. Note (as mentioned in the Wikipedia article) that the paper's convention for representing the polynomial differs from the usual method.

--
Rich Webb     Norfolk, VA
Reply to
Rich Webb

The CRC-16 will be able to detect errors in about 99.998 percent of cases. This stems from only one check value out of the 2**16 possible 16-bit values letting a corrupted message pass as good:

65535 / 65536 = 0.99998..., i.e. about 99.998 percent

See:

formatting link

for some implementation ideas.

-------------

Are you getting some of the errors in your transmission path from distortion of the RS-485 waveform caused by non-equal propagation delays through your logic on the "0"-->"1" transition versus the "1"-->"0" one? That's a common problem with certain optocouplers. ;-)

--
Michael Karas
Carousel Design Solutions
Reply to
Michael Karas


Thanks. I'm trying to figure out whether it's possible/viable to dynamically determine the fastest baud rate we can use by checking the error rate. The cable lengths and types of wire used when our systems are installed vary, and I was hoping we could automatically work out what speed a particular connection can run at. The spec for the MOC5007 optocoupler seems a bit vague, so I was trying to find a better one.

Reply to
Shane williams

Yes. But:

1) It is easier, faster and more reliable to evaluate the channel by transmitting a known pseudo-random test pattern rather than the actual data (see the sketch after this list).

2) If the baud rate is changed dynamically, how would the receivers know the baud rate of the transmitters?

3) Since the system is intended to be operable even at the lowest baud rate, why not always use the lowest rate?
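
Just as an illustration of (1), here is a minimal sketch of such a test pattern generator -- a 16-bit maximal-length LFSR (the taps and seed follow the common x**16 + x**14 + x**13 + x**11 + 1 example; nothing here is specific to the 68302):

    /* Minimal sketch: 16-bit Fibonacci LFSR with maximal-length taps.
     * Both ends seed it identically, so the receiver can regenerate the
     * expected byte stream and count bit errors directly. */
    #include <stdint.h>

    static uint16_t lfsr = 0xACE1u;   /* any agreed non-zero seed */

    uint8_t prbs_next_byte(void)
    {
        uint8_t out = 0;
        int i;

        for (i = 0; i < 8; i++) {
            uint16_t bit = ((lfsr >> 0) ^ (lfsr >> 2) ^
                            (lfsr >> 3) ^ (lfsr >> 5)) & 1u;
            lfsr = (uint16_t)((lfsr >> 1) | (bit << 15));
            out  = (uint8_t)((out << 1) | (lfsr & 1u));
        }
        return out;
    }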

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant

formatting link

Reply to
Vladimir Vassilevsky

Packet length for a 16-bit CRC should be limited to about 4 kbytes. The CRC doesn't know at which baud rate the packets are coming. Your assumption (which may well be true) is that the error pattern shifts from single-bit errors to bursts, and more errors will go undetected. But the detected error rate would go way up too. By counting retransmissions now and later at the higher baud rate, one could easily see if that has happened and switch to a 24-bit or 32-bit CRC.
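
A rough sketch of that retransmission-counting idea (the names and the per-1000 scaling are just illustrative):

    /* Illustrative only: per-rate counters so the retry ratio at a trial
     * baud rate can be compared against the baseline at the known-good rate. */
    struct link_stats {
        unsigned long frames_sent;
        unsigned long frames_retried;
    };

    /* Retries per 1000 frames sent (0 if nothing has been sent yet). */
    unsigned long retry_rate_per_1000(const struct link_stats *s)
    {
        if (s->frames_sent == 0)
            return 0;
        return (s->frames_retried * 1000UL) / s->frames_sent;
    }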

Regards, JRD

Reply to
Rafael Deliano

It isn't that simple. CRC-16 will be able to detect _all_ 1-, 2- and 3-bit errors, and some 4-bit errors. How many 'cases' of four-bit errors in a message depends on the message length and your error rate, so right there your fixed percentage of errors detected goes right out the window.
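
For reference, a minimal bit-at-a-time sketch of the CCITT CRC over x**16 + x**12 + x**5 + 1 (0x1021). The 0xFFFF starting value and the MSB-first shift direction are assumptions -- check them against what the 68302's serial controller actually does:

    #include <stdint.h>
    #include <stddef.h>

    /* Bit-at-a-time CRC over the CCITT polynomial x^16 + x^12 + x^5 + 1.
     * Init value and bit ordering are assumptions, not taken from the
     * 68302 manual. */
    uint16_t crc16_ccitt(const uint8_t *data, size_t len)
    {
        uint16_t crc = 0xFFFFu;
        size_t i;
        int b;

        for (i = 0; i < len; i++) {
            crc ^= (uint16_t)data[i] << 8;
            for (b = 0; b < 8; b++) {
                if (crc & 0x8000u)
                    crc = (uint16_t)((crc << 1) ^ 0x1021u);
                else
                    crc = (uint16_t)(crc << 1);
            }
        }
        return crc;
    }

As a rough rule of thumb, an error pattern that falls outside the guaranteed-detectable cases slips past a 16-bit CRC with probability about 2**-16 (roughly 0.0015 percent), so if 10 percent of frames are corrupted that badly, very roughly 0.00015 percent of all frames would be accepted in error -- but, as above, the exact figure depends on message length and the error model.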

Read the article cited by Rich Webb.

--
Tim Wescott
Wescott Design Services
Reply to
Tim Wescott

If you creep up on things, looking for one or two bit errors per packet and backing off, then you should do OK. I'm with Vladimir, however, that if you can you should consider just sending pseudo-random sequences. Error counting with those is easy-peasy, and if you know it's coming down the pike you don't have to worry about corrupting data that you depend on.
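
A sketch of that error counting, assuming the receiver runs the same pseudo-random generator as the transmitter (prbs_next_byte here stands in for whatever generator is actually shared):

    #include <stdint.h>
    #include <stddef.h>

    extern uint8_t prbs_next_byte(void);   /* same generator as the TX side */

    /* XOR each received byte against the locally regenerated pattern and
     * count the 1 bits in the difference. */
    unsigned count_bit_errors(const uint8_t *rx, size_t len)
    {
        unsigned errors = 0;
        size_t i;

        for (i = 0; i < len; i++) {
            uint8_t diff = rx[i] ^ prbs_next_byte();
            while (diff) {
                errors += diff & 1u;
                diff >>= 1;
            }
        }
        return errors;
    }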

--
Tim Wescott
Wescott Design Services
Reply to
Tim Wescott

I've done this -- and it is.

There's ways. Any good embedded programmer should be able to figure out half a dozen before they even put pen to napkin.

If it's like ones that I've worked with, the data over the link is a combination of high-priority "gotta haves" like operational data, and lower-priority "dang this would be nice" things like diagnostics, faster status updates, and that sort of thing.

So the advantages of going up in speed are obvious. For that matter, there may be advantages to being able to tell a maintenance guy what not-quite-fast-enough speed can be achieved, so he can make an informed choice about what faults to look for.

--
Tim Wescott
Wescott Design Services
Reply to
Tim Wescott

I've often wondered about that statement. Suppose you get a 1-bit error in the message and an error in the CRC remainder that results in a "good" message?

Is there an implicit guarantee in the algorithm that it will take more than 3 bits to "fix" the remainder?

My apologies if this is covered in the Webb article, running late today and don't have time to read it.

Reply to
Jim Stewart

And some devices degrade with age.

You might, instead, want to think of this from the "engineering" standpoint -- what are the likely/expected *sources* of your errors? I.e., how is the channel typically [1] going to be corrupted?

First, think of the medium by itself. With a given type of cable (including "crap" that someone might fabricate on-the-spot), how will your system likely behave (waveform distortions, sampling skew in the receiver, component aging, etc.)?

Then, think of the likely noise sources that might interfere with your signal. Is there some synchronous source nearby that will periodically be bouncing your grounds or coupling directly to your signals (i.e., will your cable be routed alongside something noisy)? [This assumes you have identified any sources of "noise" that your system imposes on *itself*! E.g., each time *you* command the VFD to engage the 10HP motor you might notice glitches in your data...]

Then, think of what aperiodic/transient/"random" disturbances are likely to be encountered in your environment.

In each case, think of the impact on the data stream AT ALL THE DATA RATES YOU *MIGHT* BE LIKELY TO HAVE IN USE. Are you likely to see lots of dispersed single-bit errors? How far apart (temporally) are they likely to be (far enough that two different code words can cover them)? Or will you encounter a burst of consecutive errors? (If so, how wide?)

Finally, regarding your hinted algorithm: note that the time constant you use in determining when/if to change rates has to take these observations on the likely environment into consideration. E.g., if errors are likely to creep in "slowly" (beginning with low probability, low error rate), then you can "notice" the errors, start anticipating more (?), and back off on your data rate -- hopefully quickly enough that the error rate doesn't grow beyond what your CRC can still detect effectively.
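
One way to make that "time constant" concrete, purely as an illustration (the smoothing factor, threshold and hook are placeholders): keep a smoothed frame-error-rate estimate and back off well before the rate gets anywhere near what the CRC might stop catching.

    extern void drop_to_lower_baud_rate(void);   /* hypothetical hook */

    #define BACKOFF_LIMIT 50u   /* per-mille; back off above ~5% frame errors */

    static unsigned ewma_err;   /* smoothed frame error rate, per-mille */

    /* Call once per received frame; had_error is non-zero for a bad frame.
     * Integer truncation means the estimate settles slightly above zero
     * after errors stop, which is fine for a sketch. */
    void note_frame(int had_error)
    {
        unsigned sample = had_error ? 1000u : 0u;
        int delta = (int)sample - (int)ewma_err;

        ewma_err = (unsigned)((int)ewma_err + delta / 16);   /* 1/16 smoothing */
        if (ewma_err > BACKOFF_LIMIT)
            drop_to_lower_baud_rate();
    }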

OTOH, if the error rate ever "grows" (instantaneously) faster than your CRC is able to detect the increased error rate, you run the risk of accepting bad data "as good". And, sitting "fat, happy and glorious" all the while you are doing so! (i.e., sort of like a PLL locking on a harmonic outside the intended capture range).

Can you, instead, figure out how to *ensure* a reliable channel?

--------------------
[1] and *atypically*!

Reply to
D Yuniskis


Didn't think about that.

You're exactly right about the need for speed. Background data is fine at the slower rate but when an operator is doing something on the system we want the response to be faster than the slowest rate gives us.

Switching rates seems fairly easy to me. One end tells the other what rate it's switching to, and the other acknowledges; if there's no ack, retry a couple of times. If one end switches and the other doesn't, then after one second or so of no communication they both switch back to the slowest rate.
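
Something like this, as a rough sketch of that exchange (state names, rates, retry count and the one-second fallback are all placeholders, not real code from the product):

    /* Rough sketch of the rate-change handshake described above. */
    enum link_state { RATE_STEADY, RATE_PROPOSED };

    #define MAX_RETRIES     3
    #define SILENCE_MS   1000
    #define SLOWEST_RATE 9600

    struct link {
        enum link_state state;
        int  proposed_rate;
        int  retries;
        int  ack_received;      /* set by the receive path when the ack arrives */
        long ms_since_last_rx;  /* maintained by a timer tick */
    };

    extern void set_baud(int rate);            /* hypothetical hooks */
    extern void send_rate_request(int rate);

    void link_poll(struct link *lk)
    {
        if (lk->state == RATE_PROPOSED) {
            if (lk->ack_received) {
                set_baud(lk->proposed_rate);   /* switch once the ack is in */
                lk->state = RATE_STEADY;
            } else if (++lk->retries > MAX_RETRIES) {
                lk->state = RATE_STEADY;       /* give up, stay at current rate */
            } else {
                send_rate_request(lk->proposed_rate);
            }
        }

        /* If one end switched and the other didn't, traffic stops; after about
         * a second of silence both ends drop back to the slowest rate. */
        if (lk->ms_since_last_rx > SILENCE_MS) {
            set_baud(SLOWEST_RATE);
            lk->state = RATE_STEADY;
        }
    }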

Reply to
Shane williams


Interesting points, thanks. The environment can be just about anything. I suspect we'll back off the baud rate fairly quickly once errors start occurring. I'm also thinking we could add extra protection for some of the critical messages, perhaps double transmissions.

Reply to
Shane williams

Packet length is max 270 bytes / 2700 bits or so but critical messages are more like about 50 bytes / 500 bits.

Reply to
Shane williams

One bit error in the message and one in the CRC counts as two bit errors. It's the number of bit errors in _both_ the CRC _and_ the message that you need to count.

--
Tim Wescott
Wescott Design Services
Reply to
Tim Wescott

Consider carefully what sort of "encoding" you use. E.g., "double transmissions" might add lots of overhead for very little gain in "reliability".

You can [1] also consider dynamically varying the data rate in a TDM sort of scheme -- so, in this timeslot, you run at a slow, reliable rate transferring critical messages; then, in this other timeslot, you run "flat out" pushing data that would be "nice to have" but not critical to proper operation.

Again, you really need to look hard at what you are likely to encounter "in the field" before you can come to any expectations regarding likely performance. I've seen (and have been guilty of, myself!) some pretty mangled patches to deployed systems "just to get by until the FedEx replacement parts delivery arrives". If you *might* be running on the bleeding edge in some configuration, the last thing you want is a guy in the field *thinking* things are OK when, in fact, they are not.

[e.g., you might want to add a switch that forces communications to stay in the "degraded/secure" mode if you suspect you are not catching all the communication errors in a particular installation... because the tech made a cable out of "bell wire"]

----------------------------

[1] Depends on what is on the other end of the link, of course. But, if you can autobaud dynamically, then that suggests you have some control over both ends of the link!
Reply to
D Yuniskis

Have you thought about simple heartbeat loopback data packets?

If you get to the situation where too many error bits cannot be detected, how will you know everything is alright?

Every once in a while, send a small varying pseudo-random data packet at the highest speed to various nodes, which will just echo the packet back if decoded correctly. Once received, check every bit is correct.

This way you are less likely to have false-positives about data being correct when it is not.

You can change speeds and retry on failures. If you don't see an echo back, you have more problems to resolve.

Sending larger data packets at higher speeds helps to thoroughly check data integrity, and gives more chance of exercising the data switching frequencies that may or may not be affected.
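
A minimal sketch of that echo check on the originating node (the send/receive calls and packet size are made up for illustration; prbs_next_byte is whatever pseudo-random generator both ends share):

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    extern uint8_t prbs_next_byte(void);        /* shared pseudo-random generator */
    extern int send_echo_request(int node, int baud,
                                 const uint8_t *buf, size_t len);   /* hypothetical */
    extern int recv_echo_reply(int node, uint8_t *buf, size_t len); /* hypothetical */

    /* Send a pseudo-random heartbeat packet at the trial speed and require a
     * bit-exact echo before trusting that speed. */
    int loopback_ok(int node, int trial_baud, size_t len)
    {
        uint8_t tx[64], rx[64];
        size_t i;

        if (len > sizeof tx)
            len = sizeof tx;
        for (i = 0; i < len; i++)
            tx[i] = prbs_next_byte();            /* varying payload every time */

        if (!send_echo_request(node, trial_baud, tx, len))
            return 0;
        if (!recv_echo_reply(node, rx, len))
            return 0;
        return memcmp(tx, rx, len) == 0;         /* every bit must match */
    }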

--
Paul Carpenter          | paul@pcserviceselectronics.co.uk
    PC Services
Reply to
Paul


Yep, it's the same device at both ends.

Regarding double transmissions, what do you mean by "encoding"? We could complement all the bits in the second transmission, I guess.

TDM might not be viable, and probably too much hassle, I suspect. The baud rate behavior will be user configurable, with probably a system-wide switch to allow the faster baud rate.

Thanks

Reply to
Shane williams

Thanks for the idea about loop-back data packets. That sounds useful.

The system is a ring of devices, with each connection being point-to-point with one device at each end.

Reply to
Shane williams

One approach that I've used in the past is to require an ack/nak for each message sent. If the ack includes the CRC portion of the message that's being acknowledged, then a simple match by the originator against the CRC that it sent gives pretty good confidence that the receiver got a correct message.

The returned CRC is, of course, part of the message body that the remote unit sends, which is in its turn used to build that message's own CRC.
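
On the originator's side that check can be as simple as this sketch (field and function names are illustrative):

    #include <stdint.h>

    /* Originator side, illustrative only: remember the CRC appended to the
     * frame we sent and compare it with the CRC value echoed back in the ack. */
    struct pending_frame {
        uint16_t crc_sent;      /* CRC-16 of the transmitted frame */
        int      outstanding;   /* non-zero while we wait for the ack */
    };

    int ack_matches(const struct pending_frame *p, uint16_t crc_echoed)
    {
        return p->outstanding && crc_echoed == p->crc_sent;
    }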

--
Rich Webb     Norfolk, VA
Reply to
Rich Webb
