Shared Communications Bus - RS-422 or RS-485

With a 100 Ohm line, a driver in the middle sees two parts in parallel, so effectively 50 Ohm. Typical driver impedance is about 40 Ohm, so while mismatched, the mismatch is not too bad. Also, with multiple devices on the line there will be undesirable signals even if you have termination at both ends.

In an unterminated line there will be some loss, so after each reflection the reflected signal will be weaker; in a rough approximation it is multiplied by some number a < 1 (say 0.8). After n reflections the signal is multiplied by a^n, and for large enough n it becomes negligible. Termination at a given end with a 1% resistor means that about 2% will be reflected (due to the imperfection). That 2% is likely to be negligible. If the transmitter is in the middle, there is still reflection at the end opposite the termination and at the transmitter. But the mismatch at the transmitter is not bad, so the corresponding parameter a is much smaller than in the unterminated case. So termination at one end reduces the number of problematic reflections by a factor of maybe 2-4, which means that you can increase the transfer rate by a similar factor. Of course, termination at both ends is better, but in the multidrop case speed will be lower than in a point-to-point link.
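
To put numbers on it, a quick sketch in Python (the 0.8 and ~2% figures are the rough guesses from above, nothing measured):

    # Rough model of how reflections die out: each bounce multiplies the
    # echo amplitude by a < 1.  Values are the ballpark figures above.

    def bounces_until_negligible(a, threshold=0.01):
        """Number of reflections until the echo falls below 1% of the signal."""
        n, amplitude = 0, 1.0
        while amplitude > threshold:
            amplitude *= a
            n += 1
        return n

    print(bounces_until_negligible(0.8))    # unterminated line: ~21 bounces
    print(bounces_until_negligible(0.02))   # 1% terminator (~2% reflected): ~2 bounces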

Well, multiple receivers on RS-422 have limited usefulness (AFAIK your use case is called 4-wire RS-485), so it is no wonder that FTDI does not support it. Maybe they have something more expensive that does what you want.

That is a general thing, not specific to RS-485. If an RS-485 receiver puts a 24 kOhm load on the line, that is about 0.4% of the line impedance. When the signal passes the receiver there is a corresponding power loss. There is also a second effect: the receiver creates a discontinuity, so there is a reflection. And besides the resistive part, the receiver impedance also has a reactive part, which means the discontinuity and reflection are bigger than the receiver resistance alone implies. With a lower load the receiver's effect is smaller, but there is still a fraction of a percent lost or reflected. A single loss is "very slight", but they add up and increase the effective line loss: with a single receiver reflecting/losing 0.5%, after 40 receivers about 20% of the signal is gone. This 20% effectively adds to the normal line loss.
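
Plugging in the numbers (0.5% per receiver, as above):

    # Each receiver tap loses/reflects a small fraction of the signal;
    # over many taps this compounds like line loss.
    per_tap_loss = 0.005          # fraction lost or reflected at one tap
    taps = 40
    remaining = (1 - per_tap_loss) ** taps
    print(f"{(1 - remaining) * 100:.0f}% of the signal gone")  # ~18%, roughly the 20% above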

You probably should check if you can get such a rate with short messages. I did a little experiment using a CH340 and a CP2104. That was a bi-directional TTL-level serial connection using 15 cm wires. The slave echoed each received character after mangling it a little (so I knew that it really came from the slave and not from some echo in the software stack). I had trouble running the CH340 above 460800 (that could be a limit of the program that I used). But using 1-character messages, 10000 round trips took about 7 s, with little influence from the serial speed (almost the same result at 115200 and 230400). Also, increasing the message to 5 bytes gave essentially the same number of _messages_.
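
For reference, the measurement loop was essentially the following pyserial sketch; the port name, baud rate and echo behaviour here are illustrative assumptions, not the exact test program:

    # Minimal round-trip timing test.  The slave is expected to echo
    # every byte back (possibly mangled).  Port name and baud rate are
    # placeholders.
    import time
    import serial  # pyserial

    port = serial.Serial("/dev/ttyUSB0", baudrate=460800, timeout=1)

    N = 10000
    msg = b"U"                     # 1-byte message
    start = time.monotonic()
    for _ in range(N):
        port.write(msg)
        reply = port.read(len(msg))
        assert len(reply) == len(msg), "timeout: no echo from slave"
    elapsed = time.monotonic() - start
    print(f"{N} round trips in {elapsed:.2f} s -> {N / elapsed:.0f} per second")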

The CP2104 was better; here I could go up to 2000000. Using 5-byte messages, 10000 round trips needed 2.5 s at anything up to 1500000; at 2000000 the time dropped to about 1.9 s. When I increased the message to 10 bytes it was back to about 2.5 s.

I must admit that ATM I am not sure what this means. But this 2.5 s looks significant: it means 4000 round trips per second, which is 8000 messages, which in turn is the number of USB cycles. So it seems that normally smallish messages need a USB cycle (125 us) to get through the USB bus. It seems that sometimes more than one message may go through in a cycle (giving the smaller times that I observed), but it is not clear if one can do significantly better. And the CH340 shows that it may be much worse.

FTDI is claimed to be very good, so maybe it is better, but I would not count on this without checking. Actually, I remember folks complaining that they needed more than a millisecond to get a message through USB-serial.

OTOH, your description suggests that you should be able to do what you want with much smaller message traffic, so maybe USB-serial speed is enough for you.

You may be missing the fact that most folks installing network cabling do not know about transmission lines and the reasons for matching pairs. And even for folks who understand the theory, it is easier to check that the colors are in the positions prescribed by the norm than to check pairs. So colors matter because, using colors, folks can get a correct connection without too much thinking. Why two specs? I think this is an artifact of history and the way that standards bodies work. When half of the industry is using one way and the other half is using a different but equally good way, the standards body cannot say that one half is wrong; they must allow both ways.

Reply to
antispam

I don't want to get into a big discussion on termination, but any time a driver is in the middle of the line, it will see two loads, one for each direction of the cable. The termination only impacts the behavior of the reflection. So every driver that is not at the end of the line will see the characteristic impedance divided by two. However, since the driver is not impedance matched to the line anyway, that should not matter. But each end needs to be terminated, to prevent reflections from that end.

The disruptions from the driver/receiver connections of intermediate chips will be small, since they are high impedance and minimal capacitance compared to the transmission line. These signals have multiple ns rise and fall times, so even with no terminations, it is unlikely to see effects from reflections from the ends of the line, much less the individual connections.

Multidrop is a single driver and multiple receivers. Multipoint is multiple drivers and receivers. One line will be multidrop (from the PC) and the other multipoint (to the PC). The multidrop line will be single terminated, since the driver needs no termination; its impedance is well below the line impedance. The multipoint line has a termination in the FTDI device at the receiver. Another termination will be added to the far end of the run. This is mostly insurance. I would not expect trouble if I used no terminators. I could probably use a TTL-level serial cable and no RS-422 interface chips. But that's going a bit far, I think. Using RS-422 is enough insurance to make the system work reliably.

??? Who said FTDI does not support multiple receivers? Oh, you mean their cables only. I'm not sure why you say this has limited usefulness. But whatever. That's not a thing worth mentioning really.

I'm not using FTDI anyplace other than the PC, so their device does exactly what I want. The only other differential cable is RS-485, which I don't want to use as you have to pay more attention to the timing of the driver enables.

If you are talking about the load resistance, that is trivial enough to be ignored for signal loss. The basic RS-422 devices are rated for 32 loads, and the numbers in the FTDI data sheet (54 ohms load) are with a pair of 120 ohm resistors and 32 loads.
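
The arithmetic behind the 54 ohm figure, using the 24 kOhm per-receiver number mentioned earlier in the thread (a rough sketch; the data sheet's exact load model may differ):

    # Two 120 ohm terminators in parallel with 32 receiver loads of
    # 24 kOhm each.
    terminators = [120, 120]
    receivers = [24_000] * 32
    total = 1 / sum(1 / r for r in terminators + receivers)
    print(f"{total:.1f} ohms")    # ~55.5 ohms, close to the 54 ohm load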

The "reactive" part of the receiver/driver load is capacitive. That does not change with the load value. It's mostly from the packaging is my understanding, but they don't give a value in the part data sheet. I expect there's more capacitance in the 6 foot cable than the device. I don't know how you come up with the loss number.

I ran the numbers in one of my posts (here or in another group). My messages are around 10 char with the same echo or three more characters for a read reply. Assuming 8 kHz for the polling rate, an exchange would happen at 4 kHz. A total of 25 char gives 100 kchar/s or 800 kbps on USB or 1,000 kbps on the RS-422/RS-485 interface. So I would probably want to use something a bit faster than 1 Mbps. I think 4k messages per second will be plenty fast enough. With 128 UUT in the system that's 32 commands per second per UUT.
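
Spelling out the arithmetic (assuming 10 bits per character on the UART):

    # Working through the message-rate numbers from the paragraph above.
    polling_rate = 8_000            # Hz
    exchanges = polling_rate / 2    # command + reply pairs -> 4 kHz
    chars = 25                      # total characters per exchange
    char_rate = exchanges * chars   # 100 kchar/s
    print(char_rate * 8 / 1e3)      # 800 kbps of payload over USB
    print(char_rate * 10 / 1e3)     # 1000 kbps on the UART (start + 8 data + stop)
    print(exchanges / 128)          # ~31 commands per second per UUT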

I may want to streamline the protocol a bit to incorporate the slave selection in every command. This will be more characters per message, but more efficient overall with fewer messages. The process can be to send the same command to every UUT at the same time. Mostly this is just not an issue, until the audio tests. They take some noticeable time to execute, as they collect some amount of audio data. I might add a test for spurs, since some UUT failures clip the sinewaves due to DC bias faults and harmonic distortion would be a way to check for this. I want the testing to diagnose as much as possible. This would add another slow test. So these should be done on all UUT in parallel.

I used to use CH340 cables with my test fixture, but they would stop working after some time, hours I think. I think the cable had to be unplugged to get it working again. Once I realized it was the CH340 cable/drivers, I got FTDI devices and never looked back. They are triple the price, but much, much cheaper in the long run.

It's too early to be testing, but I will get to that. I suppose I could do loopback testing with the RS-232 cable I have now.

If it doesn't run at the speed I'm thinking, it's not a big loss. There's no testing at all done with the current burn in chassis. The UUTs are tested one at a time. You can't get much slower than that. Even if it takes a minute to run a full test, that's on all 128 UUTs in parallel and it will be around 1000 times faster than what we have now! The slow part will be getting all the UUTs loaded on the test fixtures and getting the process started. Any bad UUTs will need to be pulled out and tested/debugged separately. Once they are pulled out, the testing runs until the next day when the units are labeled with a serial number and ready to ship!

The people using the cables don't see the colors. They just plug them in.

But it's not different, really. It's just colors that mean nothing to anyone actually using the cables. They just want to plug them in and make things work. The color of the insulator won't change that at all.

If there was something different about the wiring, then I'd say, I get it. But electrically they are identical.

It's also odd that the spec doesn't say how many turns per foot/meter are in the twisted pairs. But it is different in each pair to give less crosstalk.

Reply to
Rick C

There are two levels of framing here, and two types of pauses.

For UART communication, there is the "character frame" and the stop bit acts as a pause between characters. This is to give a minimum time to allow re-synchronisation of the clock timing at the receiver. It also forms, along with the start bit, a guaranteed edge for this re-synchronisation. More sophisticated serial protocols (CAN, Ethernet, etc.) do not need this because they have other methods of guaranteeing transitions and allowing the receiver to re-synchronise regularly - thus they do not need framing or idling at the character or byte level.

But you always want framing and idling between message frames at a higher level. You always have an idle period that is longer than any valid character or part of a message.

For example, in CAN communication you have "bit stuffing" any time you would otherwise have 5 equal-value bits in a row. This ensures that within the message you never have more than 5 bits without a transition, so you don't need a fixed start or stop bit per byte to keep the receiver synchronised. But at the end of the CAN frame there are at least 10 bits of recessive (1) value. Any receiver that has got out of synchronisation, due to noise, startup timing, etc., will know it cannot possibly be in the middle of a frame and will restart its receiver.
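
A sketch of the stuffing rule in Python, purely for illustration (real CAN controllers do this, plus the matching de-stuffing, in hardware):

    # CAN-style bit stuffing: after 5 identical bits in a row, insert one
    # bit of the opposite value.
    def stuff(bits):
        out = []
        run_bit, run_len = None, 0
        for b in bits:
            out.append(b)
            if b == run_bit:
                run_len += 1
            else:
                run_bit, run_len = b, 1
            if run_len == 5:                 # 5 equal bits: stuff the opposite
                out.append(1 - b)
                run_bit, run_len = 1 - b, 1
        return out

    print(stuff([1, 1, 1, 1, 1, 1, 0]))      # -> [1, 1, 1, 1, 1, 0, 1, 0]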

In UART communication, this is handled at the protocol level rather than the hardware (though some UART hardware may have "idle detect" signals when more than 11 bits of high level are seen in a row). Some UART-based protocols also use a "break" signal between frames - that is a string of at least 11 bits of low level.

If you do not have such pauses, and a receiver is out of step, it has no way to get into synchronisation again. Maybe you get lucky, but basically all it is seeing is a stream of high and low bits with no absolute indicator of position - and no way to tell what might be the start bit of a new character (rather than a 1 bit then a 0 bit within a character), never mind the start of a message.

Usually you get enough pauses naturally in the communication, with delays between reception and reply. But if you don't have them, you must add them. Otherwise your communication will be too fragile to use in practice. You /need/ idle gaps to be able to resynchronise reliably in the face of errors (and there is /always/ a risk of errors).

It will be in the right state at the right time, as long as it enters it when the stop bit is identified (half-way through the stop bit) rather than artificially waiting for the end of the bit time.

You need gaps in the character stream at a higher level, for error recovery.

Reply to
David Brown

I'm making the assumption that you are using appropriate hardware. No processor, just a USB device that has a "transmitter enable" signal on its UART.

I'm getting the impression that you have never heard of such a UART (either in a USB-to-UART device, or as a UART peripheral elsewhere), and assume software has to be involved in enabling and disabling the transmitter. Please believe me when I say such UARTs /do/ exist - and the FTDI examples I keep giving are a case in point.

Yes, and it is a /solved/ issue if you pick the right hardware.

A single transmitter, while sending a multi-character message, does not need any delay between sending the full stop bit and starting the next start bit. That is obvious. And that is why a "transmission complete" signal comes at the end of the stop bit on the transmitter side. On the receiver side, the "byte received" signal comes in the /middle/ of the stop bit, as seen by the receiver, because that could be at the /end/ of the stop bit as seen by the transmitter due to clock differences. (It could also be at the /start/ of the stop bit as seen by the transmitter.) The receiver has to prepare for the next incoming start bit as soon as it identifies the stop bit.

But you want an extra delay of at least 11 bits (a character frame plus a buffer for clock speed differences) between messages - whether they are from the same transmitter or a different transmitter - to allow resynchronisation if something has gone wrong.

I've explained in other posts why inter-message pauses are needed for reliable UART communication protocols. They don't /need/ to be as long as 35 bit times as Modbus specifies - 11 bit times is the minimum. If you don't understand this by now, then we should drop this point.

It doesn't matter whether things are software, hardware, or something in between.

Yes, with the bus you have described, and the command/response protocol you have described, there should be no problems with multiple transmitters on the bus, and you have plenty of inter-message idle periods.

However, this Usenet thread has been mixing posts from different people, and discussions of different kinds of buses and protocols - not just the solution you picked (which, as I have said before, should work fine). I think this mixing means that people are sometimes talking at cross-purposes.

There is no point in having a terminator at a driver (unless you are talking about very high speed signals with serial resistors for slope control). You will want to add a terminator at the far end of both buses. This will give you a single terminator on the PC-to-slave bus, which is fine as it is fixed direction, and two terminators on the slave-to-PC bus, which is appropriate as it has no fixed direction.

(I agree that your piece of string is of a size that should work fine without reflections being a concern.)

The speed of a signal in a copper cable is typically about 70% of the speed of light, giving a minimum round-trip time closer to 45 ns than 30 ns. Not that it makes any difference here.
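
For the record, the numbers (assuming a roughly 4.5 m cable for illustration):

    # Round-trip propagation time at ~70% of the speed of light.
    c = 3.0e8                       # m/s
    length = 4.5                    # m, assumed cable length
    print(2 * length / (0.7 * c) * 1e9)   # ~43 ns round trip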

Reply to
David Brown

There may be issues with minimum total length for Ethernet, but I have not heard of figures myself - usually maximum lengths are the issue. It's common to have racks with the wiring coming into patch panels, and then you need a short Ethernet cable to the switch. These cables should ideally be short - both from a cable management viewpoint, and because you always want to have as few impedance jumps as possible in the total connection between switch and end device and you want the bumps to be as close to the ends as possible.

30 cm patch cables are common, but I've also seen 10 cm cables. For the very short ones, they need to be made of very flexible material - standard cheap Ethernet cables aren't really flexible enough to be convenient to plug in and out unless you have a little more length.
Reply to
David Brown

<<< snip >>>

You have failed to explain how a receiver would get "out of step". The receiver syncs to every character transmitted. If all characters are received, what else do you need? How does it get "out of step"?

I have no idea what you are talking about. You have already explained above how every character is framed with a start and a stop bit. That gives a half bit time of clock misalignment to maintain sync. What would cause getting out of step?

With the protocol involved, the characters for commands are unique. So if a device sees noise on the line and does get out of sync with the framing characters, it would simply not respond when spoken to. That would inherently cause a delay, so all data after that would be received correctly.

The reason I'm using RS-422 instead of TTL is the huge improvement in noise tolerance. So if the noise rate is enough to cause any noticeable problems, there's a bad design in the cabling or some fundamental flaw in the design, and it needs to be corrected. Actually, that makes me realize I need to have a mode where the comms are exercised and bit errors counted.
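
Such an exercise mode could be as simple as the following sketch (port name, baud rate and pattern are placeholders, not the real design):

    # Comms exercise: send a known pattern, read the echo, count bit errors.
    import serial  # pyserial

    port = serial.Serial("/dev/ttyUSB0", baudrate=1_000_000, timeout=1)
    pattern = bytes(range(256)) * 4          # 1 kB of known data
    bit_errors = 0
    for _ in range(1000):
        port.write(pattern)
        echo = port.read(len(pattern))
        for sent, got in zip(pattern, echo):  # lost bytes would need separate accounting
            bit_errors += bin(sent ^ got).count("1")
    print(f"{bit_errors} bit errors in {1000 * len(pattern) * 8} bits")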

You haven't made your case. You've not explained how anything gets out of sync. What is your use case? But you finally mention "errors". Are you talking about bit errors in the comms? I've addressed that above. It is inherently handled in a command/response protocol, but since the problem of bit errors should be very, very infrequent, I'm not worried.

That depends entirely on what is being done with the information. Start bit detection should start as early as possible. Enabling the transmitter driver after the last received character should not happen until the entire character is received, to the end of the stop bit.

If the bus has fail-safe provisions, it's actually ok for the transmitter to disable the driver at the middle of the stop bit. The line will already be in the idle state and the passive fail-safe will maintain that. Less chance of bus contention if the next driver is enabled slightly before the end of the stop bit.

If you have errors. I like systems without errors. Systems without errors are better in my opinion. I'm just sayin'. But it's handled anyway.

Reply to
Rick C

How can there not be a processor? I'm using a split bus, with the PC master driving all the slave receivers and all the slave transmitters sharing the PC receive bus.

Is the PC not a processor?

The slaves have no USB.

You are not being clear. I don't know and don't care what is inside the FTDI device. That's just magic to me, or it's like something inside a black hole, unknowable. More importantly, there is no transmitter enable on the RS-422 driver in the FTDI device, because it's not tristateable.

??? Are you talking about the buffer management signals for the software?

Again, this depends entirely on what this signal is used for. For entering the state of detecting the next start bit, yes, that is the perceived middle of the stop bit.

Again, you seem to not understand the use case. The split bus never has messages back to back on the same pair. It gets confusing because so many people have tried to talk up RS-485 using a single pair. In that case, everything is totally different. Slaves need to wait until the driver has stopped driving the bus, which means an additional bit time to account for timing errors. But RS-485 is not being used. Each bus is simplex, implementing a half-duplex protocol on the two buses.

You are assuming a need for error tolerance. But a munged message is the problem, not resyncing. A protocol to detect an error and retransmit is very messy. I've tried that before and it messes up the protocol badly.

Of course it does. Since the slaves are all logic, there is no need for delays, at all. The slave driver can be enabled at any time the message has been received and the reply is ready to go.

Yes, it gets confusing.

It does if you are using it in a shared bus with multiple drivers. The line should still be organized as linear with minimal stubs and a terminator on each end. This is not my plan, so maybe I should stop discussing it.

The problem I have now is finding parts to use for this. These devices seem to be in a category that is hit hard by the shortage. My product uses the SN65C1168EPW, which is very hard to find in quantity. My customer has mentioned 18,000 units next year. I may need to get with the factory and see if they can supply me directly.

Reply to
Rick C

Sure, the PC is a processor. It sends a command to the USB device, saying "send these N bytes of data out on the UART ...".

The USB device is /not/ a processor - it is a converter between USB and UART. And it is the USB device that controls the transmit enable signal to the RS-485/RS-422 driver. There is no software on any processor handling the transmit enable signal - the driver is enabled precisely when the USB to UART device is sending data on the UART.

As I mentioned earlier, this thread is getting seriously mixed-up. The transmit enable discussion started with /RS-485/ - long before you decided to use a hybrid bus and a RS-422 cable. You were concerned about how the PC controlled the transmitter enable for the RS-485 driver, and I have been trying to explain how this works when you use a decent UART device. You only confuse yourself when you jump to discussing RS-422 here, in this bit of the conversation.

The FTDI USB to UART chip (or chips - they have several) provides a "transmitter enable" signal that is active with exactly the right timing for RS-485. This is provided automatically, in hardware - no software involved. If you connect one of these chips to an RS-485 driver, you immediately have a "perfect" RS-485 interface with automatic direction control. If you connect one of these chips to an RS-422 driver, you don't need direction control as RS-422 has two fixed-direction pairs. If you buy a pre-built cable from FTDI, it will have one of these driver chips connected appropriately.
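
From the host side, that means direction control never appears in the code at all; the application just writes and reads (the port name and command bytes below are hypothetical):

    # With a USB-UART device that drives the transmit-enable pin in
    # hardware (like FTDI's TXDEN), the host never touches direction
    # control.
    import serial  # pyserial

    port = serial.Serial("/dev/ttyUSB0", baudrate=1_000_000, timeout=0.1)
    port.write(b"R01\r")    # chip asserts TXDEN for exactly these bytes
    reply = port.read(16)   # TXDEN already released; bus free for the slave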

No.

Yes.

Yes, I understand your new use case, as well as the original discussions and the side discussions. I don't think /you/ understand that there had been a change, because you seem to imagine everything in the thread is in reference to your current solution.

I agree. I know how your solution works, and have said many times that I think it sounds quite a good idea for the task in hand.

All communications have failures. Accept that as a principle, and understand how to deal with it. It's not hard to do - it is certainly much easier than trying to imagine and eliminate any possible cause of trouble.

I'm sorry you don't understand, and I can't see how to explain it better than to say timing and delays are fundamental to the communication, not the implementation.

There has, I think, been some interesting discussion despite the confusion. I hope you have got something out of it too - and I am glad that you have a bus solution that looks like it will work well for the purpose.

Ideally, a bus should be (as you say) linear with minimal stubs and a terminator at each end - /except/ if one end is always driven. There is no point in having a terminator at a driver. Think about it in terms of impedance - the driver is either driving a line high, or it is driving it low. At any given time, one of the differential pair lines will have almost 0 ohm resistance to 0V, and the other will have nearly 0 ohm resistance to 5V. When the signal changes, these swap. Connecting a 100 ohm resistor across the lines at that point will make no difference whatsoever. The terminator is completely useless - it's just a waste of power. At the other end of the cable it's a different matter - there's a cable full of resistance, capacitance and inductance between the terminator and the near 0 ohm driver, so the terminator resistor /does/ make a difference.

In more sophisticated tristate drivers, you would turn off (disconnect) the local terminator whenever the driver is enabled. This is done in some multi-lane systems as it can significantly reduce power and make slope control and pulse shaping easier. (It's not something you'd be likely to see on RS-485 buses.)

Unfortunately, sourcing components these days is a much harder problem than designing the systems.

Reply to
David Brown

Actually, the FTDI device is a processor. I expect it actually has no UART, rather the entire thing is done in software. I recall there being code to download for various purposes, such as JTAG, but I forget the details. I'm pretty sure the TxEn is controlled by FTDI software.

Ok, I'll stop talking about what I am doing.

Ok, thanks.

Ok, then the conversation has reached an end.

That's not a premise I have to deal with. I will also die. I'm not factoring that into the project either.

I don't need to eliminate "any possible cause of trouble". I only have to reach an effective level of reliability. As I've said, error handling protocols are complex and subject to failure. It's much more likely I will have more trouble with the error handling protocol than I will with bit errors on the bus. So I choose the most reliable solution, no error handling. So without an error handling protocol in the software, I don't need to do anything further to deal with errors.

I understand perfectly. I only need to meet the requirements of this project. Not the requirements of some ultra high reliability project. With the RS-422 interface, I expect I could run the entire system continuously, and would not find an error in my lifetime. That's good enough for me.

Indeed.

Reply to
Rick C

No, I think you are mixing things up. FTDI make a fair number of devices, including some that /are/ processors or contain processors. (That would be their display controller devices and their USB host controllers, amongst others.)

The code for using chips like the FT232H as a JTAG interface runs on the host PC, not the FTDI chip - it is a DLL or .so file (or OpenOCD, or other software). The chip has /hardware/ support for a few different serial interfaces - SPI, I²C, JTAG and UART.

We don't need to stop talking about it - we (everyone) just need to be a bit clearer about the context. It's been fun to talk about, and it's great that you have a solution you are happy with, but it's a shame if topic mixup leads to frustration.

I agree that error handling procedures can be difficult - and very often, they are poorly tested and have their own bugs (hardware or software). Over-engineering can reduce overall reliability, rather than increase it. (A few years back, we had a project that had to be updated to SIL safety certification requirements. Most of the changes reduced the overall safety and reliability in order to fulfil the documentation and certification requirements.)

For serial protocols, ensuring a brief pause between telegrams is extremely simple and makes recovery possible after many kinds of errors. That's why it is found in virtually every serial protocol in wide use. And like it or not, you have it already in your hybrid bus solution.
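
On the receiving side, the usual trick is to let a read timeout mark the telegram boundary - a sketch, with the timeout value chosen arbitrarily for illustration:

    # Using the inter-telegram pause as a frame delimiter: a read timeout
    # shorter than the mandated gap, but longer than any intra-message
    # character interval, marks the end of one telegram.
    import serial  # pyserial

    port = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=0.005)

    def read_telegram(port, max_len=256):
        """Collect bytes until an inter-message gap (read timeout) occurs."""
        telegram = bytearray()
        while len(telegram) < max_len:
            chunk = port.read(1)
            if not chunk:          # timeout: the pause marks the boundary
                break
            telegram.extend(chunk)
        return bytes(telegram)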

Reply to
David Brown

They need code for the PC to run, but there is no reason to think they don't use a processor in the USB dongle.

There's no point to inter-message delays. If there is an error that causes a loss of framing, the devices will see that and ignore the message. As I've said, the real issue is that the message will not be responded to, and the software will fail. At that point the user will exit the software on the PC and start over. That gives a nice long delay for resyncing.

Reply to
Rick C

There is no reason to think that they /do/ have a processor there. I should imagine you would have no problem making the programmable logic needed for controlling a UART/SPI/I²C/JTAG/GPIO port, and USB slave devices are rarely made in software (even on the XMOS they prefer hardware blocks for USB). Why would anyone use a /processor/ for some simple digital hardware? I am not privy to the details of the FTDI design beyond their published documents, but it seems pretty clear to me that there is no processor in sight.

That is one way to handle possible errors.

Reply to
David Brown

If the only way to handle a missed message is to abort the whole software system, that seems to be a pretty bad system.

Note, if the master sends out a message, and waits for a response, with a retry if the message is not replied to, that naturally puts a pause in the communication bus for inter-message synchronization.
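
Something like this, as a sketch (the timeout and retry count are arbitrary here):

    # Master-side poll with timeout and retry; a missed reply costs one
    # timeout period, which itself provides the resynchronising pause.
    def poll(port, command, reply_len, retries=3):
        for _ in range(retries):
            port.write(command)
            reply = port.read(reply_len)   # blocks up to the port timeout
            if len(reply) == reply_len:
                return reply
        raise IOError("slave did not respond after retries")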

Based on your description, I can't imagine the master starting a message for another slave until after the first one answers, or you will interfere with the arbitration control of the reply bus.

In a dedicated link, after the link is established, it might be possible that one side just starts streaming data continuously to the other side, but most protocols will have some sort of at least occasional handshaking back, so a loss of sync can stop the flow to re-establish the synchronization. And such handshaking is needed if you need to handle noise in packets.

Reply to
Richard Damon

Once you acknowledge that noise and errors are even possible, some kind of checksums or FEC seem appropriate in addition to a retry protocol.

Reply to
Paul Rubin

Yes, the messages should have some form of checksum in them to identify bad packets. That should be part of the message definition.
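
Even a one-byte sum catches most corruption in short messages - a minimal sketch, with the message layout assumed purely for illustration:

    # Minimal one-byte checksum for short command/reply messages.
    def add_checksum(payload: bytes) -> bytes:
        return payload + bytes([sum(payload) & 0xFF])

    def check(message: bytes) -> bool:
        *payload, cksum = message
        return sum(payload) & 0xFF == cksum

    msg = add_checksum(b"R01")   # hypothetical command
    assert check(msg)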

Reply to
Richard Damon

On 2022-11-05 Rick C wrote in comp.arch.embedded: ...

Yes, the only difference is the colors. There is some historical background behind this.

In the early days there was sometimes a need for crossover cables: 568A on one end, 568B on the other end. IIRC, you needed one to connect two PCs together directly, without a hub. Hubs also had a special uplink port.

These days all Ethernet PHYs are auto-detect and there is no need for special ports or cables anymore. So pick a standard you like or just use what is available. Most cables I have in my drawer here seem to be 568B. Just standard cables; I did not pay attention to the A/B when I bought them. ;-)
Reply to
Stef

I don't agree. These interfaces are not so simple when you consider the level of flexibility needed to implement many different interfaces in one part. XMOS is nothing like this. A small processor running at high speed would easily implement any of these interfaces, and the small processor can actually be a very small amount of chip area. Typical MCUs are dominated by the memory blocks; with a small memory, an MCU could easily be smaller than dedicated logic. Even many of the I/O blocks, like UARTs, can be larger than an 8-bit CPU. A CPU takes advantage of the massive multiplexer in the memory, which is implemented in ways that use very little area. FPGAs use the multiplexers in tiny LUTs, while an MCU uses the multiplexer in a single, much larger LUT: the program store.

Reply to
Rick C

On 2022-11-05 Rick C wrote in comp.arch.embedded:

I have seen this happen in long messages (a few kB) with no pauses between characters and transmitter and receiver set to 8,N,1. It seemed that the receiver needed the complete stop bit and then immediately saw the low of the next start bit - detecting the edge when it was ready to see it, not when it actually happened. When the receiver is slightly slower than the transmitter, this causes the detection of the start bit (and therefore the whole character) to shift by a tiny bit. This added up over the character stream until it eventually failed.
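
A rough model of that drift (the 0.2% mismatch is an assumed figure, just for illustration):

    # How a small baud-rate mismatch accumulates if the receiver re-arms
    # only after a full stop bit instead of resynchronising on each
    # start edge.
    mismatch = 0.002              # receiver 0.2% slower than transmitter
    bits_per_char = 10            # start + 8 data + stop
    drift = 0.0                   # in transmitter bit times
    chars = 0
    while drift < 0.5:            # fails once sampling is half a bit off
        drift += bits_per_char * mismatch
        chars += 1
    print(f"framing breaks after ~{chars} characters")   # ~25 characters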

Lowering the baud rate did not solve the issue, but inserting pauses after a number of characters did. What also solved it was setting the transmitter to 2 stop bits and the receiver to 1 stop bit. This was a one-way stream; that fix may not be possible on a bi-directional stream.

I would expect a sensible UART implementation to allow for a slightly shorter stop bit to compensate for issues like this. But apparently this UART did not do so in the 1 stop bit setting. I have not tested if setting both ends to 2 stop bits also solved the problem.

Reply to
Stef

You would certainly think that if your error rate was more than once a hundred years. I expect to be long dead before an RS-422 bus only 10 feet long burps a bit error.

The pause is already there by virtue of the protocol. Commands and replies are on different buses.

Exactly! Now you are starting to catch on.

Except that there is no data to stream. Maybe you haven't been around for the full conversation. The protocol is command/reply for reading and writing registers and selecting which unit the registers are being accessed. The "stream" is an 8 bit value.

??? Every command has a reply. How is that not a handshake???

Reply to
Rick C

Why? Does the processor checksum every value calculated and stored in memory? Not on my computer. This is not warranted because the data failure rate is very low. Same with an RS-422 bus in an electrically quiet environment. I could probably get away with TTL level signals, but I'd like to have the ESD protection these RS-422 chips give. That additional noise immunity means there is an extremely small chance of bit errors. If we have problems, the error handling can be added.

Reply to
Rick C
