You asked for suggestions and I gave some.
That appears to be an RJ45, like Ethernet cables use. The little locking tabs break off all the time, and the cable also gets kinked after repeated flexing where it goes into the connector. Strain relief helps, but it happens anyway. You might buy some ready-made Ethernet cables rather than putting those connectors on yourself. At least with cheap crimpers, the ready-made cables are often more reliable than DIY ones.
They do make those magnetic connectors with varying numbers of pins.
Here is a CAN cable, no idea if that is of interest, but it uses the OBD connector found in cars. XLR or DIN style plugs/sockets might also be something to consider.
There is also this style, popular with the mechanical keyboard crowd:
I'm using RS-422 because I don't need to learn how to use a "chip". It's the same serial protocol I'm using now, but instead of RS-232 voltage levels, it's RS-422 differential. The "change" is really the fact that it's not just one slave. So the bus will be split into a master send bus and a slave reply bus. The master doesn't need to manage the tri-state output because it's the only talker. The slaves only talk when spoken to and the UART is in an FPGA, (no CPU), so it can manage the tri-state control to the driver chip very easily.
CAN bus might be the greatest thing since sliced bread, but I am going to be slammed with work and I don't want to do anything I don't absolutely have to.
A lot of people don't understand that this is nearly the same as what I'm using now and will only require a very minor modification to the message protocol, to allow the slaves to be selected/addressed. It would be hard to make it any simpler and this would all still have to be done even if adding the CAN bus. The slaves still need to be selected/addressed.
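The "very minor modification" above is just adding slave addressing to the message protocol. As a rough sketch only (the frame layout, address byte, length field, and XOR checksum here are all hypothetical, not the actual protocol):

```python
# Hypothetical addressed frame for a single-master, multi-slave serial bus:
# [address][length][payload...][checksum], checksum = XOR of all prior bytes.

def encode_frame(address: int, payload: bytes) -> bytes:
    """Build a frame addressed to one slave."""
    frame = bytes([address, len(payload)]) + payload
    checksum = 0
    for b in frame:
        checksum ^= b
    return frame + bytes([checksum])

def decode_frame(frame: bytes):
    """Return (address, payload) if the frame is valid, else None."""
    if len(frame) < 3:
        return None
    checksum = 0
    for b in frame[:-1]:
        checksum ^= b
    if checksum != frame[-1] or frame[1] != len(frame) - 3:
        return None
    return frame[0], frame[2:-1]

# A slave simply ignores frames whose address byte is not its own,
# and only the addressed slave enables its driver to reply.
```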
Thanks for the suggestions. The part I'm worried about now are the more mechanical bits. I am thinking of using the Eurocard size so I can use the rack hardware, but I know very little about the bits and bobs. There will be no backplane, just card guides and the front panels on the cards to hold them in place. I might put the cabling on the front panel to give it easy access, but then it needs machining of the front panel. I could simplify that by cutting out one large hole to expose all the LEDs and connectors. I want to make the design work as simple as possible and mechanical drawings are not my forte.
Ok, thank you for your suggestions.
The cable will be three inches long. I can make more.
If you are talking about the big, black connector, it is bigger than the board. This will be a Eurocard rack with 4HP or 0.8 inch spacing. RJ-45 barely fits.
DIN? You mean those things that are used on Eurocards with some 96 pins? What would mate with it? What's actually wrong with RJ-45?
Way too much work. This is a jumper connector to go between boards that are 0.8 inches on centers. It simply doesn't require that much effort. They will be plugged and unplugged, on average, 1.1 times a day. I think RJ-11 will hack it. I'd rather have something that breaks and is very easy to replace, than something that breaks less often, but is much harder to repair or replace.
No, I meant the circular connectors like you see on old PC keyboards, similar to the aviation style one that I linked. 1) The plugs break and the cables get munged up, but as you say, you can replace them when they do. 2) The sockets also break, maybe not as often, but replacing them might be harder, depending.
If both of those are ok with you, then maybe it's a good choice.
I suppose the plugs can break, but I've never seen a broken RJ-45, other than the catch breaking. That's nearly always a result of pulling a cable through a tangle rather than freeing it gently. On the other hand, I have seen broken DIN mouse connectors. The way they protrude, they get bumped and one end or the other is damaged. I think the metal shell on the chassis-mounted connector is optional or something; that can make it more fragile. Anything can break, but I need to be concerned with significant problems. I think RJ-45 will be very adequate.
I've never seen an RJ-45 plug broken. They are used widely in the telecom industry as RS-232 connectors for consoles.
Yeah, I'm fine with a cable I can make to any length I want in 5 minutes, with most of that spent finding where I put the parts and tool. Oh, and costs less than $1.
If there were an easier way to make a DIN connector, I'd be ok with that. Anything crimp or solder pin is going to be a PITA. Heck, I'd be ok with a ribbon cable actually, but it would be larger than an RJ-45 since the smallest I'm likely to find is 10 positions. That's a half inch, plus the extra width of the female part. It's easy to bend those pins, and it's not easy to extract the things without the extraction levers, which make it even larger. As long as I put it on the back of the card, that's not a big deal, but I'm thinking of putting the connectors on the front to make access easier when pulling a card out of the cage. If power is in the front, it's totally easy. Of course, using an actual backplane is even easier, but that's a lot more work to get all the specs to make that happen. I wish I had one of the card cages in front of me to look at and see how they are constructed.
I have a similar card with a front panel. That is pretty straightforward with 8.5 inches clear space on the front panel. Not sure where to get these particular parts though. I wish the cards were a bit larger in each direction. I can get 8 UUTs on one 6U, size B card, but it will be tight with the other stuff (FPGA, buffers, power supplies). We've always had trouble ejecting the daughter cards as the two friction fit, 20 pin connectors are tough to get apart. It's easy to damage the connectors removing them. I have some ideas, but nothing that's rock solid. It will be important for the test fixture card to be well supported when removing the daughter cards. Having some extra room around the UUTs would help. The next standard size up from size B (233 x 160 mm) is 367 x 220 mm. That's a large card! Turns out it's not so much money if made at JLCPCB. 20 of them for $352! That's pretty amazing!
There's no real purpose, but it's important to know exactly when the RX interrupt is fired from the UART.
Usually the next transmitter starts transmitting after receiving the last byte of the previous transmitter (for example, the slave starts replying to the master after receiving the complete message from it).
Now consider the issue of a transmitter that takes a little time to turn its transceiver around from TX to RX. Every transmitter on the bus should take this delay into account and avoid starting its transmission too soon.
So I usually implement a short delay before starting a new message transmission. If the maximum expected time to turn the direction around from TX to RX is 10us, I might simply use a 10us delay, but under your assumption that would be wrong.
If the RX interrupt fires at the middle of the stop bit, I should delay the new transmission by 10us plus half a bit time. At 9600 baud, half a bit time is 52us, which is much more than the 10us.
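The arithmetic above can be checked quickly (the 10 µs transceiver turnaround is the example figure from the post, not a measured value):

```python
# If the RX interrupt fires mid-stop-bit, delay the next transmission by
# the transceiver turnaround time plus half a bit time.

def tx_delay_us(baud: float, turnaround_us: float) -> float:
    half_bit_us = 0.5 * 1e6 / baud
    return turnaround_us + half_bit_us

# At 9600 baud, half a bit time alone is ~52 us, dwarfing a 10 us turnaround.
print(round(tx_delay_us(9600, 10), 1))  # 62.1
```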
I know the next transmitter has to do some processing of the received message and prepare and buffer the new message to transmit, so some delay is automatic. But in many cases I have small 8-bit PICs and a full-featured Linux box on the same bus, and the Linux box can be very fast to start the new transmission.
But this is the goal of *bias* resistors, not termination resistors.
Of course, but termination resistors are usually small (around 100 ohms) because they should match the impedance of the cable. If you want only to introduce "some current" on the bus, you could use resistors in the order of 1k, but this isn't strictly a *termination* resistor.
The ST3485 datasheet says the input load of the receiver is around 24k. When you connect 32 slaves, the equivalent resistance would be 750 ohms, which should be enough to have "some current" on the bus. If you add *termination* resistors in the order of 100R on both sides, you drastically reduce the differential voltage between A and B at idle.
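The loading effect can be put into numbers. The 24k receiver input load is the ST3485 datasheet figure quoted above; the 560R bias resistors and 5V supply below are a common choice, assumed here for illustration:

```python
def parallel(*rs):
    """Equivalent resistance of resistors in parallel."""
    return 1.0 / sum(1.0 / r for r in rs)

R_RX = 24e3      # ST3485 receiver input load (datasheet figure)
N_SLAVES = 32
R_TERM = 120.0   # one termination resistor at each end of the bus
R_BIAS = 560.0   # pull-up and pull-down bias resistors (assumed value)
VCC = 5.0

rx_load = parallel(*([R_RX] * N_SLAVES))
print(rx_load)   # 750.0 ohms for 32 receivers

# Idle differential voltage = VCC across the divider formed by the two
# bias resistors and the bus load (receivers only, then with terminations).
for load in (rx_load, parallel(rx_load, R_TERM, R_TERM)):
    print(round(VCC * load / (load + 2 * R_BIAS), 3))
# 2.005 V without terminations, 0.236 V with them - a drastic reduction,
# close to the +-200 mV receiver threshold.
```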
I think it is extremely rare that this is important. I can't think of a single occasion when I have thought it remotely relevant where in the stop bit the interrupt comes.
No. Usually the next transmitter starts after receiving the last byte, and /then a pause/. There will always be some handling time in software, and may also include an explicit pause. Almost always you will want to do at least a minimum of checking of the incoming data before deciding on the next telegram to be sent out. But if you have very fast handling in relation to the baud rate, you will want an explicit pause too - protocols regularly specify a minimum pause (such as 3.5 character times for Modbus RTU), and you definitely want it to be at least one full character time to ensure no listener gets hopelessly out of sync.
They should, yes. The turnaround delay should be negligible in this day and age - if not, your software design is screwed or you have picked the wrong hardware. (Of course, you don't always get the choice of hardware you want, and programmers are often left to find ways around hardware design flaws.)
Implementing an explicit delay (or being confident that your telegram handling code takes long enough) is a good idea.
I made no such assumptions about timings. The figures I gave were for using a USB 2 based interface on a PC, where the USB polling timer is at 8 kHz, or 125 µs. That is half a bit time at 4 kbaud. (I had doubled the frequency instead of halving it and said the baud had to be above 16 kbaud - that shows it's good to do your own calculations and not trust others blindly!) At 1 MBaud (the suggested rate), the absolute fastest the PC could turn around the bus would be 12 character times - half a stop bit is irrelevant.
If you have a 9600 baud RS-485 receiver and you have a delay of 10 µs between reception of the last bit and the start of transmission of the next message, your code is wrong - by nearly two orders of magnitude. It is that simple.
If we take Modbus RTU as an example, you should be waiting 3.5 * 10 /9600 seconds at a minimum - 3.65 /milli/seconds. If you are concerned about exactly where the receive interrupt comes in the last stop bit, add another half bit time and you get 3.7 ms. The half bit time is negligible.
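The Modbus figure quoted above checks out (using 10 bits per character as in the post; strictly, Modbus RTU characters are 11 bits):

```python
# Modbus RTU requires a silent interval of at least 3.5 character times
# between frames. The post assumes 10 bits per character.

def silent_interval_ms(baud: float, bits_per_char: int = 10) -> float:
    return 3.5 * bits_per_char / baud * 1000

print(round(silent_interval_ms(9600), 2))   # 3.65 ms, as stated
# Adding half a bit time for the mid-stop-bit interrupt barely changes it:
print(round(silent_interval_ms(9600) + 0.5 / 9600 * 1000, 2))   # 3.7 ms
```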
So put in a delay. An /appropriate/ delay.
Yes - but see below. Bias resistors are part of the termination - it just means that you have terminating resistors to 5V and 0V as well as across the balanced pair.
If you have a cable that is long enough (or speeds fast enough) that it needs to be treated as a transmission line with controlled impedance, then you do need impedance matched terminators to avoid reflections causing trouble. Usually you don't.
A "terminating resistor" is just a "resistor at the terminator" - it does not imply impedance matching, or any other specific purpose. You pick a value (and network) appropriate for the task in hand - maybe you want impedance matching, maybe you'd rather have larger values to reduce power consumption.
If you are pushing the limits of a bus, in terms of load, distance, speed, cable characteristics, etc., then you need to do such calculations carefully and be precise in your specification of components, cables, topology, connectors, etc. Many buses in practice will work fine using whatever resistor you pull out of your box of random parts. For a testbench, you are going to go for something between these extremes.
On some testbenches that we have made that are used for cards with RJ45 sockets on the card, we made posts with an RJ45 on the end, with a small spring at the base. The RJ45 connector had its tag removed, of course. The DUT slid in on rails. For high-usage testbenches, you don't want any flexible cables attached to the DUT - you want bed of nails and spring-loaded connectors as much as possible.
A cable you can make in 5 minutes doesn't cost $1, unless you earn less than a hamburger flipper and the parts are free. The cost of a poor connection when making the cable could be huge in downtime of the testbench. It should not be hard to get a bag of pre-made short Ethernet cables for a couple of dollars per cable - it's probably cheaper to buy an effectively unlimited supply than to buy a good quality crimping tool.
In theory, if all the nodes on the bus were able to change direction in hardware (exactly at the end of the stop bit), you would not be forced to introduce any delay before transmission.
Many times I'm the author of a custom protocol for some nodes on a shared bus, so I'm not forced to follow any specification. When I didn't introduce any delay before transmission, I sometimes faced this issue. In my experience, the bus is often heterogeneous enough to pair a fast-replying slave with a slow master.
Negligible doesn't mean anything. If there's a poor 8-bit PIC (the previous transmitter) clocked at 8MHz that changes direction in its TXC interrupt while other interrupts are active, and a Cortex-M4 clocked at 200MHz (the next transmitter), you will encounter this issue.
This is more evident if, as you say, the Cortex-M4 can start processing the message from the PIC at the midpoint of the last stop bit, while the PIC disables its driver at the *end* of the stop bit, plus an additional delay caused by interrupt handling.
In these cases the half bit time is not negligible and must be added to the transmission delay.
Not always. If you have only MCUs that are able to control direction in hardware, you don't need any delay before transmission.
Oh yes, if you have already implemented a pause of 3.5 char times, it is ok.
Ok, I thought you were suggesting to add impedance-matching (low-value) resistors as terminators in any case.
You are making an assumption about the implementation. There is a processor in the USB cable that implements the UART. The driver enable control is most likely implemented there. It would be pointless, and very subject to failure, to require the main CPU to handle this timing. There's no reason to expect the driver disable to take more than a fraction of a bit time, so the "UART" needs a timing signal to indicate when the stop bit has completed.
The timing issue is not about loading another character into the transmit FIFO. It's about controlling the driver enable.
Your numbers are only relevant to Modbus. The only requirement is that no two drivers are on the bus at the same time, which can be met with zero delay between the end of the previous stop bit and the beginning of the next start bit. This is why the timing indication from the UART needs to be the end of the stop bit, not the middle.
You are thinking software, like most people do. The slaves will be in logic, so the UART will have timing information relevant to the end of bits. I don't care how the master does it. The FTDI cable is alleged to "just work". Nonetheless, I will be providing for separate send and receive buses (or call it master/slave buses). Only one slave will be addressed at a time, so no collisions there, and the master can't collide with itself.
How long is a piece of string? By keeping the interconnecting cables short, 4" or so, and a 5 foot cable from the PC, I don't expect problems with reflections. But it is prudent to allow for them anyway. The FTDI RS-422 cable seems to have a terminator on the receiver, but not the driver and no provision to add a terminator to the driver.
Oddly enough, the RS-485 cable has a terminator that can be connected by the user, but that would be running through the cable separately from the transceiver signals, so essentially stubbed! I guess at 1 Mbps, 5 feet is less than the rise time, so not an issue. Since the interconnections between cards will be about five feet as well, it's unlikely to be an issue. The entire network will look like a lumped load, with the propagation time on the order of the rise/fall time. Even adding in a second chassis, makes the round trip twice the typical rise/fall time and unlikely to create any issues.
They sell cables that have 5 m of cable, with a round trip of 30 ns or so. I think that would still not be significant in this application. The driver rise/fall times are 15 ns typ, 25 ns max.
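The lumped-load argument above can be sanity-checked with rough numbers (the ~5 ns/m propagation figure is a common rule of thumb, not from the posts; the 10%-of-bit-time jitter criterion is the one mentioned later in the thread):

```python
PROP_NS_PER_M = 5.0   # ~5 ns/m in typical twisted pair (rule of thumb)

def round_trip_ns(length_m: float) -> float:
    """Round-trip propagation delay along a cable of the given length."""
    return 2 * length_m * PROP_NS_PER_M

# One common sanity check: reflection-induced jitter only starts to matter
# once it approaches ~10% of a bit time.
bit_ns = 1e9 / 1e6            # 1 Mbps -> 1000 ns per bit
for length_m in (1.5, 5.0):   # ~5 ft run, 5 m cable
    rt = round_trip_ns(length_m)
    print(length_m, rt, rt / bit_ns < 0.10)
# Both cases come out well under 10% of a bit time at 1 Mbps.
```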
You are not only right, but absolutely correct. Cablestogo has 6 inch cables for $2.99 each. I'd like to keep them a bit shorter, but that's probably not an issue. Under quantity, they even list "unlimited supply".
Communication is about /reliably/ transferring data between devices. Asynchronous serial communication is about doing that despite slight differences in clock rates, differences in synchronisation, differences in startup times, etc. If you don't have idle pauses, you have almost zero chance of staying in sync across the nodes - and no chance at all of recovery when that happens. /Every/ successful serial protocol has pauses between frames - long enough pauses that the idle time could not possibly be part of a normal full speed frame. That does not just apply to UART protocols, or even just to asynchronous protocols. The pause does not have to be as long as 3.5 characters, but you need a pause - just as you need other error recovery handling.
Negligible means of no significance in comparison to the delays you have anyway - either intentional delays in order to separate telegrams and have a reliable communication, or unavoidable delays due to software processing.
No, you won't - not unless you are doing something silly in your timing, such as failing to use appropriate pauses or thinking that 10 µs turnarounds are a good idea at 9600 baud. And I did specify picking sensible hardware - 8-bit PICs were a terrible choice 20 years ago for anything involving high speed, and they have not improved. (Again - sometimes you don't have control of the hardware, and sometimes there can be other overriding reasons for picking something. But if your hardware is limited, you have to take that into account.)
Sorry, but I cannot see any situation where that would happen in a well-designed communication system.
Oh, and it is actually essential that the receiver considers the character finished half-way through the stop bit, and not at the end. UART communication is intended to work despite small differences in baud rate - up to nearly 5% total error. By the time the receiver is half way through the received stop bit, and has identified it as valid, the sender could have finished the stop bit, as its clock may be almost 5% faster (50% of a bit time over the full 10 bits). The receiver has to be in the "watch for falling edge of start bit" state at this point, ready for the transmitter to start its next frame.
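The ~5% figure above follows directly from the frame length:

```python
# Over a 10-bit frame (start + 8 data + stop), sampling the stop bit at its
# midpoint tolerates half a bit time of accumulated drift between the
# transmitter and receiver clocks.

FRAME_BITS = 10

max_total_error = 0.5 / FRAME_BITS   # total baud mismatch budget
per_side = max_total_error / 2       # per device, if the error is split evenly
print(max_total_error, per_side)     # 0.05 0.025
```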
The "idle" pauses you talk about are accommodated by the start and stop bits in the async protocol. Every character is sent with a start bit which starts the timing. The stop bit is the "fluff" time for the next character to align to the next start bit. There is no need for the bus to be idle in the sense of no data being sent. If an RS-485 or RS-422 bus is biased for undriven times, there is no need for the driver to be on through the full stop bit. Once the stop bit has driven high, the driver can be disabled, such as in the middle of the bit. Then there is half a bit time for timing skew, which allows 5% between any two devices on the bus.
The software on the PC is not managing the bus drivers. So software delays are not relevant to bus control timing.
Yes, why would it not be? This is why there's no need for additional delays or "gaps" in the protocol for an async interface.
It is pointless to add a terminator at the driver; there will be a mismatch anyway and the resistor would just waste transmit power. Mismatch at the driver does not cause trouble as long as the ends are properly terminated. And when the driver is at the near end and there are no other drivers, it is enough to put termination only at the far end. So the FTDI cable seems to be doing exactly what is needed.
Closer to 50 ns, due to the lower propagation speed in cable.
Termination is also there to kill _multiple_ reflections. In a low loss line you can have a bunch of reflections creating jitter. When jitter is more than 10% of a bit time, serial communication tends to have a significant number of errors. At 9600 or even 100000 bits/s with a short line, the bit time is long enough that jitter due to reflections in an unterminated line does not matter. Also, multidrop RS-485 is far from low loss; each extra drop weakens the signal, so reflections die faster than in a quality point-to-point line.
I worked on a highway traffic sign project some years back that used multidrop RS-423. The sign was driven from a roadside controller, with a supervisory controller between that and the LED column controllers. The supervisory controller was always master, with the column controllers as slaves. The master always initiated comms, with column controllers talking only when addressed. A simple software state machine and line turnaround for the selected column to talk. Used differential line transceivers at the TX and RX ends, which could be tristated at the output. Interesting project, and with a 15 yr design life, probably hundreds still working now. RS-423 multidrop works well, though I don't remember what the max supported speeds are. Much cheaper than a network, and you can use standard Cat5 etc. network cables and PCB sockets to tie it all together...
Yes, that's true for a single driver and multiple receivers. The point is that with multiple drivers, a terminator is needed at both ends of the cable. You have two ends to terminate, because drivers can be in the middle. You could not use FTDI RS-422 cables in the arrangement I am implementing. Every receiver would add a 120 ohm load to the line. Good thing I only need one!
How do RS-485 drops "weaken" the signal? The load of an RS-485 device is very slight. The same result will happen with multiple receivers on RS-422.
I expect to be running at least 1 Mbps, possibly as high as 3 Mbps.
One thing I'm a bit confused about is the wiring of the EIA/TIA 568B and 568A cables. Both standards are used, but as far as I can tell, the only difference is the colors! The green and orange twisted pairs are swapped on both ends, making the cables electrically identical, other than the colors used for a given pair. The only other difference is that the pairs have different twist pitches, to help reduce crosstalk. But the twist pitches are not specified in the spec, so I don't see how this could matter.
Why would the color be an issue, to the point of creating two different specs???
Obviously I'm missing something. I will need to check a cable before I design the boards, lol.
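A quick check of the point above; the pin assignments below are the standard T568A/T568B colour maps, and the pair-grouping comparison confirms the two are electrically identical:

```python
# T568A and T568B pin -> wire colour assignments (pins 1..8).
T568A = ["green/white", "green", "orange/white", "blue",
         "blue/white", "orange", "brown/white", "brown"]
T568B = ["orange/white", "orange", "green/white", "blue",
         "blue/white", "green", "brown/white", "brown"]

def pairs(pinout):
    """Map each colour family (one twisted pair) to the pins it occupies."""
    fams = {}
    for pin, colour in enumerate(pinout, start=1):
        fams.setdefault(colour.split("/")[0], set()).add(pin)
    return fams

# The green pair in A sits on the same pins as the orange pair in B...
print(pairs(T568A)["green"] == pairs(T568B)["orange"])   # True (pins 1, 2)

# ...and the pin groupings are identical overall, so an A-A cable is
# electrically the same as a B-B cable; only the colours differ.
print(sorted(map(sorted, pairs(T568A).values())) ==
      sorted(map(sorted, pairs(T568B).values())))        # True
```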
RS-485 will require you to make a firm decision on protocol timing. Either you require that ALL units can get off the line fast after a message, so you don't need to add much wait time, or your allow any unit to be slow to get off, so everyone has to wait a while before talking.
Perhaps if you have a single master that is fast, the replying machines can be slow, as long as the master knows that.
Multi-drop RS-422, with one pair going out from the master controller to everyone, and a shared pair to answer on largely gets around this problem, as the replying units just need to be fast enough getting off the line so they are off before the controller sends enough of a message that someone else might decide to start to reply. This sounds like what you are talking about, and does work.
You can even do "Multi-Master" with this topology, if you give the masters two drive chips, one to drive the master bus when they are the master, and one to drive the response bus when they are selected as a slave, and some protocol to pass mastering around and some recovery method to handle the case where the master role gets lost.
One other thing to remember is that 422/485 really is designed to be a single linear bus, without significant branches, with end of bus termination. You can "cheat" on this if your speed is on the slow side.