It's called a checksum and can be done after the entire message is received and before a reply needs to be sent.
As the OP has pointed out this is of no value whatsoever. No one is trying to minimize dead time or maximize throughput. The question is about turning off the driver at the end of a transmission.
This is not only not a solution to the problem, it is entirely pointless. The protocol is that the master sends a message to a peripheral device. When the peripheral device receives a message it responds. When the master receives a response it can send the next message.
Or the master can wait to send another message until the reply from the peripheral is complete.
Actually this is poorly thought out. The problem is not with the protocol. The problem is a basic hardware limitation which makes it hard to control the driver enable at the time it is needed. Either you have no understanding of the problem or you just couldn't be bothered to actually respond about the problem at hand.
You haven't indicated how big the payload is, or the effort required to create the reply.
In your scheme, you MUST deliver a reply as soon as the message from the master is complete (you are free to define "soon").
Verifying the address (as yours AND "intact") lets you know whether or not you have to deal with the payload AT ALL.
Probably. But, I've not gone looking for it. You might look at some of the token passing algorithms to see (Arcnet?) what they have to say by way of example.
The key advantage is that it lets you set up the time at which your reply will be "required" instead of having to "watch carefully" for the end of the immediately preceding transmission (from the master). This lets you deal with bigger time intervals instead of bit-level timing. I.e., recognize the start of the beacon -- because that is usually a more repeatable "event" -- (and its content) and then IGNORE the serial line until you know it is your timeslot to reply (you don't care what the data on the bus happens to be from the other nodes -- why even receive it?).
[Unless you also want to use this scheme to allow the other nodes to be bus masters and directly deliver their messages to other nodes, instead of via the master. I.e., in time slot N, node N can send a message to ANY node (which may be a REPLY to a message sent from that other node, earlier). The master's significance is just to coordinate the timeslots. Note this is far more involved than anything discussed so far]
Note that you also know when your timeslot *ends* and the next begins. So, you can turn on the driver for the entire duration REGARDLESS of how long your message is -- subject to the constraint that you know it must fit in the time allotted. And, even before you are ready to transmit!
If you are sluggish getting onto the bus... No problem. As long as you hit *your* timeslot. I.e., you can be sluggish if you know your reply is short enough to still get out in the time remaining.
In other words, you can trade processing time for response time. If you have to digest a complex message from the master, just make sure your expected reply is SHORT: ACK vs. NAK instead of detailed.
If you need to deliver a detailed reply, then modify your protocol so that you can get some forewarning of its impending need. E.g., a message that effectively says, "prepare the detailed reply for me" to which you acknowledge (or NOT!) your understanding of the request AND your ability to deliver it (which might require scheduling resources that aren't normally available for that). Then, later, expect to get a message asking for the actual details (that you have presumably "staged" in anticipation of this).
[You can also modify the protocol so the master does this in three steps: "Prepare the details" "ACK/NAK" "Are you ready yet?" "ACK/NAK" "OK, give them to me!"]
It's just another way to get stuff out of ISR's and into lower priority processing.
It also lets you modify the protocol to give everyone a slot (if you have a small number of nodes) instead of having the master constantly prompting everyone (which wastes bandwidth as well as increasing the number of instances where the bus has to be handed off "per unit message exchange")
Finally, it allows the master to routinely distribute information that may be pertinent to all nodes (e.g., current system time).
I have no idea what your actual communications needs are (you haven't indicated an application, etc.). I offer it as an alternative approach to consider instead of the more obvious "ping-pong" of command-response that you appear to be pursuing. Only you can tell if it has merit in your case. If messages pass between nodes (via the master?) infrequently, then this wastes a lot of time on the bus -- because most timeslots will contain no information (though the corresponding node can be REQUIRED to deliver an acknowledgement if it is important for you to use that as a keepalive/verification that the node is still "up").
Of course, if the master is cyclically *polling* each of these nodes, then even MORE time is wasted (the time for the individual solicitations and the negative acknowledgements).
I use a similar scheme with Ethernet to manage multiple (~120) co-operating hosts and ensure all have "system data" that needs to be delivered periodically (the beacon is a "periodic token") as well as ensuring that the required nodes are "up".
[Of course, my nodes can all chat as necessary so it isn't used to arbitrate access to the medium]
Assuming the counter counts UP, the only way you can see a SMALLER value than the last one you saw was if the counter wrapped in the time between those two observations. I.e., the "real" value of the counter NOW is actually counter_modulus greater than the OBSERVED value.
If you want to know how much time has transpired, it is: now + (counter_modulus - then) i.e., the time from "then" to the counter wrapping PLUS the time from the counter wrapping ("0") to "now".
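In C, unsigned arithmetic does this modular correction for you. A minimal sketch, assuming a 16-bit free-running counter (the function name is mine):

```c
#include <stdint.h>

/* Hypothetical 16-bit free-running tick counter (modulus 65536).
 * With unsigned arithmetic the subtraction handles the wrap implicitly:
 * (now - then) mod 2^16 equals now + (modulus - then) whenever the
 * counter wrapped between the two observations. */
static uint16_t elapsed_ticks(uint16_t then, uint16_t now)
{
    return (uint16_t)(now - then);   /* wraps modulo 65536 */
}
```

So elapsed_ticks(65000, 200) yields 736, exactly the now + (counter_modulus - then) value described above.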
The "periodic interrupt" common in most systems is the "jiffy". There are probably better references (I'm pressed for time)
Ah, well... Ignore my previous reply!
Off to another pro-bono day. (Boy do I hate mornings!)
That is a very dangerous way to control the driver enable. Most likely the interrupt will happen when the start bit is sent which means the output will already be sending the start bit when the driver is disabled.
I think you are worrying about the wrong end of the message.
Really? "MUST"??? All the respondent is required to do is respond before the master times out thinking the respondent is not replying. This timeout should be set as required by system level requirements such as throughput, etc.
Don't make this as complex as the things you design.
The "beacon" is pointless, don't waste your time with it. It has no advantage over a simple poll/response protocol and in fact is just the same thing with more complication and no added benefit.
One point you should be aware of is that your start and end characters can be compromised and the protocol has to deal with that. So consider what happens when they are munged and not recognized. Make sure your protocol is robust to those problems.
Instead of calculating, you can also build the design, and then measure how long it takes and adjust the timeout value until error rate is sufficiently low. I'm assuming that error rate > 0 is acceptable.
Use a hardware timer, but it doesn't have to be just for this purpose. Often, you can still use match interrupts on a free running timer, you just have to adjust the match registers after each interrupt. I do this all the time.
Keep in mind that the receiver also needs to process the incoming packet, verify checksums, and prepare a response. You can use the dead time for that.
You're stuck with this problem because you are forcing processing and timeliness constraints into the ISR's. An ISR should *only* do what it ABSOLUTELY MUST. "In and out", lickity split!
Your description SUGGESTS that you are implementing your comms system as a state machine at the RxIRQ level, something like (pseudocode):
GetCharacter:
    retrieve character from comms hardware
    note error flags associated with this reception
    if any error, do some sort of error recovery/logging
        (or, do state specific error recovery, as appropriate)
    ret
AwaitSoH:
    header = GetCharacter()
    if (header != Start_of_Header)
        diagnostic("SoH not received when anticipated")
        // leave RxIRQ as is; remain in the state awaiting SoH
    else
        set_RxIRQ(SoHReceived)
    return from interrupt
// the above assumes SoH doesn't occur in a message body. If it // does, then you revisit this state occasionally as you sync up to // the data stream
SoHReceived:
    address = GetCharacter()
    if (address != MyAddress)
        diagnostic("message APPARENTLY not intended for me")
        set_RxIRQ(AwaitSoH)        // a simplification, for illustration
    else
        message_length = 0         // prepare for payload to follow
        InitializeChecksum(address)
        set_RxIRQ(AccumulateMessage)
    return from interrupt
AccumulateMessage:
    datum = GetCharacter()
    buffer[message_length++] = datum
    UpdateChecksum(datum)
    if (message_length >= MESSAGE_LENGTH)    // final payload byte stored
        set_RxIRQ(AwaitChecksum)
    return from interrupt
AwaitChecksum:
    checksum = GetCharacter()
    if (checksum != computedChecksum)
        diagnostic("message failed integrity check")
        error
    else
        parse message (unless you've been doing this incrementally)
        act on message
        prepare result
        wait until master has turned off its bus driver
        turn on your bus driver and transmitter (or, have the scheduler do so)
        schedule your reply for transmission
    set_RxIRQ(AwaitSoH)
    return from interrupt
[Note that you may, instead, have folded all of this into one static RxIRQ by conditionally examining a "state variable" (byte counter?) and acting accordingly:
    if (byte_count == 1)
        check if correct header
    else if (byte_count == 2)
        check if correct address
    else if (byte_count ...
I manipulate the interrupt vector instead as each little ISR "knows" what the next ISR should be so why introduce a bunch of conditionals?]
And, your TxIRQ (once the reply has been scheduled):
TxIRQ:
    SendCharacter(buffer[message_length++])
    if (message_length >= MESSAGE_LENGTH)    // final byte handed to UART
        set_TxIRQ(ShutdownTx)
    return from interrupt
ShutdownTx:
    wait until last character cleared transmitter (NOT holding reg)!
    wait until line has stabilized (bus driver)
    turn off bus driver
    set_TxIRQ(none)
    return from interrupt
[Of course, message lengths can vary between Tx and Rx, etc]
So, all of your IRQ's are lightweight. EXCEPT the "AwaitChecksum" state. There, you have to do a fair bit of processing ON THE HEELS OF the final character in the master's transmission (you could add an additional "trailer" but that's just more characters to receive and process and doesn't fundamentally change the algorithm).
The delays ("wait until...") are all relatively short. Yet, not necessarily "opcode timing" short. So, you sort of have to sit around twiddling your thumbs until you think you've met the required delays (you can't make any *observations* to tell you when the time is right... when the master has turned off its bus driver, etc.)
Or, throw some hardware resource at the problem (small interval timing)
All of that waiting wants to be done OUTSIDE the ISR. Yet, you can't afford for it to be unbounded -- because your master no doubt expects a reply in *some* time period else a dropped message would hang all comms! (and it probably uses a lack of reply to indicate a failure of your node)
I.e., there are LOWER and UPPER bounds on when you can start your reply. Too soon and you collide with the tail end of the master's transmission; too late and you risk the end of your reply running into the master's next transmission.
[Or, you can add a timer to the master so that it doesn't start its next transmission until it is *sure* you are finished transmitting]
Likewise, all of the *processing* that isn't time critical (or, SHOULDN'T be!) wants to happen outside the ISR.
If, instead, you could note the time at which a "request" from the master was sent and use that as a reference point from which to determine when you would have a CHANCE to deliver a reply ("timeslot"), then you can do all of this processing AND waiting outside of the IRQ.
[If you force that time to be JUST the duration of the master's message, then you don't have any real leeway -- you have to act promptly! You're stuck with your present dilemma.]
For example, assume you have a 1ms periodic interrupt (change to suit your needs). Assume you are delivering data at 9600 baud (change to suit your needs). Assume messages from the master are M characters long.
[I've chosen numbers that make the math relatively easy so you don't have to dig out a calculator. Changing the values just changes the math. I.e., characters are arriving roughly at the same rate as your periodic interrupt -- though they aren't guaranteed to be in a particular phase relationship with it. (this is not a requirement of this approach, just a coincidence for the numbers I have chosen)]
On a particular node, you notice a "SoH" received sometime between periodic interrupt S-1 and S (because you modify your AwaitingSoH ISR to signal an event that you can then examine -- or, let it capture the "periodic counter" *in* the ISR and post that time value as the "event").
You KNOW the master's message will not be complete until at least (S-1)+M but definitely before S+M -- because you have *designed* to this goal!
Furthermore, you know that your timeslot is offset X ms from the StartOfHeader in the master's message (i.e., time ~S). You KNOW that you can't safely turn on your bus driver until S+X (because those are the rules of the protocol) but you *do* know that the master will have turned his bus driver off by then (because it is following the same rules!)
So, you schedule a job that turns on your bus driver at S+X and pushes *a* reply onto the bus.
This need not be *the* reply to the message from the master sent at time S! It may, instead, be a reply to the message sent by the master at the *previous* "time S". Or, the one before that!
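The slot arithmetic above can be sketched in a few lines of C. All the names and numbers here are mine, purely illustrative: jiffies are 1ms, a character takes ~1ms at 9600 baud (10 bits/char), and node n's slot opens a guard interval plus n slot-widths after the master's message completes.

```c
#include <stdint.h>

/* Illustrative timeslot arithmetic for the beacon scheme: the master's
 * message that started near jiffy S (soh_jiffy) is done by about
 * S + msg_chars (at ~1 ms/char); node n's slot opens a guard interval
 * plus n slot-widths after that. */
static uint32_t slot_open(uint32_t soh_jiffy, uint32_t msg_chars,
                          uint32_t guard_ms, uint32_t node,
                          uint32_t slot_width_ms)
{
    return soh_jiffy + msg_chars + guard_ms + node * slot_width_ms;
}
```

A node then just compares slot_open(...) against the running jiffy counter (minding counter wrap) and only then raises its driver enable -- no bit-level timing required.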
Instead of tail-gating the master's message and trying to reply as soon as the last character in its message has cleared the medium (or, some epsilon later), you decouple your replies from the master's requests.
With the timeslot, you *know* you can send a reply EVEN IF THE MESSAGE FROM THE MASTER IS NOT FOR YOU! I.e., you don't have to check the address, decode the message, act on it AND compose your reply *now*. Likewise, other nodes know that they can send THEIR replies even when the current message is for *you* and not them!
You just support some number of outstanding messages to each node (perhaps just "1" but bigger numbers are better) and tag them with a (small) sequence number -- so the master can pair replies to outstanding requests AND so you can see when the master has given up on an "old" request that you perhaps forgot to acknowledge.
[E.g., you can conceivably reply to message 2 before replying to message 1 -- if that makes sense in your current execution environment. If not, then Reply2 has to wait to be scheduled until Reply1 has been sent. You are free to arrange those criteria as fits your processing capabilities. You don't HAVE TO reply to the message *now*.]
Note that this can be scaled by supporting a smaller number of timeslots than there are physical nodes in the system -- the master can allow nodes (1 - Q) to use the Q timeslots following *this* message (before it issues the NEXT message) and nodes (Q+1 - 2Q) to use the Q timeslots following the NEXT message.
The point is, each node knows AHEAD OF TIME when it can reply (when it can turn on its bus driver) instead of having "very little notice" and having to react promptly.
It also allows the nodes to know that communications are "fair" and "deterministic". Any node knows how long it must wait before it is *guaranteed* a chance to access the medium. (if there was no guarantee, then nodes wouldn't have been able to PREDICT when they should acquire the medium and place their messages on it)
You've moved the:
    parse message (unless you've been doing this incrementally)
    act on message
    prepare result
steps from the ISR into a lower priority task where you, presumably, have more leeway in addressing those needs (than you would in an ISR that wants to be short!). *All* of your ISRs are now short (because they just empty the receiver or stuff the transmitter and don't do any *decoding*, processing, etc.)
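A sketch of that ISR/task split (all names are mine, not from any particular OS): the RxIRQ does nothing but queue the byte into a single-producer/single-consumer ring buffer; the parsing drains it later at task priority.

```c
#include <stdint.h>
#include <stdbool.h>

/* Single-producer/single-consumer ring buffer: the RxIRQ only stores the
 * byte and advances the head; the background task drains via the tail.
 * With one writer and one reader, no locking is needed. */
#define RB_SIZE 64u

static volatile uint8_t  rb_buf[RB_SIZE];
static volatile unsigned rb_head, rb_tail;

static bool rb_put(uint8_t b)              /* called from the ISR */
{
    unsigned next = (rb_head + 1u) % RB_SIZE;
    if (next == rb_tail)
        return false;                      /* full: drop, count overruns */
    rb_buf[rb_head] = b;
    rb_head = next;
    return true;
}

static bool rb_get(uint8_t *b)             /* called from the task */
{
    if (rb_tail == rb_head)
        return false;                      /* empty */
    *b = rb_buf[rb_tail];
    rb_tail = (rb_tail + 1u) % RB_SIZE;
    return true;
}
```

The ISR stays "in and out, lickity split"; the state machine, checksum, and reply preparation all move to whatever priority level suits them.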
It can often work fine - at worst, the receiver sees a brief noise at the end of the telegram itself, and that is easily ignored.
However, if you are worried that the "transmission complete" interrupt might be delayed too long, then clearly the same thing will apply to the final "transmit character" interrupt for your extra character. So it is a useful trick if you don't have a "transmission complete" interrupt signal, but not for the problem at hand here.
Another way of dealing with '550 style UARTs on RS-485 is to use a driver chip that doesn't disable the receiver during your own transmit. Thus the UART Rx pin will hear your transmission and the Rx ISR can do "echo canceling" by monitoring your own transmission. As soon as the Rx interrupt hears your complete final transmitted byte, the Rx ISR can turn off the transmit enable (RTS) and change the Rx mode from echo canceling to normal Rx mode.
This makes it possible to do all things in the ISR and you don't have to do anything in normal code (with potentially bad latencies in some OS).
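A rough sketch of that echo-canceling Rx ISR (the variable names and the driver-enable plumbing are mine; a real ISR would also compare each echo against the byte actually sent, to detect collisions):

```c
#include <stdint.h>
#include <stdbool.h>

/* While we transmit, the driver chip leaves the receiver enabled, so each
 * of our own bytes comes back through the UART. The Rx ISR counts them
 * off and releases the driver enable when the final echo arrives. */
static unsigned tx_pending;   /* bytes of our own frame still echoing back */
static bool     driver_on;    /* state of the RS-485 driver enable (RTS)  */

static void rx_isr_byte(uint8_t b)
{
    (void)b;                  /* could also verify echo == byte we sent */
    if (tx_pending > 0) {
        if (--tx_pending == 0)
            driver_on = false;    /* final echo seen: release the bus */
        return;                   /* echo canceled: not passed upward */
    }
    /* normal receive path: hand b to the frame parser */
}
```

Since the last echoed byte arrives only after its stop bit has been received, the driver is dropped strictly after the final character has cleared the wire -- no shift-register-empty interrupt needed.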
No, my point was that Dave's suggestion of sending an extra byte is not "very dangerous" as you suggest, and can be a useful trick. It is not needed here as the OP has a "transmission complete" interrupt which triggers when the final send is complete. Many other microcontrollers and UARTs don't have a suitable interrupt on the final character (or have flaws with it, such as an interrupt that triggers at the start of the stop bit - turning off the driver at that point can cause hard-to-trace problems). For such micros, sending an extra byte can be a good solution.
But as far as I can remember (it's a long time since I used an AVR), the AVR's "transmission complete" interrupt works fine.
Please tell us, how do you get a transmitter [shift register] empty interrupt on '550 style UARTs?

You can get an interrupt when the last byte is loaded into the transmit SR, but you need to poll some status bits to know when the last byte has been shifted out of the SR.
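Those status bits are in the 16550's Line Status Register: THRE (bit 5) only says the holding register is empty, while TEMT (bit 6) says the shift register has drained too. A small sketch (the function name is mine; register access is platform-specific and omitted):

```c
#include <stdint.h>
#include <stdbool.h>

/* 16550 Line Status Register bits */
#define LSR_THRE 0x20u   /* bit 5: transmit holding register empty       */
#define LSR_TEMT 0x40u   /* bit 6: transmitter (holding + shift) empty   */

/* True only when it is safe to drop the driver enable: TEMT means the
 * last byte has fully left the shift register (THRE alone fires while
 * the final character is still going out on the wire). */
static bool tx_fully_done(uint8_t lsr)
{
    return (lsr & LSR_TEMT) != 0;
}
```

The shutdown step then busy-waits on tx_fully_done(read_lsr()) before turning off the RS-485 driver -- ugly, but it is all the '550 gives you (and note the caveat below about implementations where even this bit goes active early).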
There have been claims that the SR empty status dropping at the end of the last data bit, but before the stop bit(s), might be a problem. However, with standard "fail-safe" termination, the line is in the idle state during the stop bit times anyway.
Turning off the transmitter a few bit times too late is usually not a problem, except at extremely low line rates. A full character of delay can be catastrophic, though, if the other station responds rapidly.
In practice (I have seen several examples) there is a stupid implementation in many devices, in which the designer relies on the "last byte moved to the SR" interrupt and prematurely turns off the transmitter, so the line floats to the idle "1" state. Since the UART sends LSB first, the premature Tx disable will set the MSB bit(s) to 1, so 0x0? is received as 0x8?, 0xC?, 0xE?, 0xF? or even as 0xFF.
Since many protocols put BCC/CRC into the last byte, the received and calculated value differ by those high bits, which is a clear sign of premature Tx disabling.
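The corruption pattern is easy to model: with LSB-first transmission, if the driver drops after only k data bits have made it onto the wire, the receiver reads the remaining high bits at the idle "1" level. A sketch (the function name is mine):

```c
#include <stdint.h>

/* Model of premature RS-485 driver disable: the UART sends LSB first,
 * so if only bits_on_wire data bits reach the line before the driver
 * drops, the receiver sees all higher bits forced to the idle "1". */
static uint8_t truncated_rx(uint8_t sent, unsigned bits_on_wire)
{
    return (uint8_t)(sent | (0xFFu << bits_on_wire));
}
```

This reproduces the signature described above: a transmitted 0x05 arrives as 0x85, 0xC5, ... or 0xFF, depending on how early the driver was cut -- and since the corrupted byte is often the BCC/CRC, the mismatch shows up as a systematic high-bit error.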
This problem becomes worse as the line speed is dropped. Some devices can't be used below 9600 bit/s rates due to this problem.
We're talking about a 16550. There _is_no_ transmitter empty interrupt. There is a transmit holding register empty interrupt, but that happens _before_ transmission of the last byte has begun.
There is a transmit shift register empty status bit (no interrupt). In my experience that status bit isn't reliable either and on some implementations goes active before the final stop bit has been sent.
-- Grant Edwards
I don't know who "we" is, but the OP never said what UART he is using. I had the impression it was a UART within an MCU from his initial post where he refers to "some microcontrollers" toggling an output. Did he say he is using a '550' type UART?
I just don't know that putting the glitch on the bus is a good idea. Minimizing the glitch depends on a fast response to the interrupt which is most of what this thread has been discussing. A slow response puts a larger glitch on the bus.
Personally I prefer to use hardware which is designed for the job and will handle the driver enable properly.
I was thinking about your suggestion, but it seems to me it doesn't work well.
Others suggested sending some dummy bytes, keeping the RS485 driver *disabled*. Only after those bytes are really shifted out is the driver enabled. In other words, you use the UART hardware also for timing, without the need for other peripherals (such as timers/counters).
Your suggestion seems better: send some sync bytes at the beginning, with the RS485 driver *enabled*. My protocol can be used with this approach, because it is inspired by HDLC. Every frame starts and ends with a sync byte. If the sync byte appears in the payload data, it is escaped (as in the HDLC or SLIP protocols).
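For reference, the SLIP flavor of that escaping is tiny. A sketch using the RFC 1055 byte values (the function name and buffer convention are mine):

```c
#include <stdint.h>
#include <stddef.h>

/* SLIP-style byte stuffing (RFC 1055 values): the frame delimiter END is
 * escaped inside the payload, so a delimiter on the wire always means a
 * real frame boundary. */
#define END     0xC0u
#define ESC     0xDBu
#define ESC_END 0xDCu
#define ESC_ESC 0xDDu

/* Writes the stuffed frame into out (caller sizes it for the worst case,
 * 2*len + 2) and returns the byte count, delimiters included. */
static size_t stuff_frame(const uint8_t *in, size_t len, uint8_t *out)
{
    size_t n = 0;
    out[n++] = END;                 /* opening delimiter flushes line noise */
    for (size_t i = 0; i < len; i++) {
        if (in[i] == END)      { out[n++] = ESC; out[n++] = ESC_END; }
        else if (in[i] == ESC) { out[n++] = ESC; out[n++] = ESC_ESC; }
        else                   { out[n++] = in[i]; }
    }
    out[n++] = END;                 /* closing delimiter */
    return n;
}
```

Back-to-back END bytes between frames are exactly the "empty frames" mentioned below: the receiver just discards them.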
The device could send N sync bytes without problems. The receiver will see N empty frames and discard them silently. This way, it's even simpler to introduce a delay before the answer.
But I think it doesn't work. Bytes are sent asynchronously and the receiver must synchronize to the start bit of each byte. If the slave sends 2 sync bytes at the front of each frame, without a delay, and the master toggles direction in the middle of the two sync bytes, the master will receive one or two wrong bytes or detect framing errors, depending on the precise timing and transition pattern.
Moreover, if the payload is transmitted immediately after sync bytes, as really happens, the overall frame could be corrupted.
*Perhaps* this technique works well only if the preamble bytes are 0xFF, because they appear on the wire as just a single start bit (all the other bits, including the stop bits, are at the idle level).
 ---+ +----------------+ +----------------+ +-.........
    | |    PREAMBLE    | |    PREAMBLE    | |  START OF FRAME
    | |      0xFF      | |      0xFF      | |
    +-+                +-+                +-+
   ^  ^  ^
   A  B  C
If the master toggles direction at time A, it will receive the two preamble bytes correctly and could discard them. The frame is received correctly. If it toggles direction at time C, it will receive only one preamble byte correctly and could discard it. The frame is received correctly.
What happens if the master toggles direction at time B, in the middle of a start bit? I don't know if the UART detects a start bit on the high-to-low edge or on the low level. In the first case, I think there's no problem. In the second case, what happens?
It does - I have been using it for years (20+) in several different RS-485 links.
You also need a reliable way of recognizing frame boundaries, to get the framing right. I'm using an encapsulation similar to PPP (RFC 1662), which also makes it possible to exclude the preamble bytes from valid frame data.