MODBUS RTU specifies a "settle time" of 3.5 character times after the last byte is sent. That's on the order of 0.9 ms at 38,400 baud.
But that's a *minimum*.
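As a sanity check, that 3.5-character gap can be computed directly. A minimal sketch, assuming 11 bits per character on the wire (start + 8 data + parity + stop, the usual MODBUS RTU framing); the function name is made up:

```c
#include <stdint.h>

/* Minimum MODBUS RTU inter-frame gap (3.5 character times) in
   microseconds, assuming 11 bits per character on the wire.
   3.5 chars * 11 bits = 38.5 bit times, kept as 385/10 so the
   whole computation stays in integer arithmetic. */
uint32_t t35_us(uint32_t baud)
{
    return (385UL * 1000000UL) / (10UL * baud);
}
```

At 38,400 baud this gives roughly 1000 us, matching the order of magnitude quoted above.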
You have to know the worst-case timing. If you can't control the slave nodes, then you'll have retransmissions due to collisions if the ISR is sufficiently nondeterministic.
Since it's half-duplex, I'm wondering why you get an interrupt other than TX-complete in that state.
You simply have to trade speed for determinism in this case. And there will be collisions. If nothing else, add counters of bad CRC events and no-response events and tune a delay.
485 isn't a good protocol these days. The kids get Ethernet on RasPi class machines for class projects; it's not unreasonable to use a real comms stack.
You must be quite desperate if you intend to use 1x550 style chips on RS-485 :-). That chip family is useless for any high-speed half-duplex communication.
You can get an interrupt when you load the last character into the Tx shift register, but you can't get an interrupt when the last bit of the last character is actually shifted out of the Tx shift register.
In the real world, some high-priority code will have to poll to determine when your transmission has actually sent the last stop bit of your last byte onto the line.
And (in my experience) figuring out when that stop bit has been sent can be problematic. Not all '550 "compatible" UARTs wait until the end of the stop bit to set the "transmit shift register empty" status bit. Some I've used set it as soon as the last data bit has been sent, and if you turn off the driver at that point, you can lose the stop bit and create a framing error.
--
Grant Edwards grant.b.edwards Yow! My polyvinyl cowboy
at wallet was made in Hong
Ethernet has a minimum time between packets that is independent of most other timing parameters. The rationale is that it gives the receiver enough time to get the data out of its buffer and get ready to receive again.
For the usual asynchronous serial systems, the stop bit is the same level as the inactive state of the line. As long as you don't start the next character too soon, you are safe.
If you are using it for a multiple driver line, seems to me that you are using it for something that it wasn't designed to do.
That's true, but I don't know why it's relevant. We're talking about knowing when to turn off RTS at the end of the last byte in the message. If you turn it off immediately after the last data bit, and the level of the last data bit is opposite from the required stop-bit (idle) state, then you end up with problems unless the line is biased strongly enough to return the line to its idle state in less than about 1/8 of a bit time. In my experience, a lot of installations end up with no bias resistors at all...
There is no next character. We're talking about the last byte in a message.
That's exactly what RS485 is designed to do, but for it to work reliably, you have to leave RTS on during a good portion of the stop bit so that the driver can actively force the line back to the idle state.
Unless your processor is very primitive, you should be able to make the serial interrupt the highest priority. Or take David Brown's suggestion and disable all but the serial interrupt when you start to transmit the last byte.
I believe you're going about the last half of this backwards. Do not calculate the worst-case interrupt latency -- specify it, and make it a requirement on the slave boards. This should be easy enough to do if you are in charge of all the software, and still quite doable if you're only in charge of the communications software (assuming a functional group).
In a UART without a FIFO, an easy way to do this would be to send one or more bytes with the transmitter disabled, then turn on the transmitter at the appropriate time. Basically, use the UART as your timed event generator.
In my experience, unless you're really using a high baud rate and a slow processor, or if your ISR's are just plain incorrectly written, your interrupt latency will be far lower than a bit interval.
Device sat in a hard-to-access location and had pretty extreme environmental conditions (heat, vibration, etc.). The traditional T's (or F's if you preferred that orientation) just weren't very good at long term reliability. So, the physical connections were "adjusted" to more appropriately address those needs.
I'd assume you could still hack together a suitable PHY. (?) Not sure how *economical* it would be, though...
Indeed my approach is to use a volatile uint16_t ticks variable incremented every 1 ms in a timer ISR. Of course the variable overflows "naturally" from 65535 to 0. Taking the wrap-around into account, I can manage delays up to 65536/2 = 32768 ms, about 32 seconds, which is enough for many applications. When I need longer delays, I use uint32_t. Consider that my ticks variable is a "software counter", not a hardware counter (the hardware counter is used to generate the 1 ms timer interrupts).
On 8-bitters, I read the ticks variable after disabling interrupts, just to be sure the operation is atomic. Wouter's approach is new to me and very interesting. I'll try to use it in the future.
I think it can be used to read two 8-bit registers or two 16-bit registers (if the architecture allows an atomic read of a 16-bit hardware counter).
Wouter's original approach doesn't take wrap-around into account, because he uses a very wide 64-bit counter that reasonably never reaches its maximum value during the lifetime of the gadget (or of the developer). For 16-bit or 32-bit counters in 24h/24h, 7d/7d applications, the wrap-around *must* be considered, reducing the maximum delay to a half. Anyway, this is a big issue.
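The wrap-around handling and the half-range limit can be made concrete. A minimal sketch, assuming a 1 ms tick kept in a uint16_t; the names are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

/* Normally 'volatile' and incremented in a 1 ms timer ISR; a plain
   variable here so the sketch is self-contained. */
static uint16_t ticks;

/* True once at least 'delay_ms' have elapsed since 'start'.
   The unsigned subtraction is exact modulo 65536, so the test stays
   correct across the 65535 -> 0 wrap, provided delay_ms <= 32767
   (half the counter period). */
static bool elapsed(uint16_t start, uint16_t delay_ms)
{
    return (uint16_t)(ticks - start) >= delay_ms;
}
```

For example, with start = 65530 and ticks = 5 (i.e. 11 ms later, across the wrap), (uint16_t)(ticks - start) evaluates to 11.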
The only problem I see with using hardware counter is that it is quite impossible to have a nice counting frequency, such as 1ns, 1us or 1ms. Mostly hardware timer/counter peripherals can be feed directly by the main clock or after a prescaler. Usually prescaler values can be 2, 4,
8, 256 or similar, that brings to an odd final frequency. With a "software" ticks counter incremented in a timer ISR, it's simpler to calibrate the hardware counter to trigger every 1ms or similar nice values.
If you *know* you will always look at a value "more often" than the wraparound period, you can always *deduce* wraparound trivially:
unsigned now, then;
...
if (now < then)        /* the counter wrapped since 'then' was sampled */
    now += counter_modulus;
(effectively)
Anything you can do to AVOID disabling interrupts (or, to allow you to re-enable them earlier) tends to be a win.
Ideally, you don't ever want to unilaterally disable (and, later, re-enable!) interrupts. Instead, each time you explicitly disable interrupts you want to, first, make note of whether or not they were enabled at the time (assuming this isn't implied).
Then, later, when you choose to re-enable them, you actually want to RESTORE them to the state that they were in when you decided they should be disabled.
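That save/restore discipline can be sketched as follows. irq_save()/irq_restore() are hypothetical primitives, modeled here with a plain flag so the sketch is self-contained; on an AVR the real versions would save SREG, execute cli(), and write SREG back:

```c
#include <stdbool.h>

/* Model of the global interrupt-enable flag; on real hardware this
   is a CPU status bit (e.g. the I bit in the AVR SREG). */
static bool irq_enabled = true;

/* Disable interrupts and return the state they were in. */
static bool irq_save(void)
{
    bool was = irq_enabled;
    irq_enabled = false;
    return was;
}

/* Restore the saved state instead of blindly re-enabling, so that
   nested critical sections compose correctly. */
static void irq_restore(bool was)
{
    irq_enabled = was;
}
```

A nested critical section then leaves interrupts exactly as it found them: the inner irq_restore() keeps them disabled because the outer section had already disabled them.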
With careful consideration, you can read any width counter/timer (though the granularity of your result will vary).
[keeping in mind that IRQ's can come into this at any time -- including REPEATEDLY!]
If you had a proper RTOS, you could ask the OS to schedule a task at some specific interval after the "event of interest". It would then GUARANTEE that at least N time units had elapsed (and not more than M).
Doesn't matter. Do the math ahead of time (e.g., at compile time) and figure out what (value) you want to wait for.
Timer IRQ's (esp the jiffy) are a notorious source of problems. Too *often*, too *much* is done, there. (e.g., reschedule())
It's harder -- but not discouragingly so -- to move stuff out of the jiffy. But, once you do so, you tend to get a lot more robust/responsive system.
E.g., the "beacon" scheme I mentioned (elsewhere) allows you to pre-determine what your actions will be... then, lay them in place when the "event of interest" occurs in a very timely manner -- without doing any "work" in IRQ's, etc. You've already sorted out what *will* be done and are now just waiting for your "cue" to do so!
For example, if you know the beacon message will be N time units (based on number of characters and bit rate), you can concentrate on detecting the beacon -- and nothing more -- PROMPTLY. Then, arranging for your code to run N+epsilon time units after that event (instead of trying to watch each byte from that beacon message in the hope of finding the end of the message).
This sort of scheme can easily allow every node (in a modest cluster) to indicate that it needs attention: each node responds to a "polling broadcast" in its individual timeslot with an indication of whether or not it "has something to say". (The master node then takes note of each of these and, later, issues directed queries to those nodes that "need attention".)
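The timeslot arithmetic behind that polling scheme is trivial; a sketch with a made-up function name and a made-up 5 ms slot width:

```c
#include <stdint.h>

#define SLOT_MS 5u   /* hypothetical per-node reply window */

/* Time at which node 'node_id' may start its reply, counted from t0,
   the instant the polling broadcast ends.  Node 0 answers first,
   node 1 five milliseconds later, and so on -- so replies never
   collide on the shared line. */
static uint32_t reply_start_ms(uint32_t t0, uint8_t node_id)
{
    return t0 + (uint32_t)node_id * SLOT_MS;
}
```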
If it hasn't been said (and, if your environment can accommodate it), you might want to look at a different signalling/comms technology that allows for a true party-line (resolving contention in hardware).
I usually use the following comparison to understand if a timer tmr has expired.
((uint32_t)(ticks - tmr) >= timeout)
Oh yes, I know.
In my applications, I disable interrupts only when managing timers, so I'm sure interrupts are enabled when I try to access the 16-bit or 32-bit counter. Of course, I never use timers in ISRs.
What do you mean by "jiffy"? Are you calling my approach a "jiffy"? I didn't understand.
I'm sorry, I think I completely failed to understand what you have written :-( I hope you don't mind explaining again in greater detail; do you have a link where I can study this "beacon" approach?
I have just one checksum at the end of the message, but the address field is at the beginning. Anyway I look at the address field as it arrives to decide if it's a frame for me.
I know the address field could be corrupted, but IMHO it's not important. If the address field is corrupted so that it appears to be mine when the master wanted to talk to another node, I store the message till the end, but the checksum will be wrong, so the frame will be discarded. If the address field is corrupted so that it doesn't appear to be mine when the master really wanted to talk to me, I discard the message early.
IMHO, adding a new checksum at the beginning of the frame, only to protect the address field, doesn't add more robustness to the final performance.
I think I'll have to think more deeply about this beacon approach. Any useful info on the Internet?
I'm using Atmel AVR8 controllers. I can't change interrupt priorities (they are hard-wired in the device). IMHO, anyway, it's not a matter of priority but of the lack of a *nested* interrupt controller: an ISR can never be interrupted by a higher priority interrupt (in this case, transmit complete).
This is a good suggestion, even if it isn't simple. I should save the status of all IRQs, disable them, and reactivate the ones that were originally active.
Note that "Wouter's algorithm" (giving him his two minutes of fame, until someone points out that he didn't actually invent it...) is easily extendible. On an 8-bit system with 64-bit counters, the "read_high" should read the upper 56 bits, and the "read_low" reads the low 8-bit (or use a 48-bit/16-bit split if you can do an atomic 16-bit read of the counter hardware, which IIRC is possible on an AVR).
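A sketch of that split on a hypothetical 8-bit timer: the high part is kept in software by the overflow ISR, and the hardware counter is simulated by a plain variable so the example is self-contained (names are made up):

```c
#include <stdint.h>

static volatile uint32_t ticks_hi; /* upper 24 bits, bumped by the
                                      timer-overflow ISR            */
static uint8_t hw;                 /* stand-in for the 8-bit
                                      hardware counter register     */

static uint8_t hw_count(void) { return hw; }

/* "Wouter's algorithm": read high, read low, re-read high, and retry
   if an overflow ISR ran between the two reads of the high part.
   No interrupt disabling is needed. */
static uint32_t read_ticks(void)
{
    uint32_t hi1, hi2;
    uint8_t lo;
    do {
        hi1 = ticks_hi;
        lo  = hw_count();
        hi2 = ticks_hi;
    } while (hi1 != hi2);
    return (hi1 << 8) | lo;
}
```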
Another variation that might be easier if your counter is running relatively slowly (say 10 kHz) is just:
a = read_counter();
while (true) {
    b = read_counter();
    if (a == b)
        return a;
    a = b;
}
(Yes, the run-time here is theoretically unbounded - but if your system is so badly overloaded with ISR's that this loop runs more than a couple of times, you've got big problems anyway.)
Wrap must be considered, but it is not necessarily a problem. Just make sure you deal with differences in times rather than waiting for the timer to pass a certain value.