It's pretty straightforward if you can use a timer capture pin; it takes care of grabbing the timer count on the active edge so you can examine it at leisure. Essentially, you want to listen to the incoming serial stream and measure the narrowest pulse; when you've seen enough identical pulses, that determines the rate. If you get a narrower pulse (that is still wider than a glitch), start the count over.
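For what it's worth, the "grab the count on the active edge" part reduces to an unsigned subtraction of successive capture timestamps; a minimal sketch in C (the 16-bit timer width is an assumption, and on an ATmega the two values would be successive reads of ICR1 from the capture ISR, toggling the edge-select bit in between):

```c
#include <stdint.h>

/* Pulse width from two successive input-capture timestamps.  Plain
   unsigned 16-bit subtraction handles a single timer wraparound for
   free: (this - prev) is taken modulo 65536. */
uint16_t pulse_width(uint16_t prev_capture, uint16_t this_capture)
{
    return (uint16_t)(this_capture - prev_capture);
}
```

The only catch is that the timer must not wrap more than once between edges, which sets a floor on the usable baud rates for a given prescaler.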
A little more robust is to add N to an accumulator when you see a pulse within epsilon of the current narrowest width, subtract M if there is a wider pulse, or reset to zero and start over if there is a narrower one. When the accumulator reaches Q, you've found the pulse width. If it counts down to zero (from too many wider pulses), assume you saw a narrow noise pulse and start over. A divide-by-16 (>>4) works pretty well for epsilon.
The problem is that if a rogue pulse is detected then you could end up waiting forever for the confirming pulses. Say you expect baud rates from 4800 to 115200, corresponding to pulses from 2400 to 100 ticks wide (picking an artificial clock rate) and selected a minimum of 80 ticks (where 80 or less is ignored as a glitch).
Say the actual baud rate was 4800 but before you sync'd to it a noise pulse from, say, inserting the connector was detected that was 1200 ticks wide. You'd wait forever watching the 4800 baud pulses (2400 wide) and never see another, so there has to be some mechanism to abandon the current minimum and start the search anew.
One way to avoid the trap is to add, say, 10 to an accumulator for every pulse matching the currently observed minimum, and declare a valid minimum when, say, a count of 50 is reached. Deduct 1 from the accumulator whenever a pulse wider than the current minimum is seen, and start the search over if the count ever gets back to zero (or if a narrower pulse is detected, of course).
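The accumulator scheme above fits in a few lines of C. This is only a sketch; the constants (+10 per match, -1 per wider pulse, 50 to lock, 80-tick glitch floor, epsilon = min/16) are the illustrative values from the discussion, and `feed_pulse` would be called with each measured width:

```c
#include <stdint.h>

#define MATCH_BONUS   10   /* added per pulse matching the minimum   */
#define WIDER_PENALTY  1   /* deducted per wider pulse               */
#define LOCK_COUNT    50   /* accumulator value that confirms a lock */
#define GLITCH_TICKS  80   /* anything this narrow is ignored        */

static uint16_t min_width;  /* current candidate minimum (0 = none) */
static int16_t  accum;

/* Feed one measured pulse width; returns the confirmed minimum
   width once locked, or 0 while still searching. */
uint16_t feed_pulse(uint16_t width)
{
    if (width <= GLITCH_TICKS)          /* ignore glitches */
        return 0;

    uint16_t eps = min_width >> 4;      /* epsilon = 1/16 of candidate */

    if (min_width == 0 || width + eps < min_width) {
        min_width = width;              /* new, narrower candidate */
        accum = MATCH_BONUS;
    } else if (width <= min_width + eps) {
        accum += MATCH_BONUS;           /* confirming pulse */
        if (accum >= LOCK_COUNT)
            return min_width;           /* locked */
    } else {
        accum -= WIDER_PENALTY;         /* wider pulse: lose confidence */
        if (accum <= 0) {
            min_width = 0;              /* candidate was probably noise */
            accum = 0;
        }
    }
    return 0;
}
```

Note how this escapes the trap described above: a 1200-tick noise pulse seen before real 2400-tick (4800 baud) pulses bleeds the accumulator down to zero after ten wider pulses, after which the search restarts and locks onto 2400.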
AIUI, the ATmega8 only allows input capture to happen on an "event" occurring either on the analog comparator output or on the dedicated ICP1 pin. Unfortunately, I'd need the latter for GPIO. (Naturally, ICP1 is distinct from the UART's RxD.)
I gave up several times on a similar project when I couldn't determine the characteristics of the data stream...or even if there was one.
Been thinking about this for an hour or so.
Take a random pulse width measurement and save it as mmpw, the minimum measured pulse width. Have a table of pulse widths for the allowable baud rates. Scan up the table until you find a matching number. If there isn't one, multiply the table by two, then three, then four... Eventually you should find a match. Program that into the USART and have the character input interrupt start looking for framing errors.
Take another random pulse width measurement. If it's smaller than mmpw, plug it into mmpw and restart. If it's larger, subtract the two numbers. The difference should be an integral multiple of the REAL pulse width for the actual baud rate. Divide the numbers. If the remainder isn't zero, mmpw isn't the correct bit time. I think the correct bit time is then an integer multiple of the remainder, but I haven't got my head fully around that one. Use those numbers to scan for a higher baud rate and an inferred new mmpw. Quit when you get tired of looking for narrower mmpw values, or for ones that aren't an integral multiple of the trial bit time, and you aren't getting framing errors.
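The divide-and-look-at-the-remainder step above is essentially Euclid's algorithm: every measured pulse is an integer multiple of the true bit time, so the gcd of the measurements converges on it. A sketch of that idea (it deliberately ignores measurement tolerance; real widths would first have to be snapped to the nearest multiple within epsilon, or the gcd collapses toward 1):

```c
#include <stdint.h>

/* Euclid's gcd on 16-bit tick counts. */
static uint16_t gcd16(uint16_t a, uint16_t b)
{
    while (b) { uint16_t t = a % b; a = b; b = t; }
    return a;
}

/* Fold one more measured pulse width into the running bit-time
   estimate; pass 0 as the estimate to start a fresh search. */
uint16_t refine_bit_time(uint16_t estimate, uint16_t measured)
{
    return estimate ? gcd16(estimate, measured) : measured;
}
```

For example, widths of 300 and 200 ticks (three bits and two bits at a 100-tick bit time) refine to 100 even though no single-bit-wide pulse was ever seen.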
Of course, there are all kinds of issues with timing synchronization, error bounds, glitches etc. that need to be dealt with. And the simplest way of thinking about it seems to result in a recursive algorithm which isn't nice on a simple processor...so maybe the details kill the concept...but it would be interesting to play with.
I like it because I think it converges even if you never catch an actual one-bit-wide pulse.
For any given algorithm [*] it's probably possible to construct a data stream that can fool it. I have found the "accumulator with decrement" method to be reliable and simple [**] to sync with ASCII (NMEA 0183) data that's always at 4800 baud except when it isn't. It certainly helps that on common 8N1 lines, ASCII's 0 MSB precedes the 1 of the stop bit and that is followed by a 0 for the next start bit.
It is a good idea to check the discovered pulse width against expected values for legal (for the application) baud rates and then take appropriate action if there isn't a match.
[*] The exception may be the "Hit the key until the terminal replies with OK" method. Even then it might be possible to find bogus inputs that "work" at the wrong baud rate for some keys.
[**] Always a plus when coming back to do maintenance on the code several years later.
If you've got lots of time and continuous input, you can try the timing approach, then try to verify your estimate by looking for known patterns in the data. If you can't find the pattern (or correct checksum, etc.), start over.
If the data has known patterns, and you can assume a limited set of standard baud rates, you can simply look for a known character stream in the data. I once used an instrument which would accept any standard baud rate. The manual said "when you power up the instrument, press the space bar repeatedly until the instrument responds with the startup message." The instrument simply stepped through baud rates with each incoming character until it found a space character.
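The instrument's trick amounts to a few lines of logic. A sketch, where the particular set of rates and the order tried are my assumptions, and the caller reprograms the UART with `rates[index]` whenever the index changes:

```c
#include <stdint.h>
#include <stddef.h>

/* Candidate standard rates (an assumed set), fastest first. */
static const uint32_t rates[] = { 115200, 57600, 38400, 19200, 9600, 4800 };

/* Each time a byte is received: if it decoded as a space, the
   current rate is correct; otherwise step to the next rate. */
size_t next_rate_index(size_t current, uint8_t rx_byte)
{
    if (rx_byte == ' ')                 /* locked: stay at this rate */
        return current;
    return (current + 1) % (sizeof rates / sizeof rates[0]);
}
```

A space (0x20) is a good probe character because a mis-clocked space rarely decodes as a space at the wrong rate, so false locks are unlikely in practice.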
You could apply the same approach, with a more complex algorithm, to any source that provides a data stream with known characteristics.
Since you are starting out on a new project, why use an obsolete processor? You can buy an LPC1113 (or someone else's ARM Cortex M0) for less money than an ATmega8, and it has much better timers with input capture. It also has a good 10x the performance!
(I vaguely recall that there already was a discussion on this.)
Apart from lacking any experience whatsoever with ARM, I'm also somewhat concerned about the ARMs /generally/ being unavailable in TQFP, SO or DIP packages, which may be important should I end up making a "kit" of the design I'm currently working on.
Besides, where can I find an ARM for $0.90 (or less, including shipping)? (At that price, I'd readily buy 10 or so.)
When using capture pins or simple bit banging, if something is known about the signal source, some heuristics can be used to reduce the possible combinations.
In asynchronous communication, the line idles in the Mark state (logical 1): 20 mA in a current loop, -12 V in RS-232, and so on. The "fail-safe" termination will pull the line to the idle state in RS-485. Each character starts with the start bit (Space, logical 0, 0 mA, +12 V...), followed by a number (usually 5 to 9) of data bits, followed by an optional parity bit, followed by 1, 1.5 or 2 stop bits (logical 1).
After the last stop bit, if there are no more characters to send, the idle period (logical 1) starts. The line remains in the idle state for an arbitrary time, unless there are more characters to send. While this time can be absolutely anything, typical UARTs start transmitting at clocked intervals, typically multiples of 1/16 of the bit clock period.
If there are a lot of characters to be transmitted, the start bit ("0") of the next character will be sent immediately after the last stop bit ("1").
However, in quite a few situations the line remains in the idle "1" state for longer or shorter periods, so detecting the idle state will help in detecting the start bit "0" after the idle period, and hence help in figuring out the timing.
For instance, if the signal source is a keyboard operated by a human, even with autorepeat and line speeds above 300 bits/s, there are quite long idle periods between characters.
Even with half-duplex protocols (request/response) on full-duplex-capable hardware, there are some idle periods between master requests and slave responses (Modbus in particular requires a 3.5-character idle period between request and response). If you are only listening to master requests, or only to slave responses, there are quite long idle periods between two requests or two responses.
Once you have identified the idle state, wait for the start bit's "1"->"0" transition. Most UARTs use some kind of false-start-bit detection to see if the line is still in the "0" state, either by sampling at the middle of the start bit or by taking three 1/16-bit-time samples and majority voting. Of course, this assumes that the bit rate is known.
Since the bit rate is not known here, check at the half-bit point of each candidate standard bit rate whether the line is still in the "0" state, and continue validating. If you get a "1" state at one of the sampling points, you can assume either that the original falling edge was not a true start bit (and discard it) or that the data rate is higher than you expected. Search for other alternatives.
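One way to sketch that half-bit check, assuming the line has been sampled at tick resolution from the falling edge onward (the function and buffer names are made up for illustration):

```c
#include <stdint.h>
#include <stddef.h>

/* samples[] holds line states (1 = mark, 0 = space) recorded once per
   timer tick, with samples[0] at the falling edge.  Keep only those
   candidate bit widths for which the line is still low at the middle
   of the presumed start bit; returns how many candidates survive. */
size_t validate_candidates(const uint8_t *samples, size_t n_samples,
                           const uint16_t *widths, size_t n_widths,
                           uint16_t *out)
{
    size_t kept = 0;
    for (size_t i = 0; i < n_widths; i++) {
        size_t mid = widths[i] / 2;     /* half-bit sampling point */
        if (mid < n_samples && samples[mid] == 0)
            out[kept++] = widths[i];
    }
    return kept;
}
```

Surviving candidates would then be narrowed further by checking later sampling points (data-bit centres, the expected stop bit) in the same way.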
After so many years with that "stupid vintage" serial communications protocol, we still do not have autonegotiation (and auto-baud-detection) built into the protocol definitions. Why not?
Why has nobody made a request-for-comments about that, so that so many people would not have to bother manually with a myriad of out-of-band signals and in-band signals (XON, XOFF)?
It is simply incredible that, after so many decades, you manually have to find out how to get it to work.
Please be inspired to release open and free RFC definitions now, so that "vintage" serial communication will work smoothly - and with backward compatibility and, of course, with auto "null-modem" functionality.
I am looking forward to all out-of-band signals being automatically mapped by software, by a series of signal perturbations and response measurements.
I know I am very demanding, but it ought to be possible? At least the software should detect and notify the user that a null-modem cable connection is required. But even that is a poor compromise.
The communications world (and its users) would be much happier with a full blown software solution.
Let us exterminate RS-232 jumper boxes. They are the ultimate time-eating stupid solution, and they show we have given up finding a better one:
Instead we should have an RS-232 autonegotiation box/cable that can be inserted between no-negotiation RS-232 equipment.
Having dealt with the RS232 protocol for many years, I am wondering why you are suddenly so vexed about it. The protocol predates computing devices, having been created as a communication protocol for teleprinters (originally with the 5-bit code, before the 7-bit and 8-bit codes we have today). Auto-negotiation takes intelligence at each end. As that was not available at the time, we just accepted the need to get things set up right before we started.
I am sure there is an official way to propose a new RFC if you need to. You could try and do that if you have some ideas that you would like to see implemented as standard. I am not sure you would get much support with such an old protocol though.
Paul E. Bennett...............
The problem isn't RS232, it's simply end points that don't talk the same language. You can have just as much grief with any other communication protocol, for exactly the same reasons. In fact, IME some others can deliver heaps more grief in setting up than serial async/RS232.
It's a matter of standards, and in the case of RS232 specifically, there's a lot to like. The voltage levels are compatible; as long as you don't violate the specified cable limits (and even if you exceed them by a reasonable margin), crosstalk won't cause errors, and drivers are specified against damage from wiring errors. Higher up the stack - not RS232 any more, which deals with electrical behaviour only - the ASCII character set is locked in, including various standard control sequences if you want to use them.
Higher up still, the situation is less clear, but that's due to the fact that there is a huge multiplicity of client devices involved, with little or no unifying behaviour in many cases, and multiple producers of equipment. If the devices at each end of the link come from the same supplier, then you're likely to find the setup plug-n-play. If they don't, then the chances of mismatch are greater, and the problem isn't clearly owned by anyone except yourself. This lack of ownership is a large part of the problem, and it can only be solved by more encompassing standards.
For starters, because I'm pretty sure there is no RFC mechanism in place for the body that this specification is from. If that body still exists, that is.
Actually no. It's exactly because of all those decades, and the myriad of devices already in the field, that this specification is, for all practical intents and purposes, immutable. No change you could come up with now would help one bit with all those devices. And if a change doesn't achieve anything for the overwhelming majority of applications, what point could there possibly be?