YOU may consider it a design flaw, but I have seen too many serial ports with this flaw to just totally ignore it.
Yes, the "robust" design will allow for a short stop bit, but you can't count on all serial adaptors allowing for it.
Part of the problem is that (at least as far as I know) the Asynchronous Serial Format isn't actually a "Published Standard", but just a de-facto protocol that is simple enough that it mostly just works, yet still hides a few gotchas in the corner cases.
I'm always curious about how things are implemented. I thought I had heard somewhere that the FTDI chip was a fast, but small processor. I design those for use in FPGA designs and they can be very effective. Often the code is very minimal.
That should have been, "cut back the number of commands".
Zero need for a processor in the FPGA at this point. At least the need for a conventional processor. The commands are things like, assert pin X, read pin Y. A test of some basic functionality that could be debugged separately from other tests would be a few of these instructions. Very easy to do in an FPGA by using memory blocks and stepping through the commands. But I'm open to a processor. It would be one of my own design, however.
That is exceedingly hard to imagine, since it would take extra logic to implement. The logic of a UART is to first detect the start bit, which lands the state machine in the middle of said start bit, and then time to the middle of each subsequent bit (ignoring timing accuracy). So it thinks it is in the middle of the stop bit when the bit timing is complete. It would need more hardware to time to the end of the stop bit. That hardware might be present for other purposes, but it should not be used to gate looking for the start bit. This is the definition of the async protocol: the stop bit time is used to resync to the next start bit. Any device that does not start looking for a new start bit at the point it thinks is the middle of the stop bit is defective by definition, and will never work properly with timing mismatches of one polarity, the receiver's clock being slower than the transmitter's.
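The receive sequence described above can be sketched as a little simulation. This is a minimal model, not any particular chip: the line is an array of samples, `bit_time` is ticks per bit, and the key point is that the receiver returns to hunting for the next start edge at the MIDDLE of the stop bit.

```python
def frame(byte, bit_time):
    # Build one frame: start bit (0), 8 data bits LSB first, stop bit (1).
    bits = [0] + [(byte >> b) & 1 for b in range(8)] + [1]
    return [lvl for lvl in bits for _ in range(bit_time)]

def receive_frame(samples, bit_time):
    i = 0
    while samples[i] == 1:        # hunt for the falling edge of the start bit
        i += 1
    i += bit_time // 2            # land in the middle of the start bit
    assert samples[i] == 0, "false start bit"
    byte = 0
    for bit in range(8):          # step one bit time to each data-bit centre
        i += bit_time
        byte |= samples[i] << bit
    i += bit_time                 # one more bit time: middle of the stop bit
    assert samples[i] == 1, "framing error"
    return byte, i                # resume hunting HERE, mid-stop-bit

line = [1] * 8 + frame(0x55, 16) + [1] * 16
value, resume = receive_frame(line, 16)
```

Because `resume` points at the middle of the stop bit, a start edge that arrives up to half a bit early is still caught, which is where the classic timing tolerance comes from.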
I guess I'm not certain that would cause an error, actually. It would initiate the start bit detection logic, and as long as it does not require seeing the idle condition before detecting the start bit condition, it would still work. Again, this is expected by the definition of asynchronous format. This would result in a grosser offset in timing the middle of the bits, so the allowable timing error is less. But it will still work otherwise. 5% is a very large combined error. Most devices are timed by crystals with maybe ±200 ppm error.
There's always garbage designs. I'm surprised I never ran into one. I guess being crystal controlled, there was never enough error to add up to a bit.
True, but anyone designing chips should understand what they are designing. If they don't, you get garbage. I learned that lesson in a class in school where I screwed up a detail on a program I wrote, because I didn't understand the spec. I've always tried to ask questions since and even if they seem like stupid questions, I don't read the specs wrong.
There's nothing wrong with curiosity. However, I have no doubt that you heard wrong, or heard about different FTDI devices, or that your source heard wrong. FTDI have been making these things for a couple of decades, since the earliest days of USB. You can be sure they are hardware peripherals, not software.
For /you/, and /your/ designs in FPGAs, adding a small processor can be a good solution. The balance is different for ASICs and for dedicated silicon, and it is different now than it was when FTDI made their MPSSE block for use in their devices.
Really, we are not talking about a peripheral that is much more advanced than common serial communication blocks. It multiplexes a UART, an SPI and an I²C on the same pins. That's it. You don't bother with a processor and software for that.
FTDI /do/ make devices using embedded processors, with a few different types (I forget which - perhaps Tensilica cores). But those are other chips.
Depends on how you design it. IF you start a counter at the leading edge of the start bit and then detect the counter at its middle value, then the stop bit ends when the counter finally expires at the END of the stop bit.
IF you don't start looking for the start bit until the time has passed for the END of the stop bit, and the receiver is 0.1% slow, then every bit you lose 0.1% of a bit, or 1% per character, so after 50 consecutive characters you are 1/2 a bit late and getting errors.
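That drift arithmetic is easy to check directly. A quick sketch, assuming 10 bits per character (start + 8 data + stop) and failure once the receiver is half a bit late:

```python
def chars_until_failure(rx_error, bits_per_char=10, fail_at_bits=0.5):
    # With no mid-stop-bit resync, a receiver slow by rx_error loses that
    # fraction of a bit every bit, accumulating across back-to-back characters.
    drift_per_char = rx_error * bits_per_char   # e.g. 0.1% * 10 bits = 1%
    return fail_at_bits / drift_per_char        # characters until half a bit late

print(chars_until_failure(0.001))    # 0.1% slow -> about 50 characters
print(chars_until_failure(0.00001))  # 0.001% slow -> about 5000 characters
```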
As I pointed out, 0.1% means 50 characters. 0.001% means 5000 characters, long enough string of characters and eventually you hit the problem.
If you only use short messages, you never have a problem.
The problem is that if you describe the sampling as "Middle of bit", then going to the end of the stop bit makes sense.
If you are adding functionality like RS-485 control that needs to know when the end of the bit is, it is easy to forget that the receiver has different needs than the transmitter.
There is still some extra logic to distinguish the condition. There is a bit timing counter, and a counter to track which bit you are in. Everything happening in the operation of the UART is happening at the middle of a bit. Then you need extra logic to distinguish the end of a bit.
There you go! You have just proven that no one would design a UART to work this way and have it survive in the marketplace. There would be too many applications where a data burst would cause it to fail. Programming around such a design flaw would be such a PITA, and would so expose the flaw, that the part would become a pariah.
I recall the Intel USART was such a part for other technical flaws. So they finally came out with a new version that fixed the problems.
You mean if you have gaps with idle time.
Sorry, you are not clear. This doesn't make sense to me. What is "going to the end of the stop bit"?
Yeah, but you can still insist that the stop bit fills 99%, or 90% of the required time, and not get that pathology.
This is a branch of the principle "be rigorous in what you produce, permissive in what you accept". I've personally moved away from that principle - I think being permissive too often just masks problems until they re-occur downstream but cannot be diagnosed there. So I'm much more willing to reject bad input (or to complain but still accept it) early on.
I'm not clear on what you are saying. The larger the clock difference, the earlier the receiver has to look for the start bit. It will work just fine with the start bit check being delayed until the end of the stop bit, as long as the timing clocks aren't offset in one direction. Looking for the start bit in the middle of the stop bit gives a total of 5% tolerance, pretty much taking mistiming out of the list of problems for async data transmission. Drop that to 0.05% (your 99% example) and you are in the realm of crystal timing error on the two systems, ±250 ppm.
" IF you don't start the looking for the start bit until the time has passed for the END of the stop bit, and the receiver is 0.1% slow, then every bit you lose 0.1% of a bit "
But if you wait until 95% of the stop bit time, and allow a new start bit to come early by 5%, then it doesn't matter if "the receiver is 0.1% slow" and you don't lose sync; the 5% early doesn't mount up over "50 consecutive characters".
Same if you wait 99% and the new start bit is only 1% early.
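The argument in the last two paragraphs reduces to one comparison: the drift accumulated across a single character versus how early the re-armed receiver can still catch a start edge. A sketch, assuming 10 bits per character:

```python
def survives(rx_error, bits_per_char=10, rearm_fraction=0.95):
    # Drift accumulated across ONE character, in bits; the start edge of the
    # next character resets the timing, so drift never spans characters as
    # long as the edge arrives after the receiver has re-armed.
    drift = rx_error * bits_per_char
    slack = 1.0 - rearm_fraction      # how early a start edge may still be seen
    return drift <= slack

print(survives(0.001))                # 0.1% slow, re-arm at 95% of the stop bit
print(survives(0.01))                 # 1% slow: drift exceeds the 5% slack
```

So a 0.1% receiver error (1% of a bit per character) sits comfortably inside the 5% slack, and nothing "mounts up" over 50 characters.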
So your "There you go! You have just proven..." was a bogus situation proposed by Richard, one that is trivially avoided, and basically all actual UARTs avoid it.
If you cherry pick your numbers, you can make anything work. Looking for a start bit at the middle of the stop bit gives you the ±5% tolerance of timing. If you delay when you start looking for a start bit, you reduce this tolerance. So, in that case, if you are happy to provide a ±0.1% tolerance clock under all conditions, then sure, you can look for the start bit later. In the real world, there are users who expect a UART to work the way it is supposed to work, and use less accurate timing references than a crystal. This UART won't work for them and that would become known to users in general. While a claim has been made that such UARTs exist, no one has provided information about one.
I would also point out that the above timing analysis is not actually worst case, since it does not take into account the 1/16th or 1/8th bit jitter from the first character's start bit detection. So the requirements on the timing reference are even tighter when using the sloppy timing for start bit checking.
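Folding that detection jitter into the budget is simple arithmetic. A sketch, assuming mid-stop-bit re-arming (half a bit of slack), 10 bits per character, and one sub-sample of edge-detection quantisation:

```python
def clock_tolerance(oversample, bits_per_char=10):
    # Half-bit slack minus one sub-sample of start-edge detection jitter,
    # spread over the whole character, gives the tolerable clock mismatch.
    slack = 0.5 - 1.0 / oversample
    return slack / bits_per_char

print(f"{clock_tolerance(16):.4%}")   # 16x oversampling: under the ideal 5%
print(f"{clock_tolerance(8):.4%}")    # 8x oversampling: tighter still
```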
Nope, simplest logic is to have your 8x sub-bit counter start at 0 and count up from the leading edge, sample the bit on count values 3, 4, and 5 for noise detection, and roll over from 7 to 0 into the next bit. You stop the counter when it rolls from 7 to 0 in the stop bit, so it never counts past the stop bit.
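That 8x scheme can be modelled in a few lines. This is a sketch of the description above, not any particular part: the sub-bit counter starts at 0 on the leading edge, the line is sampled at counts 3, 4, and 5 with a majority vote for noise rejection, and the counter rolls over 7 to 0 into each next bit.

```python
def expand(byte, per_bit=8):
    # One frame at 8 samples per bit: start (0), 8 data LSB first, stop (1).
    bits = [0] + [(byte >> b) & 1 for b in range(8)] + [1]
    return [lvl for lvl in bits for _ in range(per_bit)]

def receive_8x(samples):
    i = 0
    while samples[i] == 1:            # wait for the leading edge of the start bit
        i += 1
    bits = []
    for _ in range(10):               # start + 8 data + stop
        # Majority vote over sub-bit counter values 3, 4, and 5:
        votes = samples[i + 3] + samples[i + 4] + samples[i + 5]
        bits.append(1 if votes >= 2 else 0)
        i += 8                        # counter rolls over 7 -> 0, next bit
    assert bits[0] == 0 and bits[9] == 1, "framing error"
    return sum(b << n for n, b in enumerate(bits[1:9]))

line = [1] * 5 + expand(0xA3) + [1] * 8
noisy = line.copy()
noisy[17] = 1 - noisy[17]   # corrupt one of the three votes in data bit 0
```

The majority vote is what makes a single bad sample in the voting window harmless, as the `noisy` case shows.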
Except that we have bought many USB serial ports with just this flaw in them.
So I guess that "nobody" actually exists.
They seem to be based on an FTDI chip, but maybe just a "look alike", where they did the bare minimum design work.
The key point is that very few applications actually do have very long uninterrupted sequences of characters, and typical PCs will tend to naturally add small gaps just because the OS isn't that great. It doesn't require much to fix the issue.
You've conveniently left out a significant amount of logic.
Detecting specific states of the sub-bit counter uses more logic than the other functions. Most UARTs use 16 sub-samples and so have a 4 bit counter. Counters have a carry chain built in, so the carry out is a free zero count detector.
Counters are most efficient in terms of implementation when done as down counters, with various preloads. The counter is loaded with the half bit count while waiting for the leading edge of the start bit. The same zero detection (carry out) that triggers the next load is also the bit center mark. All loads during an active character will load a full bit count (different in the msb only). Every zero detect will mark a bit center. To get to the end of the final stop bit would require loading the counter with another half bit count, so extra logic. More than anything, why would anyone want to think about adding the extra half bit count when it's not part of any requirement?
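The down-counter scheme above can be sketched in a few lines. This is only a model of the loading sequence described, assuming 16x oversampling: the half-bit value is preloaded while idle, every zero detect (the carry out) marks a bit centre, and every subsequent reload is a full bit time. Note that no state ever needs a half-bit reload at the end, because nothing past the middle of the stop bit matters.

```python
HALF_BIT, FULL_BIT = 8, 16    # 16x oversampling: 8 and 16 sub-samples

def bit_centres(start_edge_tick):
    """Ticks at which the receiver samples: mid-start, 8 data, mid-stop."""
    centres, preload = [], HALF_BIT   # half-bit preload held while idle
    tick = start_edge_tick
    for _ in range(10):               # start + 8 data + stop
        tick += preload               # down-counter runs out (carry out)...
        centres.append(tick)          # ...and that zero detect IS the centre mark
        preload = FULL_BIT            # every later load is a full bit time
    return centres

print(bit_centres(0))   # centres every 16 ticks, starting half a bit in
```

The last centre falls at tick 152, which is exactly the middle of the stop bit (the stop bit spans ticks 144 to 160), so the counter never has to time past it.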
Oh, you mean the Chinese UARTs that most people won't touch because they are full of flaws! Got it.
I was talking about real UARTs that people use in real designs. I used to buy CH340 based USB cables for work. But we eventually figured out that they were unreliable and I only use FTDI cables now. The CH340 cables seemed to work, but would quit after an hour or two or three.
There are lots of clones. If you have an FTDI chip with this stop bit problem, I'd love to see it. I think FTDI would love to see it too.
The key point is that a company like FTDI is not going to sell such crap. "Fixing" such issues is only possible if you have control over the system. Not everyone is designing a system from scratch. My brother's company makes a device that interfaces to a measurement device outputting data periodically. For who knows what reason, that company changed the product so it stopped outputting the headers. So a small box was made to add the headers every few lines. The UARTs in it just have to work correctly, since there's no option to modify any other piece of equipment. If they don't work correctly, they get pulled and they use other equipment, and the original maker gets a black eye. Enough black eyes and people don't buy that equipment anymore.