I2C Single Master: peripheral or bit banging?

I hate I2C for several reasons. It's only a two-wire bus, but for this very reason it is insidious.

I usually use hw peripherals when they are available, because they are much more efficient and because in many cases they are the only possibility. Nowadays we have MCUs with abundant UARTs, timers and so on, so there's no real question: choose a suitable MCU and use that damn peripheral. So I usually start with the I2C peripherals available in MCUs, but I have found many issues.

I have experience with the AVR8 and SAMC21 from Atmel/Microchip. In both cases the I2C peripheral is much more complex than a UART or similar serial lines. I2C Single Master, which is the most frequent situation, is very simple, but I2C Multi Master introduces many critical situations. I2C peripherals usually promise multi-master compatibility, so their internal state machine is somewhat complex... and often there's some bug, or some situation that wasn't anticipated, that leaves the code stuck at some point.

I want to write reliable code that not only works most of the time, but that works ALL the time, in any situation (ok, 99%). So my first test with I2C is making a temporary short between SCL and SDA. In this case, the I2C in the SAMC21 (they named it SERCOM in I2C Master mode) hangs forever. The manual says to write the ADDR register to start putting the address on the bus, and to wait for an interrupt flag when it ends. This interrupt never fires. I see the lines go down (because the START condition pulls SDA low before SCL), but the INTFLAG bits stay cleared forever. Even the error bits in the STATUS register (bus error, arbitration lost, any sort of timeout...) stay cleared, and BUSSTATE is IDLE. As soon as the short is removed, the state machine goes on.

Maybe I'm wrong, so I studied the Atmel Software Framework[1] and the Arduino Wire library[2]. In both cases, a timeout is implemented at the driver level.

Even the datasheet says:

"Note: Violating the protocol may cause the I2C to hang. If this happens it is possible to recover from this state by a software reset (CTRLA.SWRST='1')."

I think the driver code should trust the hw; between them there's a contract, otherwise it's impossible. For a UART driver, you write the DATA register and wait for an interrupt flag when new data can be written to the register. If the interrupt never fires, the driver hangs forever. But I have never seen a UART driver that uses a timeout to recover from hardware that could hang. And I have used UARTs for many years now.

Considering all these big issues when you want to write reliable code, I'm considering dusting off the good old bit-banging technique. For the I2C Single Master scenario, it IS very simple: put data low/high (three-state), put clock low/high. The only problem is calibrating the clock frequency, but if you have a free timer that is simple too.

What is the drawback of bit banging? Maybe you write a few additional lines of code (you have to clock out the 9 pulses per byte yourself), but I don't think many more than using a peripheral and protecting it with a timeout. In exchange you get code that is fully under your control: you know when the I2C transaction starts, and you can be sure it will end, even when there are some hw issues on the board.
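
To give an idea, here is a minimal sketch of what I mean. The open-drain GPIO helpers and the quarter-bit delay are hypothetical and must be mapped onto the actual target; "high" always means releasing the line and letting the pull-up raise it.

/* Minimal bit-banged I2C single master - a sketch, not production code. */
#include <stdbool.h>
#include <stdint.h>

extern void sda_low(void);   /* drive SDA low               */
extern void sda_high(void);  /* release SDA (three-state)   */
extern void scl_low(void);   /* drive SCL low               */
extern void scl_high(void);  /* release SCL (three-state)   */
extern bool sda_read(void);  /* sample the SDA level        */
extern void qdelay(void);    /* about 1/4 of the bit period */

static void i2c_start(void)
{
    sda_high(); scl_high(); qdelay();
    sda_low();  qdelay();            /* SDA falls while SCL is high */
    scl_low();  qdelay();
}

static void i2c_stop(void)
{
    sda_low();  qdelay();
    scl_high(); qdelay();
    sda_high(); qdelay();            /* SDA rises while SCL is high */
}

/* Shift out 8 data bits plus the 9th (ACK) clock; returns true on ACK. */
static bool i2c_write_byte(uint8_t b)
{
    for (int i = 0; i < 8; i++) {
        if (b & 0x80) sda_high(); else sda_low();
        b <<= 1;
        qdelay(); scl_high(); qdelay();
        qdelay(); scl_low();  qdelay();
    }
    sda_high();                      /* release SDA for the slave's ACK */
    qdelay(); scl_high(); qdelay();
    bool ack = !sda_read();          /* slave pulls SDA low to ACK      */
    qdelay(); scl_low();  qdelay();
    return ack;
}

Reading a byte is the symmetrical routine. Note this sketch ignores clock stretching: a robust version would wait, after every scl_high(), until the line actually reads back high.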

[1] formatting link
[2] formatting link
Reply to
pozz

On 20.11.2020 09:43, pozz wrote:
> [quote snipped]
  1. The interrupt will only fire if a connected slave acknowledges the address. If you want to catch the situation of a non-acknowledged start & address byte, you have to set up a timer that times out.

  2. You don't need to calibrate the clock precisely: just pulse SCL as fast as you can/need (within the spec). Clients can adjust the speed by pulling down SCL when they can't keep up with the master's speed.

  3. As you not only have to bit-bang SCL & SDA according to the protocol, but also have to honour the slaves' timing, implementing a master correctly is not trivial; additionally, the CPU load is remarkable.

A tip: take a peek at the according Linux driver sources. They are often very reliable (at least for chip families that are a bit mature), and if there exist any issues, they are documented (cf. the Raspi's SPI driver bug in the first versions).

Regards Bernd

Reply to
Bernd Linsel

On 20/11/2020 11:38, Bernd Linsel wrote:
> [quote of the original post snipped]
> 1. The interrupt will only fire if a connected slave acknowledges the
> address. If you want to catch the situation of a non-acknowledged
> start & address byte, you have to set up a timer that times out.

"If there is no I2C slave device responding to the address packet, then the INTFLAG.MB interrupt flag and STATUS.RXNACK will be set. The clock hold is active at this point, preventing further activity on the bus."

CPU load? Many times I2C is used in a blocking way, waiting for an interrupt flag. In that case, there's no difference whether the CPU waits for an interrupt flag or drives the SCL and SDA lines.

Even if you need a non-blocking driver, you could use a hw timer and bit-bang in the interrupt service routine of the timer.
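
Something along these lines, where every timer tick advances the transfer by a quarter of a bit period. A sketch only: helper names are hypothetical (same open-drain helpers as in my earlier sketch), and the ACK and STOP phases are omitted.

#include <stdint.h>

extern void sda_low(void);  extern void sda_high(void);
extern void scl_low(void);  extern void scl_high(void);

typedef enum { I2C_IDLE, I2C_START, I2C_BITS, I2C_ACK, I2C_STOP } i2c_phase_t;

static volatile i2c_phase_t phase = I2C_IDLE;
static volatile uint8_t shift_reg, bit_cnt, tick;

/* Called from the main loop to launch one byte (address or data). */
void i2c_send_byte_async(uint8_t b)
{
    shift_reg = b;
    tick = 0;
    phase = I2C_START;
}

/* Timer ISR: one quarter-bit period per tick. */
void TIMER_ISR(void)
{
    switch (phase) {
    case I2C_START:
        if (tick == 0) { sda_low(); tick = 1; }          /* SDA first */
        else           { scl_low(); tick = 0; bit_cnt = 0;
                         phase = I2C_BITS; }             /* then SCL  */
        break;
    case I2C_BITS:
        switch (tick++) {
        case 0: if (shift_reg & 0x80) sda_high(); else sda_low(); break;
        case 1: scl_high(); break;
        case 2: break;                   /* data held while SCL is high */
        case 3: scl_low(); tick = 0;
                shift_reg <<= 1;
                if (++bit_cnt == 8) phase = I2C_ACK;
                break;
        }
        break;
    default:                             /* ACK/STOP phases omitted     */
        break;
    }
}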

Reply to
pozz

I think the issue is that, because of what you did, when the controller started the cycle and issued a start bit to 'get' the bus, it saw that 'someone' else did the same thing but got farther.

A Start bit is done by, with SCL and SDA both high, first pulling SDA low, and then SCL low a bit later. When the controller pulls SDA low, it then looks and sees SCL already low, so it decides that someone else beat it to the punch in getting the bus, so it backs off and waits. I suspect that at that point it releases the bus, SDA and SCL both go high at the same time (which is a protocol violation), and maybe the controller sees that as a stop bit and the bus now free, so it tries again, or it just thinks the bus is still busy.

This is NOT the I2C 'Arbitration Lost' condition, as that pertains to the case where you think you won the arbitration, but at the same time someone else also thought they won it, and while sending a bit, you find that your 1 bit became a 0 bit, so you realize (late) that you had lost the arbitration, and thus need to abort your cycle and resubmit it.

This is a case of arbitration never won, and most devices will require something external to the peripheral to supply any needed timeout mechanism.

Most bit-banged master code I have seen assumes single-master, as it can't reliably test for this sort of arbitration-lost condition, being a bit too slow.

Reply to
Richard Damon

On 20/11/2020 14:09, Richard Damon wrote:
> [...] I suspect that at that point it releases the bus, SDA and SCL
> both go high at the same time (which is a protocol violation) [...]

No, SCL and SDA stay low forever. Maybe it drives SDA low, then SCL, then tries to release one of SCL or SDA, failing at that.

Reply to
pozz

The big advantage of bit banging is reliability. I2C is an edge-triggered protocol. In our experience, some I2C peripherals are very prone to error or lockup on fast noise pulses.

A client with a train control application carefully wrote an I2C peripheral driver. On test, it failed a few times a day. As a reference, the client replaced the driver with our old bit-bang driver. In two weeks, there were no failures.

Yes, a bit-bang driver needs to be carefully designed if CPU load is an issue. Choice of buffer chips can be useful in a high noise environment, e.g. hospital autoclave with switched heating elements.

Stephen

--
Stephen Pelc, stephen@vfxforth.com
Reply to
Stephen Pelc

If the MCU provides all the necessary capability to bit bang, there is no downside for single-cycle operations. I've bit-banged I2C a handful of times in my career, just because I was too lazy to learn about the I2C peripheral. My plan was always to replace the bit banging with the peripheral *if necessary*. Usually data throughput requirements drive whether or not I will use the peripheral instead of bit banging. You can get great speed and efficiency improvements using the I2C peripheral with DMA.

There is no shame in implementing bit banging.

You might encounter an I2C device that requires full or partial bit banging. For example, I encountered a device that issued a non-I2C pulse during some part of an I2C transaction series. The pulse represented the completion of an internal ADC voltage conversion and indicated it was time to collect the data value. I was on the fence about whether to bit bang or use the peripheral hardware.

JJS

Reply to
John Speth

> [quote snipped]

I'm fully aware of that. But Linux drivers often disclose some h/w caveats and workarounds, or efficient strategies for dealing with the peripheral's peculiarities...

Regards Bernd

Reply to
Bernd Linsel

If you connect SCL and SDA together, you'll create a permanent protocol violation. The whole of I2C relies on the two lines being separate and open-collector/drain. Creating an unexpected short creates a hardware failure. If you're afraid of such a situation, you should test for it by bit-banging before initializing the hardware controller.
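
A pre-flight check could look roughly like this: a sketch with the same hypothetical open-drain helpers as the bit-bang code earlier in the thread, proving only that both lines idle high and that SDA does not follow SCL.

#include <stdbool.h>

extern void sda_high(void);  extern bool sda_read(void);
extern void scl_high(void);  extern void scl_low(void);
extern bool scl_read(void);
extern void qdelay(void);

static bool i2c_bus_looks_healthy(void)
{
    sda_high(); scl_high(); qdelay();
    if (!sda_read() || !scl_read())
        return false;                /* a line is stuck low            */

    scl_low(); qdelay();             /* wiggle SCL alone...            */
    bool shorted = !sda_read();      /* ...SDA must not follow it down */
    scl_high(); qdelay();
    return !shorted;
}

With SDA kept high, the extra SCL pulse cannot be mistaken for a START or STOP, so the check is harmless to idle slaves.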

--

-TV
Reply to
Tauno Voipio

On 20/11/2020 16:25, Tauno Voipio wrote:
> On 20.11.20 16.15, pozz wrote:
>> [quote of Richard Damon's post snipped]
>> Ok, call it bus error, I2C violation, I don't know. The peripheral
>> is full of low-level timeouts and flags signaling that something
>> strange happened. But shorting SDA and SCL will not set any of
>> these bits.
>> At least the peripheral should be able to report the strange bus
>> state, but STATUS.BUSSTATE is always IDLE.
>> Of course, take a look at the subject of my post.
> If you connect SCL and SDA together, you'll create a permanent
> protocol violation. [...]

I know that, and I don't expect it to work in this situation, but my point is another.

If an I2C hw peripheral can hang for some reason (in my test I deliberately made the short, but I imagine the hang could happen in other circumstances that are not well documented in the datasheet), you should protect the driver code with a timeout. You have to test your code in all cases, even when the timeout occurs. So you have to choose the timeout interval with great care, and you have to understand whether blocking for that long is acceptable (even in that rare situation).
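
Such a guard can be small. This is only a sketch: i2c_mb_flag_set(), i2c_sw_reset() and millis() are hypothetical wrappers around the real registers and the system tick, and the software-reset fallback is the one the SAMC21 datasheet note suggests (CTRLA.SWRST).

#include <stdbool.h>
#include <stdint.h>

extern bool i2c_mb_flag_set(void);  /* e.g. wraps INTFLAG.MB on a SERCOM */
extern void i2c_sw_reset(void);     /* CTRLA.SWRST = 1, then re-init     */
extern uint32_t millis(void);       /* free-running millisecond tick     */

#define I2C_TIMEOUT_MS 5u  /* > worst-case transfer incl. clock stretch */

typedef enum { I2C_OK, I2C_TIMEOUT } i2c_result_t;

static i2c_result_t i2c_wait_mb(void)
{
    uint32_t t0 = millis();
    while (!i2c_mb_flag_set()) {
        if ((uint32_t)(millis() - t0) > I2C_TIMEOUT_MS) {
            i2c_sw_reset();          /* the only documented way out */
            return I2C_TIMEOUT;
        }
    }
    return I2C_OK;
}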

Considering all of that, maybe bit-banging is much simpler and more reliable.

Reply to
pozz

I have had thousands of industrial instruments in the field for decades, each running some internal units with I2C, some bit-banged and others on the hardware interfaces on the processors used, and not a single failure due to I2C hanging.

Please remember that the I2C bus is an Inter-IC bus, not to be used for connections outside of the device, preferably only on the same circuit board. There should be no external connectors where e.g. shorts between SCL and SDA could happen.

All the hardware I2C controllers have been able to be restored to a sensible state with a software reset after a time-out. This includes the Atmel chips.

--

-TV
Reply to
Tauno Voipio

I did manage once to upset (not hang) an I2C line. I had routed SCL or SDA (likely both, I don't remember) quite close to the switching MOSFET of an HV flyback, which makes nice and steep 100V edges... :-).

I have dealt with I2C controllers on two parts I can think of now, and both times it took me a lot longer to get them to work than it had taken me on two earlier occasions when I bit-banged it... They all did work of course, but the design was sort of twisted. I remember one of them took me two days, and I was counting minutes of my time for that project. It may even have been 3 days; it was 10 years ago.

Dimiter

======================================================
Dimiter Popoff, TGI
formatting link
======================================================
formatting link

Reply to
Dimiter_Popoff

That's my experience also. I've done bit-banged I2C a couple of times, and it took about half a day each time. Using HW I2C controllers has always taken longer. The worst one I remember was on a Samsung ARM7 part from 20 years ago. Between the mis-translations and errors in the documentation and the bugs in the HW, it took at least a week to get the I2C controller to reliably talk to anything.

--
Grant
Reply to
Grant Edwards

If you do a bit-banged interface, do not forget to support clock stretching by the slave. Do not assume that the slave has no special timing requirements. To do it right you need a hardware timer (or a cast-iron guarantee that the bit-bang function won't be interrupted).
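
Concretely, supporting clock stretching means that releasing SCL is not enough: the master must wait until the line actually reads back high, because a slave may be holding it low. A sketch with hypothetical helpers; the timeout turns a dead slave into an error instead of a hang.

#include <stdbool.h>
#include <stdint.h>

extern void scl_high(void);      /* release SCL (open drain) */
extern bool scl_read(void);      /* sample the SCL level     */
extern uint32_t millis(void);    /* free-running ms tick     */

static bool scl_high_stretch(uint32_t timeout_ms)
{
    scl_high();                          /* release the line          */
    uint32_t t0 = millis();
    while (!scl_read()) {                /* slave still stretching?   */
        if ((uint32_t)(millis() - t0) > timeout_ms)
            return false;                /* slave never let go of SCL */
    }
    return true;
}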

I've found hardware I2C controllers on micros to be 100% reliably a problem. The manufacturers' drivers are often part of that problem.

I'm currently trying to debug someone else's non-working implementation of an ST I2C peripheral controller. It uses ST's driver.

MK

Reply to
Michael Kellett

To add to that, the drivers from the hardware makers are also quite twisted and difficult to integrate into surrounding software.

With ARM Cortexes, I'm not very fascinated by the provided drivers in CMSIS. Every time, I have ended up writing my own.

--

-TV
Reply to
Tauno Voipio

I have ended up jettisoning both ST's and Atmel's drivers and writing my own. You might consider going that way.

--

-TV
Reply to
Tauno Voipio

One risk is that you might be re-starting the CPU in the middle of an operation without there being a power-on reset to the slave devices. That can easily leave the bus in an invalid state, or leave a slave state machine out of synchronisation with the bus. But I have not seen this kind of thing happen in a live system.

Reply to
David Brown

If a slave uses clock stretching, I think its datasheet would say so clearly.

Please, explain. I2C is synchronous to the clock transmitted by the Master. Of course the Master should respect a range for the clock frequency (around 100 kHz or 400 kHz), but I don't think a jitter on the I2C clock, caused by an interrupt, could be a serious problem for the slave.
Reply to
pozz

On 21/11/2020 12:06, David Brown wrote:
> [...] you might be re-starting the CPU in the middle of an operation
> without there being a power-on reset to the slave devices. [...]

This happened to me in a real product. At startup the MCU reads the EEPROM content and, if it is corrupted, factory defaults are used and written back to the EEPROM. This mechanism was introduced to initialize a blank EEPROM at the very first power-up of a fresh board.

Unfortunately it sometimes happened that the MCU reset in the middle of an I2C transaction with the EEPROM (the reset was caused by a glitch on the power supply that triggered the MCU voltage supervisor). When the MCU restarted, it tried to communicate with the EEPROM, but the EEPROM was in an out-of-sync I2C state. This is well described in AN868[1] from Analog Devices.

The MCU thought it was a blank EEPROM, and factory settings were used, overwriting the user settings! What the user saw was that the machine sometimes restarted with factory settings, losing the user settings.

In that case the solution was adding an I2C reset procedure at startup (some clock pulses and a STOP condition, as described in the Application Note). I think this I2C bus reset procedure must always be added where there's an I2C bus, and most probably it must be implemented by bit-bang code.
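
The procedure itself is tiny when bit-banged. A sketch along the lines of the Application Note, reusing the hypothetical helpers from the earlier bit-bang code:

#include <stdbool.h>

extern void sda_high(void);  extern bool sda_read(void);
extern void scl_high(void);  extern void scl_low(void);
extern void qdelay(void);
extern void i2c_stop(void);  /* bit-banged STOP condition */

static void i2c_bus_recover(void)
{
    sda_high();                          /* release SDA                 */
    for (int i = 0; i < 9 && !sda_read(); i++) {
        scl_low();  qdelay();            /* one dummy clock pulse...    */
        scl_high(); qdelay();            /* ...lets a stuck slave shift */
    }
    i2c_stop();                          /* STOP resynchronises slaves  */
}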

[1] formatting link
Reply to
pozz

Sure, add that kind of a reset at startup - it also helps if you are unlucky when restarting the chip during development.

Also make sure you write two copies of the user data to the EEPROM, so that you can survive a crash while writing to it.
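
One way to realise the two-copy scheme is a sequence number plus CRC per slot, so the newest valid copy always wins and a crash mid-write only loses the copy being written. A sketch; the EEPROM and CRC primitives and the slot addresses are hypothetical.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

extern bool eeprom_read(uint16_t addr, void *buf, size_t len);
extern bool eeprom_write(uint16_t addr, const void *buf, size_t len);
extern uint16_t crc16(const uint8_t *buf, size_t len);

struct settings_slot {
    uint32_t seq;        /* incremented on every save    */
    uint8_t  data[56];   /* application settings payload */
    uint16_t crc;        /* CRC over seq + data          */
};

#define SLOT_ADDR(i) ((uint16_t)((i) ? 0x0080 : 0x0000)) /* two fixed slots */

static bool slot_valid(const struct settings_slot *s)
{
    return s->crc == crc16((const uint8_t *)s,
                           offsetof(struct settings_slot, crc));
}

/* Overwrite the invalid or older slot; the newest copy stays intact. */
bool settings_save(const uint8_t data[56])
{
    struct settings_slot a, b, fresh;
    eeprom_read(SLOT_ADDR(0), &a, sizeof a);
    eeprom_read(SLOT_ADDR(1), &b, sizeof b);

    bool a_ok = slot_valid(&a), b_ok = slot_valid(&b);
    uint32_t seq = 0;
    if (a_ok && a.seq > seq) seq = a.seq;
    if (b_ok && b.seq > seq) seq = b.seq;

    int victim = !a_ok ? 0 : !b_ok ? 1 : (a.seq <= b.seq ? 0 : 1);

    fresh.seq = seq + 1;
    memcpy(fresh.data, data, sizeof fresh.data);
    fresh.crc = crc16((const uint8_t *)&fresh,
                      offsetof(struct settings_slot, crc));
    return eeprom_write(SLOT_ADDR(victim), &fresh, sizeof fresh);
}

Loading is symmetrical: read both slots, validate both CRCs, and take the one with the higher sequence number.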

But if your board is suffering power supply glitches that are enough to trigger the MCU brown-out, but not enough to cause a proper re-start of the rest of the board, then /that/ is a major problem that you should be trying to solve.

Reply to
David Brown
