I2C Single Master: peripheral or bit banging?

I hate I2C for several reasons. It's only a two-wire bus, but for this
reason it is insidious.

I usually use hardware peripherals when they are available, because it's
much more efficient, and because it's the only possibility in many cases.
Today's MCUs offer plenty of UARTs, timers and so on, so there's no real
question: choose a suitable MCU and use that damn peripheral.
So I usually start with the I2C peripheral available in the MCU, but I
have found many issues.

I have experience with the AVR8 and the SAMC21 by Atmel/Microchip. In
both cases the I2C peripheral is much more complex than a UART or similar
serial interface. I2C single master, which is the most frequent
situation, is very simple, but I2C multi-master introduces many critical
situations.
I2C peripherals usually promise multi-master compatibility, so their
internal state machine is somewhat complex... and often there is some
bug, or some unexpected situation, that leaves the code stuck at some
point.

I want to write reliable code that not only works most of the time, but
that works ALL the time, in any situation (ok, 99%). So my first test
with I2C is making a temporary short between SCL and SDA. In this case,
the I2C in the SAMC21 (they named it SERCOM in I2C master mode) hangs
forever. The manual says to write the ADDR register to start putting the
address on the bus and to wait for an interrupt flag when it ends. This
interrupt never fires. I see the lines go down (because the START
condition pulls SDA low before SCL), but the INTFLAG bits stay cleared
forever. Even the error bits in the STATUS register (bus error,
arbitration lost, any sort of timeout...) stay cleared, and BUSSTATE is
IDLE. As soon as the short is removed, the state machine goes on.

Maybe I'm wrong, so I studied the Atmel Software Framework[1] and the
Arduino Wire library[2]. In both cases, a timeout is implemented at the
driver level.

Even the datasheet says:

   "Note: Violating the protocol may cause the I2C to hang. If this
   happens it is possible to recover from this state by a
   software reset (CTRLA.SWRST='1')."

I think the driver code should trust the hardware; there's a contract
between them, otherwise it's impossible. For a UART driver, you write the
DATA register and wait for an interrupt flag when new data can be written
to the register. If the interrupt never fires, the driver hangs forever.
But I have never seen a UART driver that uses a timeout to recover from
hardware that could hang. And I have used UARTs for many years now.
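What those drivers implement boils down to a bounded wait on the
completion flag. A minimal sketch (the names and the flag callback are
hypothetical; on the SAMC21 the callback would poll something like
INTFLAG.MB):

```c
#include <stdbool.h>
#include <stdint.h>

typedef enum { I2C_OK, I2C_TIMEOUT } i2c_status_t;

/* Poll a completion flag, but give up after a bounded number of
 * iterations instead of trusting the hardware to always finish.
 * 'flag_set' stands in for a real register read (e.g. INTFLAG.MB). */
i2c_status_t i2c_wait_flag(bool (*flag_set)(void *ctx), void *ctx,
                           uint32_t max_iter)
{
    while (max_iter--) {
        if (flag_set(ctx))
            return I2C_OK;      /* transfer completed normally */
    }
    return I2C_TIMEOUT;         /* bus hung: caller resets the peripheral */
}
```

On timeout the caller would then software-reset the peripheral
(CTRLA.SWRST on the SERCOM) and re-initialize it, as the datasheet note
suggests.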


Considering all these big issues when you want to write reliable code,
I'm considering dusting off the good old bit-banging technique.
For the I2C single-master scenario, it IS very simple: put data low/high
(three-state), put clock low/high. The only problem is calibrating the
clock frequency, but if you have a free timer that is simple too.
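To show how little code that is, here's a sketch of such a single-master
bit-bang layer. The pin callbacks are hypothetical (true releases the
open-drain line so the pull-up takes it high, false drives it low;
delay() paces the clock):

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    void (*sda)(bool release);   /* true = release (high), false = drive low */
    void (*scl)(bool release);
    bool (*read_sda)(void);
    void (*delay)(void);         /* half clock period */
} i2c_pins_t;

static void i2c_start(const i2c_pins_t *p)
{
    p->sda(true);  p->scl(true);  p->delay();
    p->sda(false); p->delay();        /* SDA falls while SCL is high */
    p->scl(false); p->delay();
}

static void i2c_stop(const i2c_pins_t *p)
{
    p->sda(false); p->delay();
    p->scl(true);  p->delay();
    p->sda(true);  p->delay();        /* SDA rises while SCL is high */
}

/* Clock out one byte MSB first; the 9th clock samples the slave's ACK.
 * Returns true if the slave pulled SDA low (ACK). */
static bool i2c_write_byte(const i2c_pins_t *p, uint8_t b)
{
    for (int i = 7; i >= 0; i--) {
        p->sda((b >> i) & 1);         /* 1 = released high, 0 = driven low */
        p->delay();
        p->scl(true);  p->delay();
        p->scl(false); p->delay();
    }
    p->sda(true);                     /* release SDA for the ACK bit */
    p->scl(true);  p->delay();
    bool ack = !p->read_sda();
    p->scl(false); p->delay();
    return ack;
}
```

A start/stop pair plus write_byte calls already cover a single-master
write transaction; reads add one symmetric read_byte routine.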

What is the drawback of bit banging? Maybe you write a few additional
lines of code (you have to clock out nine pulses per byte in software),
but I don't think much more than using a peripheral and protecting it
with a timeout.
In exchange you get code that is fully under your control: you know when
the I2C transaction starts and you can be sure it will end, even when
there are hardware issues on the board.





[1]  
https://github.com/avrxml/asf/blob/68cddb46ae5ebc24ef8287a8d4c61a6efa5e2848/sam0/drivers/sercom/i2c/i2c_sam0/i2c_master.c#L406

[2]  
https://github.com/acicuc/ArduinoCore-samd/commit/64385453bb549b6d2f868658119259e605aca74d

Re: I2C Single Master: peripheral or bit banging?
On 20.11.2020 09:43, pozz wrote:
1. The interrupt will only fire if a connected slave acknowledges the  
address. If you want to catch the situation of a non-acknowledged start  
& address byte, you have to set up a timer that times out.


2. [...] pulse SCL as fast as you can/need (within the spec). Clients can
adjust the speed, pulling down SCL when they can't keep up with the
master's speed.

3. As you not only have to bit-bang SCL & SDA according to the protocol,
[...] master correctly is not trivial; additionally, the CPU load is
remarkable.


4. [...] peek on the according linux driver sources. They are often very
reliable (at least for chip families that are a bit mature), and if there
exist any issues, they are documented (cf. Raspi's SPI driver bug in the
first versions).

Regards
Bernd



Re: I2C Single Master: peripheral or bit banging?
On 20/11/2020 11:38, Bernd Linsel wrote:
 > 1. The interrupt will only fire if a connected slave acknowledges the
 > address. If you want to catch the situation of a non-acknowledged start
 > & address byte, you have to set up a timer that times out.
False, at least for the SERCOM in I2C master mode (but I suspect other
MCUs behave the same way).
Quoting from the C21 datasheet:

   "If there is no I2C slave device responding to the address packet,
   then the INTFLAG.MB interrupt flag and
   STATUS.RXNACK will be set. The clock hold is active at this point,
   preventing further activity on the bus."



 > pulse SCL as fast as you can/need (within the spec). Clients can adjust
 > the speed pulling down SCL when they can't keep up with the master's speed.
Yes, I know. But you need some pauses in bit banging, otherwise you will
run at too high a speed. And you need a calibration if you use a dumb
loop on a volatile counter.


 > 3. As you not only have to bit-bang SCL & SDA according to the protocol,
 > [...] master correctly is not trivial; additionally, the CPU load is
 > remarkable.
If I'm not wrong, this happens only if you have some slaves that stretch
the clock to slow down the transfer. Even in that case, I don't think
monitoring the SCL line during the transfer is complex. Yes, you should
have a timeout, but you need that even when you use the hardware
peripheral.

CPU load? Many times I2C is used in a blocking way, waiting for an
interrupt flag. In that case, there's no difference whether the CPU waits
for an interrupt flag or drives the SCL and SDA lines.

Even if you need a non-blocking driver, you could use a hardware timer
and bit-bang in the timer's interrupt service routine.
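A sketch of that timer-ISR idea, with hypothetical pin callbacks: the
transfer state lives in a struct and is advanced one half clock period
per timer interrupt, so the CPU never busy-waits (start/stop and the ACK
clock are omitted for brevity; only the eight data bits are shifted out):

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    void (*sda)(bool release);   /* true = release (high), false = drive low */
    void (*scl)(bool release);
    uint8_t byte;                /* byte being shifted out, MSB first */
    int8_t  bit;                 /* 7..0 = data bits, -1 = finished */
    bool    phase_high;          /* which half of the clock period is next */
    bool    done;
} i2c_tx_t;

void i2c_tx_begin(i2c_tx_t *t, uint8_t b)
{
    t->byte = b; t->bit = 7; t->phase_high = false; t->done = false;
}

/* Called from the timer ISR: advances the transfer by one half clock
 * period per call. */
void i2c_tx_step(i2c_tx_t *t)
{
    if (t->done)
        return;
    if (!t->phase_high) {
        t->sda((t->byte >> t->bit) & 1);  /* present the data bit */
        t->scl(true);                     /* rising edge: slave samples */
        t->phase_high = true;
    } else {
        t->scl(false);                    /* falling edge: move on */
        t->phase_high = false;
        if (--t->bit < 0)
            t->done = true;
    }
}
```

The foreground code just checks the done flag (or gets a callback) while
the timer keeps the bus moving in the background.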



 > [...] peek on the according linux driver sources. They are often very
 > reliable (at least for chip families that are a bit mature), and if
 > there exist any issues, they are documented (cf. Raspi's SPI driver bug
 > in the first versions).
I'm talking about MCUs. Linux can't run on these devices.



Re: I2C Single Master: peripheral or bit banging?
On 20.11.2020 12:45, pozz wrote:

I'm fully aware of that. But Linux drivers often disclose some h/w
caveats and workarounds, or efficient strategies for dealing with the
peripheral's peculiarities...

Regards
Bernd



Re: I2C Single Master: peripheral or bit banging?
I think the issue is in what happened when the controller started the
cycle and issued a start bit to 'get' the bus: it sees that 'someone'
else did the same thing but got farther.

A start bit is done by, with SCL and SDA both high, first pulling SDA low,
and then SCL low a bit later. When the controller pulls SDA low, it then
looks and sees SCL already low, so it decides that someone else beat it
to the punch of getting the bus, so it backs off and waits. I suspect
that at that point it releases the bus, SDA and SCL both go high at the
same time (which is a protocol violation), and maybe the controller sees
that as a stop bit and the bus as now free, so it tries again, or it just
thinks the bus is still busy.

This is NOT the I2C 'arbitration lost' condition, as that pertains to
the case where you think you won the arbitration, but at the same time
someone else also thought they won it, and while sending a bit, you
find that your 1 bit became a 0 bit, so you realize (late) that you had
lost the arbitration, and thus need to abort your cycle and resubmit it.

This is a case of arbitration never won, and most devices will require
something external to the peripheral to supply any needed timeout mechanism.

Most bit-banged master code I have seen assumes single-master, as it
can't reliably test for this sort of arbitration-lost condition, being a
bit too slow.

Re: I2C Single Master: peripheral or bit banging?
On 20/11/2020 14:09, Richard Damon wrote:
 > I think the issue is by what you did when the controller started the
 > cycle and issued a start bit to 'get' the bus, it sees that 'someone'
 > else did the same thing but got farther.
 >
 > A Start bit is done by, with SCL and SDA both high, first pull SDA low,
 > and then SCL low a bit later. When the controller pulls SDA low, it then
 > looks and sees SCL already low, so it decides that someone else beat it
 > to the punch of getting the bus, so it backs off and waits. I suspect
 > that at that point it releases the bus, SDA and SCL both go high at the
 > same time (which is a protocol violation) and maybe the controller sees
 > that as a stop bit and the bus now free, so it tries again, or it just
 > thinks the bus is still busy.
No, SCL and SDA stay low forever. Maybe it drives SDA low, then SCL,
then tries to release one of SCL or SDA, and fails at that.


 > This is NOT the I2C 'Arbitration Lost' condition, as that pertains to
 > the case where you think you won the arbitration, but at the same time
 > someone else also thought they won it, and while sending a bit, you
 > find that your 1 bit became a 0 bit, so you realize (late) that you had
 > lost the arbitration, and thus need to abort your cycle and resubmit it.
Ok, call it bus error, I2C violation, I don't know. The peripheral is
full of low-level timeouts and flags signaling that something strange
happened. But shorting SDA and SCL does not set any of these bits.


 > This is a case of arbitration never won, and most devices will require
 > something external to the peripheral to supply any needed timeout
 > mechanism.
At least the peripheral should be able to report the strange bus state,
but its STATUS.BUSSTATE is always IDLE.


 > Most bit-banged master code I have seen, assumes single-master, as it
 > can't reliably test for this sort of arbitration lost condition, being a
 > bit too slow.
Of course, take a look at the subject of my post.

Re: I2C Single Master: peripheral or bit banging?
On 20.11.20 16.15, pozz wrote:


If you connect SCL and SDA together, you'll create a permanent
protocol violation. The whole of I2C relies on the two lines being
separate and open-collector/drain. Creating an unexpected short creates
a hardware failure. If you're afraid of such a situation, you should
test for it by bit-banging before initializing the hardware controller.

--  

-TV


Re: I2C Single Master: peripheral or bit banging?
On 20/11/2020 16:25, Tauno Voipio wrote:
 > If you connect SCL and SDA together, you'll create a permanent
 > protocol violation. The whole I2C relies on both being separate
 > and open-collector/drain. Creating an unexpected short creates
 > a hardware failure. If you're afraid of such a situation, you should
 > test for it bit-banging before initializing the hardware controller.
I know that, and I don't expect it to work in this situation, but my
point is another.

If an I2C hardware peripheral can hang for some reason (in my test I
deliberately made the short, but I imagine the hang could happen in other
circumstances that are not well documented in the datasheet), you have to
protect the driver code with a timeout.
You have to test your code in all cases, even when the timeout occurs.
So you have to choose the timeout interval with great care, and you have
to decide whether blocking for that long is acceptable (even in that rare
situation).

Considering all of that, maybe bit-banging is much simpler and more
reliable.


Re: I2C Single Master: peripheral or bit banging?
On 20.11.20 18.33, pozz wrote:
I have had thousands of industrial instruments in the field for decades,
each running some internal units with I2C, some bit-banged and others
on the hardware interfaces on the processors used, and not a single
failure due to I2C hanging.

Please remember that the I2C bus is an Inter-IC bus, not to be used for
connections to the outside of the device, preferably only on the same
circuit board. There should be no external connectors where e.g. the
shorts between the SCL and SDA could happen.

All the hardware I2C controllers have been able to be restored to a
sensible state with a software reset after a time-out. This includes
the Atmel chips.

--  

-TV


Re: I2C Single Master: peripheral or bit banging?
On 11/20/2020 19:39, Tauno Voipio wrote:
I did manage once to upset (not to hang) an I2C line. I had routed
SCL or SDA (likely both, I don't remember) quite close to the
switching MOSFET of an HV flyback, which makes nice and steep 100V
edges... :-).

I have dealt with I2C controllers on two parts I can think of now,
and both times it took me a lot longer to get them to work than it
had taken me on two earlier occasions when I bit-banged it, though...
They all did work of course, but the design was sort of twisted.
I remember one of them took me two days, and I was counting minutes
of my time on that project. It may even have been 3 days; it was
10 years ago.

Dimiter

======================================================
Dimiter Popoff, TGI             http://www.tgi-sci.com
======================================================
http://www.flickr.com/photos/didi_tgi/





Re: I2C Single Master: peripheral or bit banging?


That's my experience also. I've done bit-banged I2C a couple times,
and it took about a half day each time. Using HW I2C controllers has
always taken longer. The worst one I remember was on a Samsung ARM7
part from 20 years ago. Between the mis-translations and errors in the
documentation and the bugs in the HW, it took at least a week to get
the I2C controller to reliably talk to anything.

--
Grant



Re: I2C Single Master: peripheral or bit banging?
On 21.11.20 1.09, Grant Edwards wrote:


To add to that, the drivers by the hardware makers are also quite
twisted and difficult to integrate into surrounding software.

With ARM Cortexes, I'm not very fascinated by the provided drivers
in CMSIS. Every time, I have ended up writing my own.

--  

-TV

Re: I2C Single Master: peripheral or bit banging?
On 20/11/2020 18:39, Tauno Voipio wrote:



[...] you might be re-starting the CPU in the middle of an operation
without there being a power-on reset to the slave devices.  That can
easily leave the bus in an invalid state, or leave a slave state machine
out of synchronisation with the bus.  But I have not seen this kind of
thing happen in a live system.



Re: I2C Single Master: peripheral or bit banging?
On 21/11/2020 12:06, David Brown wrote:
 > On 20/11/2020 18:39, Tauno Voipio wrote:
 >
 >> I have had thousands of industrial instruments in the field for decades,
 >> each running some internal units with I2C, some bit-banged and others
 >> on the hardware interfaces on the processors used, and not a single
 >> failure due to I2C hanging.
 >>
 >

 > you might be re-starting the cpu in the middle of an operation without
 > there being a power-on reset to the slave devices.  That can easily
 > leave the bus in an invalid state, or leave a slave state machine out of
 > synchronisation with the bus.  But I have not seen this kind of thing
 > happen in a live system.
In the past I had a big problem with an I2C bus on a board: the
ubiquitous 24LC64 EEPROM connected to a 16-bit MCU from Fujitsu. In that
case, I2C was implemented by bit-bang code.

At startup the MCU read the EEPROM content and, if it was corrupted,
factory defaults were used and written to the EEPROM. This mechanism was
introduced to initialize a blank EEPROM at the very first power up of a
fresh board.

Unfortunately it sometimes happened that the MCU reset in the middle of
an I2C transaction with the EEPROM (the reset was caused by a glitch on
the power supply that triggered the MCU voltage supervisor).
When the MCU restarted, it tried to communicate with the EEPROM, but the
EEPROM was in an unsynchronized I2C state. This is well described in
AN686[1] from Analog Devices.

The MCU thought it was a blank EEPROM, so factory settings were used,
overwriting the user settings! What the user saw was that the machine
sometimes restarted with factory settings, losing the user settings.

In that case the solution was adding an I2C reset procedure at startup
(some clock pulses and a STOP condition, as described in the application
note).
I think this I2C bus reset procedure should always be added where there's
an I2C bus, and most probably it must be implemented in bit-bang code.


[1]  
https://www.analog.com/media/en/technical-documentation/application-notes/54305147357414AN686_0.pdf
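The recovery procedure from the application note fits in a few lines of
bit-bang code. A sketch with hypothetical pin callbacks (true releases
the open-drain line, false drives it low): clock SCL until the slave
releases SDA (at most nine pulses), then generate a START/STOP pair:

```c
#include <stdbool.h>

typedef struct {
    void (*sda)(bool release);
    void (*scl)(bool release);
    bool (*read_sda)(void);
    void (*delay)(void);
} i2c_pins_t;

/* Returns true if the bus was recovered (SDA released by the slave). */
static bool i2c_bus_reset(const i2c_pins_t *p)
{
    p->sda(true);                       /* release SDA: let the slave drive it */
    for (int i = 0; i < 9; i++) {
        if (p->read_sda())
            break;                      /* slave has released the line */
        p->scl(false); p->delay();
        p->scl(true);  p->delay();      /* one recovery clock pulse */
    }
    if (!p->read_sda())
        return false;                   /* still held low: needs a power cycle */
    /* With both lines high, reset every slave's protocol state. */
    p->scl(true);  p->delay();
    p->sda(false); p->delay();          /* SDA falls while SCL high: START */
    p->sda(true);  p->delay();          /* SDA rises while SCL high: STOP  */
    return true;
}
```

A slave that ignores even this (like the LM75 mentioned later in the
thread) can only be recovered by cycling its power.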


Re: I2C Single Master: peripheral or bit banging?
On 22/11/2020 17:48, pozz wrote:

Sure, add that kind of a reset at startup - it also helps if you are
unlucky when restarting the chip during development.

Also make sure you write two copies of the user data to the EEPROM, so
that you can survive a crash while writing to it.
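One way to sketch that two-copy scheme (the layout, checksum and names
are invented for illustration, and the EEPROM is modelled as a plain RAM
buffer): each slot carries a sequence number and a checksum, writes
alternate between the slots so the previous copy survives a crash, and
the loader picks the newest valid slot:

```c
#include <stdint.h>
#include <string.h>

#define SETTINGS_LEN 16

/* Each slot holds the settings plus a sequence number and a checksum.
 * The non-zero checksum seed makes an erased slot (all 0x00 or all
 * 0xFF) come out invalid. */
typedef struct {
    uint32_t seq;                 /* monotonically increasing write counter */
    uint8_t  data[SETTINGS_LEN];
    uint8_t  sum;
} slot_t;

static slot_t eeprom[2];          /* stand-in for the real I2C/SPI device */

static uint8_t checksum(const slot_t *s)
{
    uint8_t c = 0xA5 ^ (uint8_t)(s->seq ^ (s->seq >> 8) ^
                                 (s->seq >> 16) ^ (s->seq >> 24));
    for (int i = 0; i < SETTINGS_LEN; i++)
        c += s->data[i];
    return c;
}

static int newest_valid(void)
{
    int best = -1;
    for (int i = 0; i < 2; i++)
        if (checksum(&eeprom[i]) == eeprom[i].sum &&
            (best < 0 || eeprom[i].seq > eeprom[best].seq))
            best = i;
    return best;
}

/* Copies the newest valid settings into 'out'; returns the slot index,
 * or -1 when neither copy is valid (use factory defaults). */
int settings_load(uint8_t out[SETTINGS_LEN])
{
    int best = newest_valid();
    if (best >= 0)
        memcpy(out, eeprom[best].data, SETTINGS_LEN);
    return best;
}

/* Always overwrites the older (or invalid) copy, so a crash mid-write
 * leaves the other copy intact. */
void settings_save(const uint8_t in[SETTINGS_LEN])
{
    int newest = newest_valid();
    slot_t s;
    s.seq = (newest >= 0) ? eeprom[newest].seq + 1 : 1;
    memcpy(s.data, in, SETTINGS_LEN);
    s.sum = checksum(&s);
    eeprom[newest == 0 ? 1 : 0] = s;  /* the real write goes over the bus */
}
```

A truly blank device then just reads as "no valid slot", which is the
legitimate case for loading factory defaults.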

But if your board is suffering power supply glitches that are enough to
trigger the MCU brown-out, but not enough to cause a proper re-start of
the rest of the board, then /that/ is a major problem that you should be
trying to solve.



Re: I2C Single Master: peripheral or bit banging?
On 22/11/2020 19:44, David Brown wrote:

Yes, but it's not simple. As described in AN686, I2C EEPROMs don't have
a reset pin, so it's impossible for the MCU to reset the EEPROM at
startup. The only solution is to introduce dedicated hardware to remove
and reapply the power supply.

This is the main reason I now prefer to use SPI EEPROMs (when possible):
the slave-select signal of the SPI bus automatically restarts the
transaction.




Re: I2C Single Master: peripheral or bit banging?
On 23/11/2020 08:44, pozz wrote:

I too prefer SPI - it is often simpler and can be much faster.

However, if it is possible for your power supply to glitch and reset the
MCU, without being turned off properly (and therefore resetting your
eeprom), you have a definite hardware problem on the board.  (Of course,
boards vary in how much effort and money you are willing to spend to get
stability, and how unstable a supply you might have.)



Re: I2C Single Master: peripheral or bit banging?
On 23/11/2020 08:44, pozz wrote:

I hit "send" before I included a final point - the "dedicated hardware
to remove the power supply to the eeprom" is usually just a GPIO pin
from the microcontroller.  Often a GPIO pin can easily supply the
current needed for a low power device like an eeprom, so that you don't
need anything else.

Re: I2C Single Master: peripheral or bit banging?
On 11/23/20 2:44 AM, pozz wrote:
One thing to note about the I2C bus protocol is that a high-to-low
transition of the SDA line while SCL is high (a start bit) is supposed to
'reset' the communication channel of every device on the bus and put it
in the mode to compare the next 8 bits as a device address.

Thus, if at the 'random' reset no device is driving the SDA line low,
then as soon as the master starts a new cycle, everything is back in sync.

It is possible that a device is in the middle of an ACK or a read cycle
at the point of reset, holding SDA low. The master just needs to cycle
SCL until SDA goes high, completing that ack or read cycle. The read
might need up to 8 clocks. Once we have SDA and SCL high, we can generate
the start to get everyone listening.

Devices should not be holding SCL low for extended periods, so that
shouldn't be a problem (or it is a problem of a different nature if you
do have an oddball infinite bus extender).

Re: I2C Single Master: peripheral or bit banging?
On 11/23/20 8:25 PM, Richard Damon wrote:

Also, a sequence of >= 10 clock pulses is supposed to get devices back
into normal function.

Unfortunately not all devices have read this spec.

I had one (an LM75) which could be driven reproducibly into a
non-responding mode by a short glitch on data or clock. The only way to
recover was a power cycle.
