I think I have a sound understanding of how to use a transistor to "get more current" from a microcontroller pin, but I'd just like to run my reasoning past you to see if any of it's flawed.
Let's say I have a microcontroller pin that can supply 25 mA, but I want to use it to power a lightbulb that will draw 250 mA.
I'll use a transistor as a "switch" to achieve this. When the "switch" is on, I want the collector-to-emitter path to behave like a short circuit. When the "switch" is off, I want it to behave like an open circuit.
I want to set the circuit up so that the transistor is always either in cut-off mode or in saturation mode -- I don't want to be in the "active region" at all.
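(On the firmware side I'm just planning to toggle the pin. Something like this Arduino-style blink sketch is what I have in mind, with the pin number picked arbitrarily:)

```cpp
// Arduino-style sketch: the pin only supplies the small base current;
// the transistor handles the bulb's 250 mA.
const int BULB_PIN = 9;  // arbitrary pin choice, just for illustration

void setup() {
  pinMode(BULB_PIN, OUTPUT);
}

void loop() {
  digitalWrite(BULB_PIN, HIGH);  // base current flows: saturation, bulb on
  delay(1000);
  digitalWrite(BULB_PIN, LOW);   // no base current: cut-off, bulb off
  delay(1000);
}
```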
- So I get a transistor, and I connect the microcontroller pin to a resistor that goes to the base of the transistor.
- I connect the emitter of the transistor directly to ground.
- I connect the collector of the transistor to the lightbulb, the other side of which goes directly to Vcc.
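So the whole thing looks roughly like this:

```
Vcc ----[ lightbulb ]----+
                         |
                       C |
MCU pin ----[ Rb ]---- B |   TIP121 (NPN Darlington)
                       E |
                         |
                        GND
```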
Now I just need to pick the value for the resistor (Rb) going into the base of the transistor:
- I get the Beta of the transistor (which for the TIP121 device is 1000, going by the datasheet's minimum hFE).
- I decide on a maximum current that will flow into the collector. (I'll pick 260 mA in the case of my lightbulb).
- Now I divide the collector current by Beta to figure out what base current I need to put the transistor in saturation. (260 mA / 1000 = 260 µA.)
- Now I consider the voltage applied by the microcontroller pin, which is 5 V. From this 5 V, I subtract the Vbe voltage drop of the transistor (which for the TIP121 device is 1.4 V).
- So now I know that I need a resistor that will allow at least 260 µA to flow when there's 3.6 V across it.
R = V / I = 3.6 V / 260 µA = 3.6 / (260 × 10⁻⁶) ≈ 13.8 kΩ, so roughly 14 kΩ
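To double-check the arithmetic (including the 12 kΩ candidate I mention below), here's a quick throwaway C++ program with the same numbers; beta and Vbe are just the TIP121 figures I assumed above:

```cpp
#include <cstdio>

int main() {
    // Numbers from my reasoning above
    const double v_pin = 5.0;    // microcontroller pin voltage (V)
    const double v_be  = 1.4;    // assumed TIP121 base-emitter drop (V)
    const double i_c   = 0.260;  // chosen maximum collector current (A)
    const double beta  = 1000.0; // TIP121 minimum current gain

    const double i_b   = i_c / beta;           // required base current (A)
    const double r_max = (v_pin - v_be) / i_b; // maximum base resistor (ohms)

    // Check a standard value below that maximum
    const double r_pick = 12e3;                    // candidate 12 kOhm resistor
    const double i_pick = (v_pin - v_be) / r_pick; // base current with 12 kOhm

    printf("Required base current: %.0f uA\n", i_b * 1e6);        // 260 uA
    printf("Maximum base resistor: %.1f kOhm\n", r_max / 1e3);    // ~13.8 kOhm
    printf("Base current with 12 kOhm: %.0f uA\n", i_pick * 1e6); // ~300 uA
    return 0;
}
```

It prints a maximum of about 13.8 kΩ, and with a 12 kΩ resistor the base current comes out around 300 µA, still a tiny fraction of the 25 mA the pin can source.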
So, am I right in thinking that the maximum value for my base resistor is 14 kΩ, and that I'd be more than safe to use a 12 kΩ resistor?
If any of my reasoning is a bit wishy-washy then please point it out to me!