I'm considering a 5x7 LED digit module design to work with a microcontroller, based on these simple ideas:
(1) Constant-current drive for the LEDs, settable by a single external resistor. This sets the maximum (100%) LED current. PWM (or PFM) will be used to reduce the effective brightness on an LED-by-LED basis, referenced to this maximum setting.
(2) A microcontroller Vcc rail, from 2.5V to 5.0V.
(3) Separate LED power supply rail, which can be higher than, but not less than, the Vcc rail. The LED rail can be up to 15V, though dissipation issues will dictate that it be as low as possible. In the case of RGB, I may want three such rails. The idea is to keep dissipation as low as reasonable, but to allow flexibility, too (two LEDs in series, for example, or blue vs. red, etc.).
(4) Muxing is done 5:1, placing 7 LEDs per row, 5 rows, with 5x the average current. Since I may use 20mA LEDs, this means 100mA peak LED current (as set by the external resistor).
(5) No more than 5% variation in LED current across active LEDs in a row. Variations in LED V-vs-I characteristics yield noticeable, unacceptable differences in brightness when LEDs are operated in parallel.
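For what it's worth, the mux arithmetic in (4) can be sanity-checked in a few lines of Python. The 700mA row-line figure is just the worst case of all 7 LEDs in a row lit at once through whatever feeds that row (shared or split across switches):

```python
# Quick sanity check of the 5:1 mux arithmetic.
# Assumptions (mine): 20 mA target average per LED, 5-row scan,
# so each row is on 1/5 of the time.
rows, cols = 5, 7
duty = 1.0 / rows
i_avg = 0.020            # target average LED current, amps
i_peak = i_avg / duty    # current needed while the row is on
row_peak = i_peak * cols # worst case: all 7 LEDs in a row lit

print(f"peak per LED: {i_peak * 1000:.0f} mA")      # 100 mA
print(f"row-line peak: {row_peak * 1000:.0f} mA")   # 700 mA
```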
I developed a tentative schematic (without slew-rate limits), but I'm using 64 BJTs to manage 35 LEDs. It works, and none of the transistors handles more than Ic = 100mA, neither the row nor the column ones. But that's a lot of BJTs, and I'm betting I'm missing something terribly obvious. (I'm using cheap 0.3 and 0.4 cent BJTs, so it's not expensive. Everything else is what costs money.)

These 5x7 displays will be about 3" high, by the way, and I'm using the 3D printer to make the black grids for mounting 5mm LEDs.
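To put rough numbers on the dissipation concern in (3): the sketch below assumes a red LED with Vf around 2.0V at 100mA (my guess, not a datasheet figure) and estimates the average power each constant-current sink burns for two LED-rail choices:

```python
# Rough per-channel dissipation in the constant-current sink, showing
# why the LED rail should sit as low as possible.
# Assumptions (mine, not measured): Vf ~ 2.0 V at 100 mA, 5:1 mux.
i_peak = 0.100   # A, set by the external resistor
duty = 1.0 / 5   # 5:1 mux
vf = 2.0         # assumed LED forward voltage at peak current

for v_rail in (5.0, 15.0):
    v_sink = v_rail - vf            # voltage the sink must drop
    p_avg = v_sink * i_peak * duty  # average dissipation per sink
    print(f"{v_rail:4.1f} V rail: {p_avg * 1000:.0f} mW average per channel")
```

At a 5V rail that's 60mW per channel; at 15V it balloons to 260mW, which across 35 channels is the difference between a warm board and a hot one.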
Here's a portion of the schematic that gets my mental state (bad as it may be) across:
What am I missing about simplifying this without putting LEDs in parallel with each other during muxing?
Jon