Hi guys,
As most people here are likely aware, if you examine the coax cable of a passive oscilloscope probe, you find that the inner conductor is made of a highly resistive wire -- something like nichrome, on the order of 50-200 ohms per meter. (For some background on this, check out this article:)

What's not clear to me, though, is how one analytically determines the desired resistance of the coax in such a situation, given knowledge of the source and termination impedances of the cable. The magazine article above mentions that, at least at Tektronix, John Kobbe came up with the idea... and if you dig into some of the Tek archives, you can find his reminiscences about doing so, where he says something along the lines of, "it occurred to me that lossy coax would work well here [to greatly extend the bandwidth of a passive probe], so I just calculated what the appropriate resistance would be, ran down to the stock room, got some, and tried it out... it worked great, and Howard Vollum himself took me to dinner that night as a reward!" (Just kidding on that last part...)
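For reference, the textbook lossy-line relations I'm starting from are Z0 = sqrt((R + jωL)/(G + jωC)) and γ = α + jβ = sqrt((R + jωL)(G + jωC)). Here's a quick Python sketch I threw together -- the RLGC values are pure guesses for a thin probe coax, just to get a feel for the numbers -- showing how the resistive center conductor makes Z0 complex at low frequency and piles on attenuation at high frequency:

```python
import cmath
import math

# Guessed per-meter RLGC values for a thin resistive-center probe coax.
# R is the distributed series resistance of the nichrome-type inner
# conductor (the 50-200 ohm/m range mentioned above); L and C are
# ballpark figures for miniature coax, and G is taken as negligible.
R = 100.0      # ohm/m
L = 250e-9     # H/m
C = 100e-12    # F/m
G = 0.0        # S/m

for f in (1e5, 1e6, 1e7, 1e8):
    w = 2 * math.pi * f
    Zs = R + 1j * w * L          # series impedance per meter
    Yp = G + 1j * w * C          # shunt admittance per meter
    Z0 = cmath.sqrt(Zs / Yp)     # complex characteristic impedance
    gamma = cmath.sqrt(Zs * Yp)  # propagation constant, alpha + j*beta
    alpha_db = 20 * gamma.real / math.log(10)   # Np/m -> dB/m
    print(f"{f:9.0e} Hz   Z0 = {Z0:.1f} ohm   loss = {alpha_db:.2f} dB/m")
```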
Anyway. OK, it worked... cool! But... ummm... does anyone happen to have some pointers on how he might have gone about performing that little calculation? I'm pretty well-versed in transmission-line theory, and complex characteristic impedances don't scare me. :-) I'd rather read up on a little theory here than just repeat the "cut and try" simulator approach the article above uses.
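For concreteness, here's the sort of numeric experiment I could do instead of SPICE -- cascade the line's ABCD matrix between a guessed source impedance and a guessed scope input (every value below is hypothetical, not anything Tek actually used), and watch the resonant peaking of the lossless line flatten out once the series resistance goes in:

```python
import cmath
import math

# Hypothetical setup: 1.2 m of probe coax driven from a low-impedance
# source and terminated in a scope-like 1 Mohm // 20 pF input. The line
# is modeled by its ABCD matrix; only R (ohm/m) is varied.
LEN = 1.2          # m, cable length
L = 250e-9         # H/m
C = 100e-12        # F/m
Z_SRC = 25.0       # ohm, source-side impedance
R_TERM = 1e6       # ohm, scope input resistance
C_TERM = 20e-12    # F, scope input capacitance

def transfer(f, r_per_m):
    """Load voltage relative to the source EMF: EMF -> Z_SRC -> line -> scope."""
    w = 2 * math.pi * f
    Zs = r_per_m + 1j * w * L
    Yp = 1j * w * C
    Z0 = cmath.sqrt(Zs / Yp)
    g = cmath.sqrt(Zs * Yp) * LEN
    A, B = cmath.cosh(g), Z0 * cmath.sinh(g)
    Cc, D = cmath.sinh(g) / Z0, cmath.cosh(g)
    Zt = 1 / (1 / R_TERM + 1j * w * C_TERM)   # scope input impedance
    return Zt / (A * Zt + B + Z_SRC * (Cc * Zt + D))

for f in (1e6, 1e7, 3e7, 1e8, 3e8):
    h0 = abs(transfer(f, 0.0))     # lossless, copper-like line
    h1 = abs(transfer(f, 100.0))   # resistive-wire line
    print(f"{f:9.0e} Hz   |H| lossless = {h0:6.3f}   at 100 ohm/m = {h1:6.3f}")
```

But of course that's still "cut and try," just in Python rather than a simulator -- what I'm after is the theory for picking R in the first place.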
Thanks,
---Joel