Question of TV technology, if anyone can answer two questions

This has to do with why they do what they do. I consider myself fairly knowledgeable on that subject, but no one knows everything, so here goes.

We had a Sony RPTV from about 1991 or 1992 come through. This thing was obviously low hours, and I got that "tick" when I took the screws out of the back. It may never have been serviced, and knowing where it came from, I strongly suspect it hadn't been. It had strong CRTs and operated perfectly, except that it needed a coolant job (actually, this being that kind of Sony, more of a CRT face-scraping job). While I had the CRTs out, I called the boss over. He is in his fifties and used to be a technician, but once you start your own business it keeps you too busy to stay one.

He told me he actually flunked a test in school. He had the circuit all designed, and the teacher drew a candle on it: he had forgotten the filaments! I joked that he was ready for solid state.

So I pulled him aside as he walked by, because I had spotted the pincushion magnets. I pointed at them and asked, "You recognize these?" He didn't, so I told him, "That's the pincushion correction, remember?" Then it came back to him. That is the old way of doing it.

Of course we all know you cannot use that method on a color CRT, since the magnets would wreck purity, but projection TVs use monochrome CRTs.

So why don't they stick with a tried-and-true method for this application, rather than pushing the convergence circuit so hard that it has become the most common RPTV fault?

And further, the other question: why don't they use electrostatic deflection, at least for the horizontal? I am pretty sure today's transistors would have a lot of trouble doing 1080i, if they can manage it at all, because the yoke is inductive. Start kicking H up to 67.2 kHz and it is no fun. But with a non-inductive load, wouldn't scan rate changes be easier to manage? They do it in spades in scopes.
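For reference, the 1080-line formats use a nominal 1125 total lines per frame, so 1080i at 30 frames (60 fields) per second works out to about 33.75 kHz, and it is 1080p at 60 frames that lands near 67.5 kHz. A quick Python scribble just to show the arithmetic; the 1125 figure is the standard total including blanking, and nothing here is measured off a real chassis:

    # Horizontal line rate = total lines per frame x frames per second.
    # 1125 is the nominal total line count (active + blanking) for the
    # 1080-line HD formats.

    TOTAL_LINES = 1125

    def h_rate_khz(frames_per_second):
        return TOTAL_LINES * frames_per_second / 1000.0

    print("1080i (30 frames / 60 fields):", h_rate_khz(30), "kHz")  # 33.75 kHz
    print("1080p (60 frames):            ", h_rate_khz(60), "kHz")  # 67.5 kHz

Either way, the pain with an inductive load is the flyback kick across the yoke, V = L * di/dt, during the fast retrace; that is what gets worse as the line rate climbs.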

To maintain geometry, and even handle convergence, the drive circuitry for three sets of horizontal amps would be no more complex than what drives today's digital convergence circuits.

Now there is one factor, other than cost, that might shoot this down. Actually, I don't think the cost would be all that great, and then all they would need to deal with is the vertical.

If they did the vertical electrostatically too, there would be no convergence circuit at all; it could all be done by the main sweep circuits. But here is the one problem there might be with that.

Electrostatic deflection might be more affected by beam current changes. I don't know enough about CRT technology to say for sure. However, they have already found out that fixed deflection combined with precise HV regulation does not work. The raster size will shift because beam current affects deflection sensitivity. That's why there are separate resistors going to each CRT anode in the high voltage splitter. It is also why they have abandoned extremely tight HV regulation in favor of more precise, modulated control of the deflection. They have tied HV level to beam current, and also use it to control the vertical drive now.
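On whether electrostatic deflection is more sensitive to anode voltage changes: the textbook small-angle relations say it is, by roughly a factor of two. Electrostatic deflection goes as 1/Va, magnetic deflection as 1/sqrt(Va). A quick sketch with made-up numbers (30 kV nominal, 2% sag under beam load); these are illustrative assumptions, not measurements from any set:

    import math

    # Rough comparison of how a sag in anode voltage Va changes deflection,
    # using the textbook small-angle relations:
    #   electrostatic deflection ~ 1/Va
    #   magnetic deflection      ~ 1/sqrt(Va)
    va_nominal = 30e3                 # assumed nominal anode voltage, volts
    va_loaded  = va_nominal * 0.98    # assumed 2% sag at high beam current

    def electrostatic_gain(va):
        return 1.0 / va               # deflection proportional to 1/Va

    def magnetic_gain(va):
        return 1.0 / math.sqrt(va)    # deflection proportional to 1/sqrt(Va)

    for name, gain in (("electrostatic", electrostatic_gain),
                       ("magnetic", magnetic_gain)):
        growth = gain(va_loaded) / gain(va_nominal) - 1.0
        print(f"{name}: raster grows about {growth * 100:.2f}% for a 2% HV sag")

    # Prints roughly 2.04% for electrostatic and 1.02% for magnetic.

So whatever raster-size shift you already fight from HV sag with a yoke would be roughly doubled with plates, at least by these first-order relations.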

So why can't they use electrostatic deflection and deal with these problems just like they do now? The only difference is that there would be no large current flowing. Aside from the capacitance of the deflection plates, which should be far easier to deal with than the inductance of a yoke, why don't they do it?
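For what it's worth, the capacitive load itself does look tame on paper. Assuming, purely for arithmetic's sake, something like 15 pF of plate capacitance and a few hundred volts of swing (a real projection tube running 30 kV on the anode would likely need a much larger swing than that, per the 1/Va relation above), the drive numbers come out small:

    # Back-of-the-envelope look at driving a capacitive deflection plate
    # pair instead of an inductive yoke. All numbers are assumptions for
    # illustration, not specs from any real CRT.
    c_plates  = 15e-12   # assumed plate capacitance, farads (15 pF)
    v_swing   = 400.0    # assumed peak-to-peak deflection voltage, volts
    t_retrace = 1e-6     # assumed horizontal retrace time, seconds (1 us)

    # Average current needed to slew the plates across the full swing
    # during retrace: i = C * dV/dt
    i_retrace = c_plates * v_swing / t_retrace
    print(f"retrace drive current: {i_retrace * 1e3:.1f} mA")   # ~6 mA

    # Energy to charge the plates to the full swing, times a 67.5 kHz
    # line rate, assuming none of it is recovered (worst case).
    f_line = 67.5e3
    p_cycled = 0.5 * c_plates * v_swing**2 * f_line
    print(f"energy cycled per second: {p_cycled:.2f} W")        # well under a watt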

I don't think the deflection sensitivity issue is all that big. When I was in my twenties I had a shop, and a guy came in wanting to work cheap and learn. During one of the "lessons" I capacitively coupled a video signal to the Z axis input of a scope, synced the horizontal, and fed the vertical waveform to the vertical input. It did not seem to have any problem with intensity modulation. Also, the scope circuits I have seen, admittedly older ones, did not include anything elaborate to deal with it. So my assumption is that it is no more of a problem than in a magnetically deflected CRT.

What I am here for is to have holes shot in my theory. There must be a reason, and money is no longer it. Many techs still recommend CRT-based RPTVs, and there will still be a demand. But they could do 1080p!

JURB


snipped-for-privacy@aol.com wrote:

> This has to do with why they do what they do. I consider myself fairly
> knowledgeable on that subject, but no one knows everything, so here goes.
> ...
> But they could do 1080p!

I have two questions.

Why would anyone give two hoots about CRTs? The optics are crap. There isn't enough light even from three mono CRTs. Convergence and geometry are mediocre at best. Stability is poor, and as you mentioned, so is reliability.

Question 2. Have you looked at the newer display types? Even the 'worst' one will blow CRTs out of the water.

We have a four-year-old Samsung DLP that has had one lamp failure and one color wheel in 9,100 hours of operation. Total parts cost < $200. Total repair time < 2 hours. It's brighter and clearer than ANY CRT-based RPTV set ever. The 'convergence' is flawless and the geometry error is unmeasurable. Plus it has a DVI input for 1:1 pixel mapping from a modest PC that doubles as a high-def video recorder. And it weighs less. And it uses less power. And it has a smaller footprint. And it has no fluids to leak. And it fits into a minivan. And it has no high voltage to attract dust. And it makes less RFI.

OK, I didn't convince you. I, however, am done with CRTs, and judging by what is in the stores, I'm not alone.

GG

