Currently-Available Highest-Quality Linear PCM Video?

Hi:

What are the sample rates and picture resolution [in pixels X pixels] of the professional progressive [non-interlaced] linear-PCM video format used today? I believe linear-PCM video signals are used in professional studios.

Thanks,

Radium

Reply to
Radium

Radium wrote:

Start here

formatting link

and here

formatting link

and here

formatting link

There are MANY more.

GG

Reply to
stratus46

If you're talking about standard-definition television, and the case of standard sampling formats for the conversion and storage of "analog" video into digital form, the most common international standard today is probably CCIR-601, which uses a common 13.5 MHz sampling rate for both 525/60 and 625/50 video systems; this results in image formats (not "resolutions," please) of 720 pixels x 480 lines for the former and 720 x 576 for the latter. I don't know what you mean by bringing "linear-PCM" into a question of sampling rates and image formats.
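As a quick sanity check on those numbers, here's a minimal Python sketch (the 13.5 MHz sampling rate and the two line rates are the standard values; everything else falls out of the division):

# CCIR-601: one common 13.5 MHz luma sampling clock for both systems
fs = 13.5e6               # luma sampling rate, Hz
h_525 = 15_734.26         # 525/60 line rate, Hz
h_625 = 15_625.0          # 625/50 line rate, Hz

print(fs / h_525)         # ~858 total sample periods per line (525/60)
print(fs / h_625)         # 864 total sample periods per line (625/50)
# Of those, 720 per line are "active" picture in both systems, giving
# the 720 x 480 and 720 x 576 image formats described above.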

Bob M.

Reply to
Bob Myers

What equation did you use to get the numbers 720, 480, and 576?

Linear PCM is uncompressed PCM. That's what I am talking about. The sampling rate must be at least twice the highest frequency in the signal. Due to physical conditions, it is safe to make the sample rate at least 2.5x the highest frequency in the signal.

In NTSC, the horizontal frequency is 15.734 kHz, the vertical frequency is 60 Hz, and the color subcarrier frequency is 3.579545 MHz. This means that the horizontal sample rate must be 39.335 kHz or higher, the vertical sample rate must be at least 150 Hz, and the color subcarrier sample rate must be no less than 8.9488625 MHz.
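For what it's worth, here is that 2.5x arithmetic as a small Python check (it simply multiplies the NTSC frequencies above by 2.5; whether those are the right quantities to be sampling at all is a separate question, as the replies below point out):

oversample = 2.5
ntsc = {
    "horizontal": 15_734.0,           # Hz
    "vertical": 60.0,                 # Hz
    "color subcarrier": 3_579_545.0,  # Hz
}
for name, freq in ntsc.items():
    # 39,335 Hz, 150 Hz, and 8,948,862.5 Hz respectively
    print(name, freq * oversample)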

What is the pixel X pixel resolution -- or "format" if you wish -- of today's first-class video signal? Surely it would have to be more than 720 X 576. My monitor is displaying a pixel X pixel -- or "screen area" -- of 1280 X 1024 with 32-bit color. I am not sure of the frequencies of my monitor.

Also, is there supposed to be a special difference between the first number and the second number [such as 1280 X 1024 or 720 X 576]?

Thanks,

Radium

Reply to
Radium

formatting link

CCIR-601 uses interlaced -- not progressive -- signals. I like progressive and dislike interlaced.

Reply to
Radium

You really need to go look things up and not argue about things you apparently know little to nothing about. Bob Myers told you exactly right about SD TV. You can go look this up: the sample rate is 858 x H rate = 13.5 MHz for luma. The 720 is the active number of samples out of the 858; that 'dead' time is to allow CRT-based monitors time to retrace the horizontal. The '480 line' is computer talk for the 525-line system, of which 483 lines are active; the remaining lines allow vertical retrace of a CRT monitor. If you think about it, LCD and plasma do not require retrace, since there is no scanning, just counting off samples.

HDTV works in a similar fashion, but the numbers are all different. Try ATSC.ORG. Monitor pixel ratios are often 4:3 or 16:9 to make square pixels, but none of that is etched in stone. Rectangular pixels are often used, and the square samples are simply re-mapped into rectangular ones. Now go study.
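A short Python sketch of the structure GG describes (the numbers are taken straight from the post):

h_rate = 15_734.26            # NTSC line rate, Hz
total_samples = 858           # luma sample periods per total line
active_samples = 720          # samples carrying actual picture
total_lines, active_lines = 525, 483

print(total_samples * h_rate)           # ~13.5 MHz luma sample rate
print(active_samples / total_samples)   # ~0.84: the rest is H retrace time
print(active_lines / total_lines)       # ~0.92: the rest is V retrace time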

GG

Reply to
stratus46

SDTV is not "first class".

What about the color subcarrier sample rate? The horizontal frequency sample rate?

That is why plasma and LCD are better than CRT. In addition, plasma is better than LCD.

IIRC, HDTV is a tad closer to "first class" than SDTV. Again, I could be wrong.

Reply to
Radium

You might be surprised if you saw a serial digital feed on a broadcast monitor. Many folks would find it to be totally satisfactory -- if they ever got to see it.

There is no subcarrier for component digital. You didn't look it up. The subcarrier is introduced when the component signal is encoded into composite. This does not happen with DVD or SD digital TV. The horizontal 'sample rate' is a continuous 13.5 MHz, with a new line starting on every 858th sample, and the vertical is 262.5 x 858 samples. The 2 chroma difference channels are each sampled at 429 x H for another 858 time periods, for a total sample rate of 27 MHz. Bump those numbers up to roughly 150 MHz for HD.
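Written out as a sketch (Python; the per-channel rates are the ones given above, and the 10 bits/sample is the studio word size mentioned later in the thread):

luma = 858 * 15_734.26        # Y channel, ~13.5 MHz
chroma = 429 * 15_734.26      # each color-difference channel, ~6.75 MHz
total = luma + 2 * chroma     # 4:2:2 component sampling

print(total / 1e6)            # ~27.0 Msamples/sec total
print(total * 10 / 1e6)       # ~270 Mbit/s at 10 bits/sample (SD serial digital)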

That has nothing to do with why LCD or plasma is 'better' than a CRT. You will find a good number of people who think the CRT is a better image even in HD. Much of what is 'bad' about the CRT is the support electronics, mainly poor power supply regulation.

GG again

Reply to
stratus46

AFAIK, it may be "satisfactory" but it is not the best quality currently available.

Is HD the best quality currently available? What is the pixel X pixel screen area in HD? My LCD computer screen is 1280 X 1024 pixels.

I prefer "first class" uncompressed linear-PCM video signal viewed through a "first class" plasma screen.

Both LCD and plasma are more resistant to EMI/RFI than CRTs. Plasma offers better clarity than LCD or CRT.

Reply to
Radium

Radium wrote:

Quite simply put, you're not going to get it now or in the near future. The best you can get today is MPEG-2 HDTV over the air with an ATSC receiver delivering 19.39 mega BITS/second. That's compressed between 75 and 80:1, and that's the BEST available. Remember, uncompressed HDTV pushes out roughly 185 Mbytes/second -- and those samples are 10-bit, not 8-bit -- so your 1.485 Gbit/s gets compressed down to 19.39 Mbit/s. But I just got done watching NCIS in HD and it really looked quite good. So go spend some money on a 1920x1080 native resolution set with ATSC, put up an antenna, and have at it. I've been watching HD for nearly 3 years and have 2 computers to record HD OTA feeds. They work very well, and you don't need the latest whiz-bang computer to do it. A Sempron 2500 will do just fine.
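A rough Python check of that compression claim (1.485 Gbit/s is the standard serial rate for uncompressed 10-bit 1080i, and 19.39 Mbit/s is the ATSC transport rate; both are my assumptions, close to the figures in the post):

uncompressed = 1.485e9        # uncompressed 1080i serial rate, bits/sec
atsc = 19.39e6                # ATSC over-the-air transport rate, bits/sec
print(uncompressed / atsc)    # ~77:1, i.e. between 75:1 and 80:1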

More of the support electronics issues, but I agree that the CRT is not what I want. Non-CRT sets all suffer from math rounding-error noise when converting the digital values to PWM to get variable brightness. CRTs, with all their faults, do not have that issue.

GG

Reply to
stratus46

No equation; that's what the standard defines. The 720 is the number of active samples per line; if you divide the period of one horizontal scan line up using a 13.5 MHz sampling rate, you will get 858 samples per line (in the case of the standard "NTSC" line rate, 15.73426 kHz). Of these, 720 are "active" (contain information corresponding to the actual displayed image). The 525/60 scan format also has about 484 lines per frame of active video (i.e., what's left - 525 minus 484 lines - is used in vertical blanking), and CCIR-601 decided to use a standard format of 480 lines just to round it to a convenient number. Similarly, there are about 576 active lines in the 625/50 scan formats.

Which has absolutely nothing to do with how the data is encoded, which is what "PCM" is all about - hence I have no idea why you are bringing that term up in a question regarding sampling rates. Note that the 13.5 MHz rate easily meets the Nyquist requirement of being >2X the bandwidth (technically, it's bandwidth, not "the highest frequency in the signal") of the video (note that U.S.-standard video occupies roughly 5 MHz of a 6 MHz channel).
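In sketch form (Python; the 5 MHz is the video bandwidth cited above):

fs = 13.5e6               # CCIR-601 luma sampling rate, Hz
bandwidth = 5.0e6         # approx. baseband video bandwidth, Hz
print(fs > 2 * bandwidth) # True: 13.5 MHz clears the 10 MHz Nyquist minimum
print(fs / 15_734.26)     # ~858 sample periods per 525/60 scan line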

Standard definition video - i.e., anything other than high-definition TV, which has different standard formats - is never any better than about 720 x 480 pixels/lines for the U.S. 525/60 standard format, or 720 x 576 for the "European" 625/50 format. The actual effective resolution, in the proper sense of that term (the degree to which detail can actually be resolved in the displayed image) is generally a good deal less than this. Yes, computer formats contain considerably more pixels than anything SDTV is capable of.

Standard HDTV formats are 1280 x 720 pixels and 1920 x 1080 pixels.

The way it's normally stated in PC usage, the first number is the number of pixels per line, while the second is the number of lines per frame. Traditional TV practice has been just the opposite - give the number of lines first, then the number of "pixels" - but actually, until the advent of digital television there really wasn't any such thing as "pixels" in the TV engineer's vocabulary.

Bob M.

Reply to
Bob Myers

In that case, what do you think "first class" means? You asked about video as it appears in professional studios - unless you're talking about HDTV broadcast, SDTV is all there is.

Those phrases are nonsense. There can be different sample rates used for the "chrominance" components vs. the luminance (i.e., go look up "4:2:2" and "4:2:0" sampling as opposed to "4:4:4"), but you would never refer to this as the "color subcarrier sample rate."

Actually, this has very little to do with the relative performance of these technologies. And before we start talking about which is "better," you'd have to ask what "better" means. "Better" in what aspect?

Since you have yet to tell anyone just what you are thinking of when you say "first class," it's impossible to respond any further. Do you mean the digital cinema standards?

Bob M.

Reply to
Bob Myers

Yes, it IS, in terms of entertainment video. Any "television" type programming you are going to encounter is either SDTV or HDTV; the vast majority is SDTV, and you would be surprised at just how good a 720 x 480 image can look on the right display.

You still don't seem to have any clue at all how "linear PCM" plays into all this...

No, those technologies are more resistant to MAGNETIC interference than the CRT; this is not "EMI" or "RFI." It is also impossible that either plasma or the LCD would be superior vs. the other in terms of "clarity," if by that you mean delivered image resolution. Both are fixed-format technologies, so when driven at their native format, each pixel of the image is as clear as it's ever going to get.

Bob M.

Reply to
Bob Myers

Correct, but not relevant to the question you asked, which concerned standard sampling rates and the resulting image formats. If you're talking about sampling video, you are talking about converting an analog video signal to digital form - and ALL analog standard-definition broadcast video standards use interlaced scanning.

There are similar standards for progressive-scanned video, but since your question clearly was with respect to broadcast studio practice, they were also not relevant to the question.

Since you seem to be able to find Wikipedia, why aren't you looking into your questions THERE before asking them here?

Bob M.

Reply to
Bob Myers

He can't troll the wiki.

--
Service to my country? Been there, done that, and I've got my DD214 to prove it.
Reply to
Michael A. Terrell

I asked a question about linear PCM video 3 years ago. Here is the response I received:

formatting link

Quotes from the above link:

"There is uncompressed PCM for video. The data rate is 270 Mbit/sec for

standard definition, 525 line, 60 field interlaced. The computer folks refer to this as 480i. For Hi def, the data rate bumps up to 1.5 Gbit/sec for 1920x1080 interlaced. Looks danged fine,too. This stuff is only seen in studios and post production facilities. It makes going to work fun. "
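For reference, the 1.5 Gbit/sec figure in that quote can be reconstructed from the 1080i raster (a Python sketch; the 2200 x 1125 total raster and 10-bit 4:2:2 sampling are my assumptions, taken from the usual SMPTE numbers for 1080i at 30 frames/sec):

total_samples = 2200          # total luma sample periods per line
total_lines = 1125            # total lines per frame
frames = 30                   # frames/sec (interlaced: 60 fields/sec)
bits = 10                     # bits per sample

luma_rate = total_samples * total_lines * frames     # 74.25 MHz
serial_rate = luma_rate * 2 * bits                   # x2 for 4:2:2 chroma
print(serial_rate / 1e9)      # 1.485, the "1.5 Gbit/sec" in the quote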

My question is: what are the sample rate and color resolution of the "Hi def" mentioned above with 1920 X 1080? And why is it interlaced? I want it progressive.

Linear PCM is uncompressed digital info. If linear PCM is used for audio [e.g. WAV files], then why not for video?

Here is my insane type of premium video:

Linear PCM video [with at least 320-bit color resolution, at least 109,200x100,800 pixel progressive [non-interlaced] picture resolution, and a sample rate of at least 1,350 THz] viewed via a plasma screen with at least 1,000x the capabilities of the aforementioned values.
Reply to
Radium

formatting link

Note that the sentence "there is uncompressed PCM for video" does not equate to "all digital video is PCM" or "PCM is required for high-quality digital video." PCM is simply one possible encoding and transmission scheme, nothing more - and it is not the one used in most digital video systems.

The sample rate can be determined from the pixel format (in this case, 1920 x 1080), the frame or field rate (60 Hz, typically, in the U.S.), and the amount of overhead time required for horizontal and vertical blanking. If you didn't need ANY blanking time, then the minimum sample rate is simply

1920 pixels/line x 1080 lines/frame x 30 frames/sec (it's interlaced)

Note that if you work the units out as well, it comes out in terms of pixels/second, exactly as it should.
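That multiplication, as a quick Python check:

pixels_per_line, lines_per_frame, frames_per_sec = 1920, 1080, 30
# 62,208,000 pixels/sec minimum, before any blanking overhead is added
print(pixels_per_line * lines_per_frame * frames_per_sec)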

It can be; it simply doesn't HAVE to be, so there is no need to be dragging that question in at this point.

You've got that right - it's insane. Here's why:

OK, now I see what you mean by "color resolution" - you are talking about what's more commonly referred to as "color depth" or "dynamic range." 320 bits/pixel (presumably, something over 100 bits/color) is absurd; there is no display device that can provide this range, nor can the human eye deal with it. Somewhere around 10-12 bits/color, properly encoded, is about the maximum required, and most systems reduce the effective data rate by limiting the spatial resolution in the chroma channel (i.e., you don't really get as many bits PER PIXEL for color as you think you need).

Again, absurd numbers. The eye cannot resolve detail beyond a certain point (approx. 60 black/white cycles per visual degree is a good rule of thumb for the maximum), and anything above that is a waste. But this means that you can't just concern yourself with the number of pixels in the image - the image size as displayed to the viewer and the expected viewing distance also come into play.
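To make that concrete, a small Python sketch (the 60 cycles/degree limit is the rule of thumb above; the screen width and viewing distance are made-up example numbers):

import math

acuity = 60.0            # max resolvable black/white cycles per visual degree
width_m = 1.0            # hypothetical screen width, meters
distance_m = 3.0         # hypothetical viewing distance, meters

# horizontal field of view subtended by the screen, in degrees
fov_deg = 2 * math.degrees(math.atan(width_m / (2 * distance_m)))
# two pixels are needed to represent one black/white cycle
max_useful_pixels = 2 * acuity * fov_deg
print(round(fov_deg, 1), round(max_useful_pixels))   # ~18.9 deg -> ~2271 pixels

Anything beyond roughly that many pixels across, at that size and distance, the eye simply cannot resolve.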

Sample rate is never a system requirement, except in terms of a maximum permissible sample rate to fit within system bandwidth constraints. The sample rate REQUIRED for a given pixel format and frame rate is driven by those parameters, and then you just see whether or not it's going to fit in the available bandwidth.

As usual, it seems you haven't learned anything at all about the field you're trolling in before making up absurdities.

Bob M.

Reply to
Bob Myers

formatting link

I am well aware of that. Most digital video uses MPEG or some other form of compression -- not linear PCM. I don't know why.

Why isn't linear-PCM used in video?

Are you sure that isn't the bit rate? There is a world of difference between bit rate and sample rate.

For example, CD audio has a sample rate of 44,100 Hz but a bit rate of 1,411,200 bps.
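The CD figure works out like this (Python; the 16 bits and 2 channels are the Red Book values):

sample_rate = 44_100      # samples/sec per channel
bits_per_sample = 16
channels = 2
print(sample_rate * bits_per_sample * channels)   # 1,411,200 bits/sec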

Linear PCM doesn't have to be used, but what harm is caused by using it?

Reply to
Radium

Hi Radium!

Depends on the CD... not every audio Compact Disc is full 16-bit.

1-bit CD audio technology theoretically makes better use of the resolution actually present on the CD, processing only what is really there.

Best Regards,

Daniel Mandic

P.S.: But 1,411,200 bps can happen. ;-)

Reply to
Daniel Mandic

Daniel Mandic wrote:

What??? Then it isn't CD audio.

What????

Links please.

GG

Reply to
stratus46
