High-speed differential to single-ended

Hi,

(This was posted in sci.electronics.design, but I'm reposting it here, as this might be a more fitting place).

What is the best way to convert a differential DVI signal to single-ended for use in an FPGA? (My FPGA does not support differential I/O.)

I was thinking of a high-speed op amp in unity gain, but I'm unsure of this as I'm not an analog circuit buff.

I also need to take the output ports of the FPGA and convert them to differential signals. What is the best way to do this?

Thanks, and apologies for the double post.

Reply to
vans

vans wrote:

You just can't do DVI with an FPGA without using a proper DVI receiver and transmitter.

Antti

Reply to
Antti

vans wrote:

First, there are dedicated differential-to-single-ended converter ICs. Second, they won't work at those DVI speeds (370 Mbit/s??).

Same procedure as above. Use dedicated transceiver ICs. But they must support the speed (which most ICs don't).

Regards Falk

Reply to
Falk Brunner

Why not?

I don't need to decode the TMDS data.

And, even if I did, the algorithm is pretty straightforward.

Antti wrote:

Reply to
vans

vans wrote:

So why would you feed the DVI into an FPGA?

Regards Falk

Reply to
Falk Brunner

I need to just rearrange pixels. I don't care at all about their values.

The 10-bit TMDS symbol contains 8 bits for pixel data and 2 for control.

I just need to rearrange pixels from different links, for a proprietary LCD display.

The pixel clock on DVI is 150 MHz max; any decent high-speed op amp can easily handle that.

Falk Brunner wrote:

Reply to
vans

vans wrote:

So go for the dedicated converters (DVI is LVDS, I guess). There are LVDS-to-CMOS converters.

Regards Falk

Reply to
Falk Brunner

Not really control but more like parity/encoding format bits. You probably know this but other people might be confused since the four links in an interface are R, G, B and "control".

This sounds vaguely like you just want to multiplex video streams. That would be simple, but anything else is probably more complicated than you are imagining.

But each pixel has to shoot ten bits through the wires, as you said above. That is 1.5 Gbps, or a 750 MHz worst-case frequency. Both your op amps and your FPGAs will have trouble coping with that.
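
Spelling that out (a rough sketch in Python, using the 150 MHz pixel clock and 10-bit symbols quoted in this thread):

# Back-of-the-envelope check of the per-pair DVI bit rate.
pixel_clock_hz  = 150e6   # pixel clock, as quoted above
bits_per_symbol = 10      # one 10-bit TMDS symbol per pixel per channel

serial_bit_rate = pixel_clock_hz * bits_per_symbol  # bits/s on ONE pair
worst_case_freq = serial_bit_rate / 2               # alternating 1/0 pattern

print(serial_bit_rate / 1e9, "Gbit/s per pair")     # 1.5
print(worst_case_freq / 1e6, "MHz worst case")      # 750.0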

-- Jecel

Reply to
Jecel

Falk Brunner wrote:

They can:

formatting link
Note that your signal loading is probably lower than the worst-case spec in the datasheet, so you can reliably overclock.

I do not know much about DVI. But if it has a DC-balanced encoding, you could also AC-couple to a single-ended input with a sufficiently low voltage swing (e.g. GTL). You only need a capacitor and a resistor per pin. Of course, you lose the noise immunity of LVDS in the process.
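
For illustration, the corner frequency of such an AC-coupling network is easy to estimate (the 100 nF and 50 ohm values below are hypothetical, not from this thread):

import math

c_farads = 100e-9   # hypothetical series coupling capacitor
r_ohms   = 50.0     # hypothetical termination resistor at the input

# High-pass -3 dB corner of the coupling network; a DC-balanced code has
# essentially no spectral content this low, which is why AC coupling works.
f_corner = 1.0 / (2 * math.pi * r_ohms * c_farads)
print(round(f_corner / 1e3, 1), "kHz")   # ~31.8 kHz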

Kolja Sulimma

Reply to
Kolja Sulimma

Please excuse an uninformed question:

HDTV has roughly 2 million pixels times 60 Hz = >120 million pixels per second. Each pixel uses 24 bits. That's roughly 3 gigabits per second of traffic.
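
Written out, that estimate is (a sketch; the figures are the rough ones above):

pixels_per_frame = 2e6     # roughly 2 million pixels per frame
frames_per_sec   = 60
bits_per_pixel   = 24      # 8 bits each for R, G, B

pixel_rate = pixels_per_frame * frames_per_sec   # ~1.2e8 pixels/s
traffic    = pixel_rate * bits_per_pixel         # ~2.9e9 bits/s
print(pixel_rate / 1e6, "Mpixel/s;", traffic / 1e9, "Gbit/s")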

How do you handle this? How many bits and device pins in parallel?

Peter Alfke, getting lost in the 3-letter acronyms. ===========================================

Reply to
Peter Alfke

SATA II does 3 Gb/s and PCIe II does 5 Gb/s, so with either one a single pair of wires should do the trick. Aren't the current low-voltage differential signalling standards great? PS: HDMI 1.2 supports 5 Gb/s over a single pair too, and that's probably what you will get for HDTV support.

Reply to
mk

Peter Alfke wrote:

formatting link
formatting link

Regards Falk

Reply to
Falk Brunner


One pin-pair per colour channel, sending 8->10-encoded data at up to 1650 Mbps. There's one shielding wire per pair of colour channels, and a differential clock also. I've no idea what the signalling protocols look like.

Some kinds of HDTV set (as opposed to computer monitor) encrypt the data with a stream cipher, though that only has to run at the pixel rate rather than the bit rate.

For the very high resolutions (2048x1536 and above), DVI goes to two pin-pairs per channel and so up to about 4Gbps.

Tom

Reply to
Thomas Womack

Peter Alfke wrote:

That's all I found so quickly.

Looks like that's all.

Right.

Regards Falk

Reply to
Falk Brunner

Hi,

Thanks for all the replies.

I don't understand how it's 1.5 Gbps. If we have a clock frequency of 150 MHz, we are serially sending one bit every 1 / 150 MHz = ~6.7 ns. So this means there is a delay of (6.7 ns x 10 = 67 ns) per pixel (because R, G, B are sent in parallel). Shouldn't this be 150 Mbps, and not 1.5 Gbps?

It seems the best course is to go with an LVDS receiver, like the ones recommended above, but my only worry is whether they can be fast enough.

Thanks

Reply to
vans


Hi All,

To the OP: Silicon Image manufactures ICs that receive a serial DVI stream and convert it to various other formats (656-like serial stream, 12-bit DDR pixel data + HSYNC + VSYNC + DE, etc.). That is probably what you need. Anyhow, DVI is transmitted over three serial links which are 8b10b encoded, so 150 MHz works out to (150 MHz * 3 links) / 10 bits = 45 MB/sec. This is not enough for HD speeds, but it is enough for some VESA-defined resolutions.

Regards, Ljubisa Bajic

Peter Alfke wrote:

Reply to
eternal_nan

This is getting ever more confusing.

I think we agreed already that there are roughly 2 million pixels x 60 Hz, roughly 150 million pixels per second.

There are three channels, one per color, and each pixel color is represented by 8 serial bits. Encoded with 8B/10B, this requires 10 bits. 150 million pixels times 10 bits = 1.5 gigabits per second, and this obviously happens three times, once per color. Inside the FPGA, the pixel color is represented as 8 bits in parallel, which gets us back to a manageable 150 MHz and a total of 24 channels.

The serial bit rate is too high (by a factor of 2) for general-purpose I/O; therefore, as I claimed, only dedicated inputs can handle such a high bit rate of 1.5 Gbps. But if you are willing to dedicate a total of 24 input pins (plus clocks), then the frequency is only 150 MHz, which any modern chip handles easily.

The total traffic is roughly 4 gigabits per second. There is no way around that, unless you use compression.
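
The same arithmetic in one place (a sketch; these are the approximate figures from this post):

pixel_rate   = 150e6   # ~150 million pixels per second
channels     = 3       # R, G, B
data_bits    = 8       # per color, before encoding
coded_bits   = 10      # per color, after 8B/10B

serial_per_channel = pixel_rate * coded_bits             # 1.5 Gbit/s on each pair
parallel_pins      = channels * data_bits                # 24 pins toggling at 150 MHz
payload            = pixel_rate * channels * data_bits   # ~3.6 Gbit/s of pixel data
on_the_wire        = serial_per_channel * channels       # ~4.5 Gbit/s including coding

print(serial_per_channel / 1e9, parallel_pins, payload / 1e9, on_the_wire / 1e9)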

Can we all agree on this? Peter Alfke, Xilinx Applications ================

Reply to
Peter Alfke

