The only issue to watch out for is DC wander, although it shouldn't be a _huge_ issue at the speeds you mention - it will be at least a minor one, though. Capacitive coupling can get you around it, but then you have to constrain the data's run length, if that's even possible. Some driver/receiver pairs have adaptive DC wander correction built in.
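To make the run-length point concrete, here is a minimal sketch of Manchester encoding, one of the classic ways to bound run length and keep a capacitively coupled link DC-balanced. It's illustrative only - real driver/receiver pairs typically use denser codes such as 8b/10b - and the function names are my own:

```python
def manchester_encode(bits):
    """Encode each data bit as two half-bit line symbols
    (IEEE 802.3 convention: 0 -> high-low, 1 -> low-high)."""
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

def max_run(symbols):
    """Longest run of identical line symbols."""
    longest = run = 1
    for prev, cur in zip(symbols, symbols[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

# Even worst-case data (long runs of ones) ends up with a run
# length of at most 2 and exactly equal highs and lows, so the
# line has no DC component for the coupling caps to drift on.
encoded = manchester_encode([1, 1, 1, 1, 0, 0, 0, 0])
assert max_run(encoded) <= 2
assert sum(encoded) * 2 == len(encoded)
```

The price, of course, is that the line rate doubles, which is why higher-rate codes like 8b/10b (25% overhead) are preferred when bandwidth is tight.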
I've never used RS-422, but 640-800 Mbit/s at 20 metres seems excessive. What kind of video is this?
I would go with Ethernet. It keeps you entirely in the digital domain and allows processing on commodity hardware. A fibre option at 1 Gbit/s or 10 Gbit/s would eliminate the problems with copper, but there are twisted-pair options up to 10 Gbit/s too. 100 Gbit/s should be out before long.
Note that there are computers that already have the form factor of a flat panel, with the motherboard embedded and an integrated Ethernet port. Depending on the width of the bezel around the display, it is conceivable that you could simply arrange these in a matrix, add a few Ethernet switches and a high-end controlling PC (or PCs), and have a solution with no custom design.
I think you misread my post - each channel is only 10 Mbit/s. The incoming DVI video is split by the FPGA-based controller into 80 streams to feed the LED matrix panel controllers. Each stream carries a vertical strip of the image and feeds 6 panels, and each panel selects its one-of-six section of the feed to display. The data rate is set by the highest rate that can be read with minimal hardware on the panels (an LPC2103 via its SPI port, with clock gaps for block sync).
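The arithmetic of that split can be sanity-checked in a few lines (the rates and stream counts are from the posts above; the SPI clock figure is an assumed example, not a measured limit of the LPC2103):

```python
TOTAL_RATE = 800e6          # aggregate video rate, bit/s (from the thread)
STREAMS = 80                # one vertical strip per stream
PANELS_PER_STREAM = 6       # each panel picks its 1-of-6 section of the feed

per_stream = TOTAL_RATE / STREAMS
per_panel = per_stream / PANELS_PER_STREAM

print(f"{per_stream / 1e6:.0f} Mbit/s per stream")        # 10 Mbit/s
print(f"{per_panel / 1e6:.2f} Mbit/s payload per panel")  # ~1.67 Mbit/s

# An SPI slave clocked at, say, 12 MHz would have headroom for the
# 10 Mbit/s stream plus the clock gaps used for block sync.
assert per_stream <= 12e6
```

So each receiving micro only ever sees a 10 Mbit/s serial stream, of which a sixth is its own panel's data - which is what keeps the receive hardware minimal.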
Something I may not have made clear: the reason for 80 streams is the physical layout - a 3 × 20 metre wall - so it's not about moving the signal from A to B, but from a central controller to the various points on the 3 × 20 m array, while keeping the receiving ends as cheap and simple as possible.
Although the drivers have common-mode loops to deal with wander, at the end of a long cable there will inevitably be some remaining. I have used devices (some years ago - I'll have to dig out the schematics) that had wander correction in the receiver; this is _very_ common in Ethernet physical-layer devices, incidentally.
Wander is not usually a major issue over time, but it can be a source of interesting data errors if not catered for.
For a few US dollars per part, you can get not only the USB controller but an integrated micro with 2 KB or more of RAM, watchdog timers, D/A converters, and so on. You could use either Full Speed USB at 12 Mbit/s or Hi-Speed at 480 Mbit/s. You could operate the controllers in a star configuration: one star for one half of the display and another for the other half (you get 127 devices per host controller).
USB is bi-directional, of course, so your controller could easily be notified of faults such as burnt-out LEDs. A very attractive feature, however, is the per-node programmability: display controllability increases dramatically, because you can download code to each individual device. You've already settled on an FPGA, but had you used a commodity PC motherboard with a built-in DVI controller, you could have used that. The image-processing power of a PC is hard to beat for price/performance ratio (yes, I'm a PC w**re). Then of course there is the benefit of sending the image wirelessly: if, for example, the display is somewhere it is inconvenient to run cable, you could stream the images over Wi-Fi from a video source to the controlling PC, using point-to-point encryption. In an extreme situation, you could even have the video source directly address wireless USB chips for each panel section.
Total bandwidth is about 800 Mbit/s, which is why the feed is split 80 ways... USB low-level hardware as a simple point-to-point transport might be attractive from the receiver's point of view, but the transmit end is probably rather more involved...
We will have this ability via a side-channel data slot on the data from the controller and a low-bandwidth return path - possibly just a pollable acknowledge signal.
No, I couldn't. The only sensible way to stream 800 Mbit/s of anything out of a PC is via the DVI port, which is what we are doing to generate the image data; the DVI stream is then sliced into the multiple streams by the FPGA. You might be able to design a PCI card to do it, but this is a one-off project on a limited timescale, so it needs to be kept as simple as possible. I can buy an off-the-shelf FPGA card with DVI-in and the right memory, so all I need to add is a serial demux and drivers.
We don't need any processing - just distribution.
And since when could Wi-Fi manage 800 Mbit/s, sustained?
In the really early days of digital television (around 1974) I had to build a very similar system as a trainee at the BBC Research Department (Kingswood Warren).
It was not very successful. It used 74S TTL and differential line drivers on multi-pair telephone wire. This was fine. The problem was dispersion. One pair was used for each bit of the digital composite video signal (8 bits), and another pair for the clock.
This meant that when transmitting an image with black on one side and white on the other with a smooth transition in between, the most significant bit would only change slowly, while the least significant bit changed very fast. The propagation velocity of signals in telephone wire is frequency dependent. This meant that over the distances involved (I think it was 50m at a clock frequency of 13.6MHz) it was not possible to get the timing accurate enough and the result was speckles on the image.
Serialising overcomes this because the ratio between highest and lowest frequencies transmitted is much lower, even though the maximum frequency is much higher.
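A rough skew estimate shows why the parallel scheme was marginal. The cable length and clock rate are from the post above; the propagation velocities are assumed round numbers for illustration, not measured values for that telephone wire:

```python
LENGTH = 50.0               # metres (from the post)
F_CLOCK = 13.6e6            # Hz (from the post)
C = 3e8                     # free-space speed of light, m/s

bit_period = 1 / F_CLOCK    # ~73.5 ns

# Assume the slowly changing MSB pair effectively propagates at 0.55c
# and the fast-toggling LSB pair at 0.65c - plausible magnitudes for
# dispersion in unloaded telephone pairs, chosen only to illustrate.
t_slow = LENGTH / (0.55 * C)
t_fast = LENGTH / (0.65 * C)
skew = t_slow - t_fast      # inter-pair arrival difference

print(f"bit period {bit_period * 1e9:.1f} ns, "
      f"inter-pair skew {skew * 1e9:.1f} ns")
```

Even with these gentle assumptions the skew comes out at several tens of nanoseconds against a ~74 ns bit period, which is consistent with the timing failures and image speckle described above - and with why serialising onto one pair, where all the energy sees the same dispersion, fixes it.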