Hi, I am trying to implement a system for video segmentation on Xilinx FPGAs. The application needs a raw RGB stream from a video camera and processes it on the FPGA in real time. Since using 24 pins (8 bits for each color) is not a wise way to hook up a camera to an FPGA, I was wondering what is usually done in such cases. About the camera: I will get it from a company, and they promise they can provide any type of camera I will need.
24 I/O pins is not too much for most FPGAs that have the guts to do a reasonable job of processing video. What you don't want is direct connections to the camera. Many modern industrial cameras use the Camera Link standard, which provides high-speed LVDS signals on 5 twisted pairs. The receiving hardware is usually a National DS90CR288, which outputs parallel data, control signals and a clock that can be used by any FPGA. Newer FPGAs can probably take the high-speed serial data directly, but I would still suggest adding an external LVDS buffer to avoid blowing out the more expensive FPGA if you get ESD on the line. Check out the Pulnix, Dalsa and Cohu websites for some Camera Link examples and links to the Camera Link specification.
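To get a feel for the numbers involved, here is a quick back-of-envelope sketch of the Camera Link "base" configuration (4 LVDS data pairs plus 1 clock pair, with 7 bits serialized per pair per pixel clock, giving 28 bits per clock: 24 data bits plus the frame/line/data-valid controls). The 40 MHz pixel clock is just an example value, not a figure from this thread:

```python
# Camera Link base configuration, rough numbers.
# Assumed example pixel clock -- real cameras vary.
PIXEL_CLOCK_MHZ = 40
DATA_PAIRS = 4               # base config: 4 data pairs + 1 clock pair
BITS_PER_PAIR_PER_CLOCK = 7  # Channel Link 7:1 serialization

bits_per_clock = DATA_PAIRS * BITS_PER_PAIR_PER_CLOCK           # 28 bits/clock
lvds_bit_rate_mbps = PIXEL_CLOCK_MHZ * BITS_PER_PAIR_PER_CLOCK  # per LVDS pair
payload_mbytes_s = PIXEL_CLOCK_MHZ * 3                          # 24-bit RGB payload

print(bits_per_clock)      # 28
print(lvds_bit_rate_mbps)  # 280 Mbit/s on each data pair
print(payload_mbytes_s)    # 120 Mbyte/s of raw RGB into the FPGA
```

The 280 Mbit/s per-pair rate is why a dedicated deserializer like the DS90CR288 (or the FPGA's own SERDES) sits in front of the fabric: after it, everything runs at the much tamer pixel-clock rate on a 28-bit parallel bus.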
If you want to start with an analog RGB camera instead, there are some good ICs available for LCD panels that do a good job of digitizing the equivalent of PC video. The Analog Devices AD9888 has high performance, but will use more pins of the FPGA at the highest pixel rates. If you don't need really high-performance video (high pixel clock rate) I would suggest starting with a digital video camera instead.
Some older digital video cameras used parallel LVDS or RS422 and very fat cables to connect to a framegrabber. This method has mostly been replaced with Camera Link, and also USB or FireWire in lower performance cameras.
For something more like TV resolution you could get a very inexpensive NTSC, PAL or SECAM analog camera and digitize with something like the Philips SAA7111A.
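Decoders like the SAA7111A typically put out ITU-R BT.601-rate digital video (8-bit 4:2:2 YCbCr), so the data rate the FPGA has to absorb is easy to estimate:

```python
# Data rate out of a BT.601-style analog video decoder (8-bit 4:2:2 YCbCr).
Y_SAMPLE_RATE_MHZ = 13.5   # ITU-R BT.601 luma sampling rate

# In 4:2:2, Cb and Cr are each sampled at half the luma rate,
# so chroma adds another 13.5 Msamples/s in total.
total_msamples = Y_SAMPLE_RATE_MHZ * 2   # 27 Msamples/s
mbytes_per_s = total_msamples * 1        # 8 bits (1 byte) per sample

print(mbytes_per_s)   # 27.0 Mbyte/s
```

That 27 Mbyte/s is a very comfortable rate for even a modest FPGA, which is part of why a TV-resolution front end is a good place to start.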
Thank you guys! That is very helpful. I just want to ask one last question: after processing the input video stream, I get binary output that needs to be either displayed on a monitor or sent back to a computer for real-time supervision. Any recommendations on how to implement this?
Again this depends on the video resolution and update rates. If you have a high-speed data path to the computer, it's fairly easy to get the data on the computer monitor (just a small matter of software). In fact if you had something like a bus-mastering PCI interface you can write the image directly to the screen for most VGA displays. If your device is located away from the computer I'd look into just a standard network interface like Ethernet as long as it gives you the bandwidth you need.
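A binary (1 bit per pixel) segmentation mask is actually quite cheap to ship around, which works in your favor for the Ethernet option. A quick check, with the resolution and frame rate below as assumptions:

```python
# Bandwidth needed to stream a binary segmentation mask over a network.
# Resolution and frame rate are example assumptions, not from the thread.
WIDTH, HEIGHT = 640, 480
FPS = 30
BITS_PER_PIXEL = 1   # binary output: 1 bit per pixel

mbits_per_s = WIDTH * HEIGHT * BITS_PER_PIXEL * FPS / 1e6
print(mbits_per_s)   # 9.216 Mbit/s
```

So roughly 9.2 Mbit/s raw: that saturates old 10 Mbit Ethernet once you add framing overhead, but fits easily on a 100 Mbit link with plenty of headroom for higher resolutions or frame rates.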
For a local monitor (no computer involved) you can use a RAMDAC if you want to get fancy. I've used the ADV7160 / ADV7162 (Analog Devices) with good results on PC-style analog video monitors. Depending on image quality and pixel rate there may be much simpler and cheaper solutions for analog RGB video.
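The pixel rate you need to feed such a RAMDAC follows directly from the monitor timing. For the standard 640x480@60 mode (800x525 total, counting the blanking intervals):

```python
# Pixel clock needed to drive a PC-style analog monitor at 640x480@60.
H_TOTAL, V_TOTAL = 800, 525   # 640x480 active plus horizontal/vertical blanking
REFRESH_HZ = 60

pixel_clock_mhz = H_TOTAL * V_TOTAL * REFRESH_HZ / 1e6
print(pixel_clock_mhz)   # 25.2 MHz (the standard nominal clock is 25.175 MHz)
```

At 25 MHz a simple DAC or even a resistor-ladder DAC off FPGA pins can do the job, which is where the "much simpler and cheaper solutions" come in; a RAMDAC earns its keep at higher resolutions and refresh rates.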
If you want to use a TV-style monitor look into the Philips SAA7125 video encoder. It's small and relatively easy to use, but you will need to set a bunch of internal I2C registers to set it up.
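Setting up those internal registers is just the usual I2C subaddress-write pattern: START, device address with the write bit, register subaddress, data byte, STOP. A small sketch of the byte sequence follows; the address and register values in the example are placeholders, not actual SAA7125 settings, so check the datasheet for the real register map:

```python
def i2c_write_bytes(dev_addr7, subaddr, value):
    """Return the bytes placed on the I2C bus between START and STOP
    for a single-register write to a subaddressed device."""
    return [dev_addr7 << 1,   # 7-bit device address, R/W bit = 0 (write)
            subaddr,          # register subaddress inside the chip
            value]            # data byte for that register

# Example with made-up address/register/value:
print([hex(b) for b in i2c_write_bytes(0x44, 0x3A, 0x13)])
# ['0x88', '0x3a', '0x13']
```

On the FPGA side this means a small I2C master (two open-drain pins and a state machine) that walks a table of subaddress/value pairs at power-up; most of these encoder chips also support auto-incrementing the subaddress so a whole block of registers can be loaded in one transaction.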
Finally, if this is a one-off project (research?) I'd suggest re-creating standard camera video like what you take in and then using an off-the-shelf framegrabber card for the computer connection and video display.