Can anyone tell me what I need to drive a Camera Link output directly from a V4? I have tried LVCMOS25 and I can see differential signals at the outputs, but at the end of a 2-meter cable I see only DC differential levels, as if the signals are being attenuated somehow.
Lattice has a reference design available for this 7:1 source synchronous LVDS interface (also known as Channel Link, Flat Link, and Camera Link).
Lattice's 7:1 LVDS Video Interface Reference Design has been optimized for use with the LatticeECP2/M family of FPGAs. The reference design implements standard 7:1 LVDS interfaces using the LatticeECP2/M I/O structure. Transmit and receive interfaces are fully and efficiently implemented by specifically taking advantage of dedicated LVDS I/O, the generic DDR I/O interface, 2x gearing, and PLL clocking of edge and system clocks. The entire design has been tested using a 7:1 LVDS Display Demo system at speeds of 595MHz.
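The rates in these 7:1 interfaces follow directly from the serialization ratio. As a rough check (my arithmetic, not Lattice's figures: I am assuming the quoted 595MHz is the serial bit rate per data pair, and that 2x gearing means data is launched on both edges of an edge clock at half the serial rate):

```python
# Back-of-envelope arithmetic for a 7:1 LVDS (Camera Link style) link.
# Assumption: the quoted 595MHz is the serial bit rate per LVDS data
# pair; the clock pair toggles at the pixel rate.

def link_rates(serial_mbps: float, ratio: int = 7):
    """Return (pixel_clock_mhz, edge_clock_mhz) for a 7:1 serializer."""
    pixel_clock = serial_mbps / ratio   # 7 serial bits per pixel clock
    edge_clock = serial_mbps / 2        # 2x (DDR) gearing: bits on both edges
    return pixel_clock, edge_clock

pixel, edge = link_rates(595.0)
print(pixel)  # 85.0  -> pixel clock in MHz
print(edge)   # 297.5 -> fabric/edge clock in MHz
```

So a 595Mbps-per-pair link corresponds to an 85MHz pixel clock, with the geared fabric logic running at roughly 300MHz.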
Detailed Information about the Reference Design and source code is available here:
The V4 should have no trouble whatsoever driving a 2m Camera Link (3M MDR-type) cable. I currently have a design using a V2PRO30 that drives 5m of cable. The differential pins are driven by the module below.
I think my VHDL is similar. What is your FPGA editor showing when you go to those pins? Mine shows two pads, a master and a slave. When I push into these pads I don't see any options checked. Do you have additional stuff in your UCF file?
I see the same thing you do within the editor, master and slave pads. Both pads have an IO standard of LVDS_25. I have nothing else in my UCF file pertaining to these pins.
Are you sure your bank is being powered at 2.5V?
It is a good idea to explicitly call out IO standards in the UCF; I am a fan of determinism. The LVCMOS25 default for a differential pair *might* have to do with these signals being on low-capacitance pins (denoted "_LC_" in the pinout table), which do not support LVDS outputs. If you call out LVDS in the UCF, you *should* get an error if your signals are attached to that type of IO pin.
N.B. This is all conjecture, with the exception of the fact that the low capacitance pins do not support LVDS outputs.
Regards, Erik.
--
Erik Widding
President
Birger Engineering, Inc.
(mail) 100 Boylston St #1070; Boston, MA 02116
(voice) 617.695.9233
(fax) 617.695.9234
(web) http://www.birger.com
My problem is that I don't understand this cable and my lines are mirrored.
Thanks, though, for all your help. I am sure it will work when I connect the right signals.
Say, I was getting good results from a double edge clock input circuit and a single DCM generating 140 MHz (40MHz xclk). The trick was to select a shifted output depending on the fast clock.
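The double-edge input arithmetic works out cleanly (my own sanity check, not the poster's code): sampling a 140MHz clock on both edges gives 280 Msamples/s, which is exactly 7 bits per 40MHz pixel clock.

```python
# Sanity check on the double-edge input clocking described above.
# Assumed figures are taken from the post: 40MHz xclk, 140MHz DCM output.

XCLK_MHZ = 40.0        # Camera Link pixel clock
FAST_CLK_MHZ = 140.0   # single DCM output (3.5 x xclk)

samples_per_sec_mhz = FAST_CLK_MHZ * 2                 # DDR: both edges
bits_per_pixel_clock = samples_per_sec_mhz / XCLK_MHZ  # should be the 7:1 ratio
print(bits_per_pixel_clock)  # 7.0
```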
I was unable, however, to use a double-edge clock circuit on the output. The OSERDES does not have a 7x option, and when you try 8x you get 8 data bits per word no matter how you drive the CLKDIV input.
Hi Brad, (You can now spend all evening wondering where I know you from...)
I can't shed any light on your Virtex problems, but I am interested as to what leads you to bother with trying to do CameraLink with an FPGA, rather than just using the appropriate NatSemi ChannelLink chip (I'm too lazy to look up the number, but you know the one I mean.)
When everything about CameraLink is designed around those interface chips, it has always seemed to me like unnecessarily hard work to reimplement their behaviour elsewhere.
Is it cost, space or a sense of adventure which pushes you away from them in this design?
I know you addressed this to Brad, but my answer would be all of the above. I would counter your question with: if your design necessitates an FPGA, and that FPGA is capable of performing the deserialization, why would you use the National chips? It might be more involved than letting the National chips do it for you, but the big players in the FPGA market have prebuilt modules fitted to the most standard Camera Link/Channel Link type interfaces. Sure, if you have to build your own, it can get tricky with setup/hold times, jitter, skew, and clock phasing--but what's life w/o challenges?
You already need to use an entire IO tile for each output pair, as you are using a differential IO standard. As such you have two OSERDES available for each pair. While you could use them together to get DDR mode (as you have done), I would suggest using them cascaded in SDR x7 mode. The slowest speed grade of V4 will easily support a 280MHz clock and IO, and no additional logic is required to determine the pixel boundary. This will work up to the max clock rate of the DCM, which is (IIRC) about 400MHz, or about a 57MHz pixel clock.
Regards, Erik.
I would be willing to give this a try again if you tell me you have had good success with SDR. I believe when I tried SDR, I needed two DCMs to generate the high frequency and was having data corruption on receiving. I might have been doing something wrong.
Another vote for National Chips if the highest frequency at SDR is 57MHz. The Camera Link standard should go to 80MHz.
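To put a number on why 80MHz is out of reach for the SDR approach discussed above (my arithmetic; the figures come from the preceding posts):

```python
# At the 80MHz pixel clock cited above, the per-pair serial rate is
# 7x the pixel clock, well past the ~400MHz DCM ceiling quoted earlier.

PIXEL_CLK_MHZ = 80.0
serial_rate = PIXEL_CLK_MHZ * 7   # Mbps per LVDS data pair
dcm_limit = 400.0                 # approximate V4 DCM ceiling from the thread
print(serial_rate)                # 560.0
print(serial_rate > dcm_limit)    # True
```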
Because it's easier. Which means that I can do something else, which perhaps hasn't been done by someone else already...
Hmm. There's a hint of 'when did you stop beating your wife?' to that. Maybe I can parry by saying that I see the real engineering challenge as finding the best way to do things, which doesn't necessarily mean 'gratuitously difficult'. Though I'll concede I do like to go for at least a little bit of gratuitous difficulty on every new project.
If I built my own cameralink, and the camera mysteriously went out of sync by one pixel every three weeks, I'd really regret it!
Setting up the deserializer within an FPGA, if you can use the manufacturer's module (which you almost always can when it comes to camera/channel link), takes about 10 minutes--would you consider that easy? And if by chance you have to build your own, and you understand the hardware, it doesn't take that much longer. And in the end you have a tool in your box you can pull out for the next job (reuse = no time).
I do have a large quantity of white muscle t-shirts with food stains on them!!
The real benefit, at least in my world (image processing), is that I always have an FPGA on the board which can do the interface; and my company is always looking to cut board costs since they sell 10's and sometimes 100's of thousands of units.
I believe statistically speaking it is almost impossible for an FPGA to mysteriously go out of sync--there's always a logical reason. I work with a group that tests these ports by running massive amounts of continuous data through these interfaces--they all end up being solid. I have never run into a mysterious missing pixel.
Furthermore, with the FPGA you can automatically adjust for skew in the system. The National chips can't do this. And Brad does bring up another great point about pin count.
I've used the National chips at this frequency for some time now and have not found that I need to adjust skew, however we generally use fairly expensive cabling to keep the pair-to-pair skew low. I would think the pin count is the only good reason not to use the National chips. Also make sure you have adequate ESD protection. I generally don't like running external cables directly into expensive ball-grid arrays (doubly expensive to repair).