Yes, by using external I/O buffers for the bus signals and connecting them to the V(I/O) of the PCI bus.
The only compliance problem might be the clock, since you must present a single load on it. Using a 5 V compliant zero-delay buffer, that should be possible.
No. We have had this discussion on this newsgroup on a regular basis. The PCI standard states explicitly that there may be no discrete components connected to the signals. You cannot be fully compliant with 5V PCI using a V2P.
It is the general consensus to ignore that rule, but you had better not use the PCI logo, to avoid cease-and-desist letters from your competitors.
You can use bus switches to limit the voltage, allowing 5V operation. Technically they aren't allowed by the PCI spec, but we use them on our development boards, as do a number of our competitors, and we have never seen a problem.
John Adair Enterpoint Ltd. - Home of Raggedstone1. The Low Cost FPGA Development Board.
What's their definition of a discrete component? I suspect they are talking about resistors etc. Does it actually say that each signal must go to a single chip? If not, then I see no problem with signals going to different (buffer) chips.
That is bad enough. Why should a bus switch be a significantly lower load on a signal than an FPGA or ASIC? An IDT QuickSwitch can have up to 7 pF of capacitance.
"Section 4.4.3.4 Signal Loading: Shared PCI signals must be limited to one load on the expansion board. [...] It is specifically a violation of this specification for expansion boards to:
Attach an expansion ROM directly (or via bus transceivers) on any PCI pin.
Attach two or more PCI devices on an expansion board [...]
Attach any logic [...] that "snoops" PCI pins.
Use PCI component sets that place more than one load on each PCI pin: e.g. separate address and data path components.
Use a PCI component that has more than 10 pF capacitance per pin.
Attach any pull-up resistors or other discrete devices to the PCI signals, unless they are placed *behind* a PCI-to-PCI bridge."
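To make the loading rules quoted above concrete, here is a minimal sketch that checks a pin against two of them: one load per shared signal, and no more than 10 pF per pin. The component values are made-up examples (a 7 pF bus switch, as mentioned earlier in the thread, plus a hypothetical FPGA pin), not figures from the spec itself.

```python
# Illustrative check against two of the PCI 4.4.3.4 loading rules:
# a shared signal may carry only ONE load on the expansion board,
# and that component may present at most 10 pF on the pin.

PIN_CAP_LIMIT_PF = 10.0

def pin_is_compliant(loads_pf):
    """A pin passes only if it carries a single load of <= 10 pF."""
    return len(loads_pf) == 1 and loads_pf[0] <= PIN_CAP_LIMIT_PF

print(pin_is_compliant([7.0]))        # lone 7 pF bus switch: True
print(pin_is_compliant([7.0, 8.0]))   # switch PLUS an FPGA pin: False
```

Note that the second case fails on the one-load rule before the capacitance budget even comes into play, which is the crux of the argument about whether inserting a bus switch helps compliance at all.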
Does Xilinx marketing pay you a bounty each time you make such an absurd claim on comp.arch.fpga?
The PCI specs are written to guarantee interoperability across best/worst case device/backplane loading and topologies.
That the required bus loading excludes FPGA vendors who haven't improved their Cin specs since their first family of 20+ years ago is certainly not the fault of the standards.
I am paid (in very small part) to watch this newsgroup, and comment.
As for 'absurd claim', it seems you have never served on a standards body, as your comment has me laughing.
A 'standard' serves the interests of the companies that promoted it (as well as providing a service to the industry). There is active work by any standards committee to exclude/disadvantage/hobble as many competitors as possible (legally).
I used to call it "making every participant equally disadvantaged in order to level the playing field as much as possible prior to approval."
A great example of this is when an ASIC is developed, and the company that has it, promotes it as a standard. In the process of getting it approved, it is inevitable that the standard will require a respin of the silicon in order that all vendors have a chance to participate. The original vendor must also respin their chip, as they are not "standard" until they do so.
I have seen this multiple times in my 13 years of sitting on multiple ANSI/ATIS/IEEE/IEC standards committees.
Since you don't know this, I suggest you go and volunteer to chair a committee, and learn something about the real world.
I already know plenty about the real world, thanks.
Your suggestion that the bus loading specs of PCI are there specifically to exclude FPGA vendors was, is, and remains, absurd.
Almost as absurd as your previous posting tactic of repeatedly claiming that you "meet all specs and standards" in spite of hard facts to the contrary.
I did not say YOU were absurd, I said your POST was absurd.
Perhaps some day you will understand the difference.
You can't handle the facts; when forced to face them, you resort to childish name calling and tantrums.
Oh, and I'm still waiting for the apology you promised in this charming post you made in August '04, where you wrote [1]:
Surely you remember, that's the thread where you attacked and ridiculed me for pointing out that the S3 DCI overhead of up to 2W per chip might be of concern in a small S3 design.
Funny to read that thread again now, when you're claiming that a 5W advantage in the biggest, baddest V4 is the greatest thing since sliced bread.
Strangely, when I pointed out exactly what you had omitted, distorted, and mis-represented [2], you failed to cough up an apology...
DCI parallel termination power is a fact. Multiply the number of DCI pins by the power per pin, and you get an answer.
So what? Our static power is demonstrably less than other 90nm "solutions", and our dynamic power is similar. It amounts to 1 to 5 watts. Could you program the DCI in parallel termination mode and eat up the advantage? Sure! But then I would not need hundreds of resistors, either. It did the job, with the same power, but with fewer resistors. You choose how to burn the power; you choose how to spend the power advantage. Power advantage allows use of
Same design, less power. Different design, different power.
The advantages are clear. There is no apology required.
I have to weigh in on Austin's side on this one. A parallel termination is going to burn a set amount of power for a given impedance. Nothing you can do about it and still have that termination there. Period.
So the choice is: do you dissipate that power on-chip to save board space and parts count, or do you dissipate it in resistors on the board (which, if you have enough lines, frankly may not fit and still be close enough to the chip to do much good)?
On designs where we are going to be burning a lot of dynamic power, I encourage my customers to not use the DCIs if they are concerned about the temperature of the die, which can become a real concern with 400 MHz clocks clocking a pretty full device.
The point is, the power required for the terminations is not something that is variable, so it isn't fair to lump it in with the power dissipation of the rest of the FPGA. What Xilinx does with the DCI is give you an alternative to the resistor farm surrounding a device with lots of terminated lines coming into it. Options are a good thing, and it doesn't mean you have to do it that way.
The subject was your repeated habit of posting the same misleading, demonstrably false information about your I/O performance in multiple threads across multiple years.
I used that particular "apologize or else" thread as an example of your outrageous newsgroup postings.
An apology is long overdue.
When I pointed out in detail exactly which 'facts' in your attack posts were misleading, inaccurate, and just plain wrong, you failed to deliver on the promised apology.
If Xilinx management really thinks your over-the-top, attack-dog postings are winning them any new customers, or improving their credibility, they are far more out of touch than this "novice" has ever been.
For examples of how to properly respond to a question about Xilinx PCI compliance, without invoking the Great Specification Conspiracy Theory, Google Steven Knapp's and Eric Crabill's old posts on PCI compliance.
Brian
p.s.
Plus the per bank overhead of ~200 mW/bank in V2 and S3
Darn, you "forgot" that one again, didn't you?
Maybe you can remind the S/W weenies to actually review the outstanding three year old CR's before the next release.
Austin's posts aren't typically "over-the-top, attack-dog postings." When his buttons get pushed he tends to react - a human trait. You're just more effective at pushing his buttons than most. His beliefs may be skewed by what he's come to believe - another human trait - resulting in declarations of fact rather than "let's ponder this more with this new information I'm giving you." Along those lines, I love the riddle that leaves most Americans perplexed but non-Americans laughing: "Q: What does an American do with a question? A: He answers it."
I personally try to see the multiple sides of an issue and understand where someone else might be getting their (mis/pre)conceptions from. But not everyone takes that generic perspective. There are things I know, darnit, and those who don't know what I know are wrong when they say otherwise.
Conspiracy theory aside, have you been involved with standards development? I haven't but I've seen some rather strange stuff over the years with my exposure to Telecom standards in addition to the usual electrical stuff like PCI. I don't doubt that there are compromises made that favor existing silicon because the owners of that silicon (or those transmission systems) want to revamp less of their technology. A good compromise is reached when nobody's happy.
Each bank requires two reference resistors. The original power in these was not documented, nor included in the power estimator.
FreezeDCI solved a problem with the jitter introduced by amplitude shift due to DCI. Freezing it also stopped the reference resistor search, which could (randomly) increase the ref resistor power (in V2). This has since been fixed in later families so that freeze is done better.
as above
8 banks, times 200 mW, is 1.6 watts. At 1.5 volts that would be about one ampere. A little exaggeration here? At 3.3 volts, it would be fewer amperes?
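The arithmetic in that jab checks out as stated. A quick sketch, taking the ~200 mW/bank overhead quoted earlier in the thread at face value and assuming an 8-bank part (the bank count is an assumption, not a figure from the thread):

```python
# Sanity-check of the quoted per-bank DCI reference-resistor overhead.
# 200 mW/bank and 8 banks are the numbers being argued about above;
# the rail voltages show how watts translate into amperes.

banks = 8
per_bank_mw = 200.0

total_w = banks * per_bank_mw / 1000.0   # 1.6 W total overhead
amps_at_1v5 = total_w / 1.5              # ~1.07 A on a 1.5 V rail
amps_at_3v3 = total_w / 3.3              # ~0.48 A on a 3.3 V rail

print(f"{total_w:.1f} W -> {amps_at_1v5:.2f} A @ 1.5 V, "
      f"{amps_at_3v3:.2f} A @ 3.3 V")
```

Which rail actually supplies the reference resistors depends on the I/O standard of the bank, hence the sarcasm about 1.5 V versus 3.3 V amperes.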
I can appreciate your being bitten by DCI in your application. In its first appearance there were some issues (all of which mentioned above) which led to some problems with specific applications (primarily wide buses with extremely critical timing using HSTL or SSTL parallel standards). Standards which crossed a bank also had issues, as the controllers were independent (one for each bank, not synchronized).
The latest family DCI is improved in these areas. But, the power is still there.