PCI compliance ?


I have to design a board with a PCI interface which must be compliant with
a large range of PCI versions:
3.3V 32-bit / 33 MHz
5V 32-bit / 33 MHz
3.3V 64-bit / 66 MHz

The board should use a Xilinx V2P FPGA, so what bothers me is the 3.3V and 5V mix.
Is there a simple solution to achieve this?



Re: PCI compliance ?
Yes: use external I/O buffers for the bus signals and power them from the
V(I/O) rail of the PCI connector.

The only compliance problem might be the clock, since you must present a
single load on it.  Using a 5 V compliant zero-delay buffer, that should
be possible.

Re: PCI compliance ?
No.  We have had this discussion on this newsgroup on a regular basis.
The PCI standard states explicitly that there may be no discrete
components connected to the signals.
You cannot be fully compliant with 5V PCI using a V2P.

It is general consensus to ignore that rule, but you had better not use
the PCI logo, to avoid cease-and-desist letters from your competitors.

Kolja Sulima

Re: PCI compliance ?


What's their definition of a discrete component?  I suspect they are
talking about resistors etc.  Does it actually say that each signal must
go to a single chip?  If not, then I see no problem with signals going
to different (buffer) chips.

Re: PCI compliance ?
My understanding is that you can only load each line once, and that's it ...

Re: PCI compliance ?
Mike Harrison schrieb:

That is bad enough.  Why should a microswitch be a significantly lower
load on a signal than an FPGA or ASIC?  An IDT QuickSwitch can have up
to 7 pF capacitance.

"Section Signal Loading
Shared PCI signals must be limited to one load on the expansion board. [...]
It is specifically a violation of this specification for expansion
boards to:
* Attach an expansion ROM directly (or via bus transceivers) on any PCI pin.
* Attach two or more PCI devices on an expansion board [...]
* Attach any logic [...] that "snoops" PCI pins.
* Use PCI component sets that place more than one load on each PCI pin:
e.g. separate address and data path components.
* Use a PCI component that has more than 10 pF capacitance per pin.
* Attach any pull-up resistors or other discrete devices to the PCI
signals, unless they are placed *behind* a PCI-to-PCI bridge."

Kolja Sulimma

Re: PCI compliance ?

The PCI standard is written to create and protect ASSP and ASIC device
sales, and is specifically worded to prevent alternate solutions.

Regardless, FPGAs find their way into many PCI applications.


Kolja Sulimma wrote:

Re: PCI compliance ?
  Does Xilinx marketing pay you a bounty each time you make such
an absurd claim on comp.arch.fpga?

  The PCI specs are written to guarantee interoperability across
best/worst case device/backplane loading and topologies.

  That the required bus loading excludes FPGA vendors who haven't
improved their Cin specs since their first family of 20+ years ago is
certainly not the fault of the standards.


Re: PCI compliance ?

I am paid (in very small part) to watch this newsgroup, and comment.

As for 'absurd claim', it seems you have never served on a standards
body, as your comment has me laughing.

A 'standard' serves the interests of the companies that promoted it (as
well as providing a service to the industry).  There is active work by
any standards committee to exclude/disadvantage/hobble as many
competitors as possible (legally).

I used to call it "making every participant equally disadvantaged in
order to level the playing field as much as possible prior to approval."

A great example of this is when an ASIC is developed, and the company
that has it, promotes it as a standard.  In the process of getting it
approved, it is inevitable that the standard will require a respin of
the silicon in order that all vendors have a chance to participate.  The
original vendor must also respin their chip, as they are not "standard"
until they do so.

I have seen this multiple times in my 13 years of sitting on multiple
ANSI/ATIS/IEEE/IEC standards committees.

Since you don't know this, I suggest you go and volunteer to chair a
committee, and learn something about the real world.


Re: PCI compliance ?
 I already know plenty about the real world, thanks.

 Your suggestion that the bus loading specs of PCI are there
specifically to exclude FPGA vendors was, is, and remains, absurd.

  Almost as absurd as your previous posting tactic of repeatedly
claiming that you "meet all specs and standards" in spite of hard
facts to the contrary.


Re: PCI compliance ?

Well, we have agreed to disagree:  you have labeled me "absurd" and I
will consider you a "novice."

And please be so kind as to keep us apprised of all of the "facts."


Brian Davis wrote:



Re: PCI compliance ?
 There you go with the name calling again.

 I did not say YOU were absurd, I said your POST was absurd.

 Perhaps some day you will understand the difference.

 You can't handle the facts; when forced to face them, you resort
to childish name calling and tantrums.

 Oh, and I'm still waiting for the apology you promised in this
charming post you made in August '04 [1].

 Surely you remember, that's the thread where you attacked and
ridiculed me for pointing out that the S3 DCI overhead of
up to 2W per chip might be of concern in a small S3 design.

 Funny to read that thread again now, when you're claiming that
a 5W advantage in the biggest, baddest V4 is the greatest thing
since sliced bread.

 Strangely, when I pointed out exactly what you had omitted,
distorted, and misrepresented [2], you failed to cough up
an apology...


[1] http://groups.google.com/group/comp.arch.fpga/msg/dd96995737504055
[2] http://groups.google.com/group/comp.arch.fpga/msg/4a7fa8984b3395db

Back to Power?

DCI parallel power is a fact.  Multiply the number of DCI by the power,
and you get an answer.

So what?  Our static power is demonstrably less than other 90nm
"solutions", and our dynamic power is similar.  It amounts to 1 to 5
watts.  Could you program the DCI in parallel termination mode and eat
up the advantage?  Sure!  But then, I would not need hundreds of
resistors, either.  It did the job, with the same power, but with fewer
resistors.  You choose how to burn the power, you choose how to spend
the power advantage. Power advantage allows use of

Same design, less power.  Different design, different power.

The advantages are clear.  There is no apology required.


Re: Back to Power?

I have to weigh in on Austin's side on this one.  A parallel termination
is going to burn a set amount of power for a given impedance.  Nothing
you can do about it and still have that termination there.  Period.
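To put a rough number on that fixed termination power, here is a sketch.  The assumptions are mine, not figures from the thread: a 50 ohm Thevenin-equivalent parallel termination to VTT = VCCO/2, with the line spending equal time at each rail, so the average drop across the terminator is VCCO/2.

```python
def termination_power(vcco, r_term=50.0):
    """Average power (W) burned in one parallel terminator.

    Assumes a Thevenin-equivalent termination of r_term ohms to
    VTT = vcco / 2, with the line spending equal time at each rail,
    so the average voltage across the terminator is vcco / 2.
    """
    v_drop = vcco / 2.0
    return v_drop ** 2 / r_term

p_line = termination_power(2.5)                  # one 2.5 V line, 50 ohms
print(f"{p_line * 1000:.2f} mW per line")        # 31.25 mW per line
print(f"{64 * p_line:.1f} W for 64 lines")       # 2.0 W for 64 lines
```

With dozens of terminated lines, this lands squarely in the "burn it on-chip or in a resistor farm" trade-off described above, wherever the resistance physically lives.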

So the choice is do you dissipate that power on-chip to save board space
and parts count, or do you dissipate it in resistors on the board (that
if you have enough lines, frankly, may not fit and still be close enough
to the chip to do a lot of good).

On designs where we are going to be burning a lot of dynamic power, I
encourage my customers to not use the DCIs if they are concerned about
the temperature of the die, which can become a real concern with 400 MHz
clocks clocking a pretty full device.

The point is, the power required for the terminations is not something
that is variable, so it isn't fair to lump it in with the power
dissipation of the rest of the FPGA.  What Xilinx does with the DCI is
give you alternatives to the resistor farm surrounding devices with lots
of terminated lines coming into it.  Options are a good thing, and it
doesn't mean you have to do it that way.

DCI power variations

As usual, Austin's response was a diversionary tactic that didn't
actually address the DCI issues I raised in that old thread.

 The concern is not the parallel terminator power itself, but:

  - the barely documented 200 mW per bank DCI overhead

  - the barely documented 20% hit from FreezeDCI problems

  - the non-repeatable, config-to-config variations in static DCI power
    due to the random end state of the DCI control logic

 On my first DCI design several years ago, this totalled a couple amps
of undocumented static VCCO current.


Re: DCI power variations


OK, I wasn't aware of those issues.  So far, I haven't used DCI because
of die temperature concerns, so I haven't stumbled across the hidden issues.

Re: DCI power variations
 After last checking them in June '05, I summarized the poor state
of the DCI Power Answer Records over here:



Re: DCI power variations

Ah, finally we have some facts!



Each bank requires two reference resistors.  The original power in these
was not documented, nor included in the power estimator.


FreezeDCI solved a problem with the jitter introduced by amplitude shift
due to DCI.  Freezing it also stopped the reference resistor search,
which could (randomly) increase the ref resistor power (in V2).  This
has since been fixed in later families so that freeze is done better.


as above

8 banks times 200 mW = 1.6 watts.  At 1.5 volts that would be about one
ampere.  A little exaggeration here?  At 3.3 volts, that would be fewer
amperes.
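The arithmetic in that exchange can be checked directly; a sketch using the ~200 mW/bank overhead figure quoted earlier in this thread:

```python
BANKS = 8
P_PER_BANK = 0.200            # W, the per-bank DCI overhead quoted above

p_total = BANKS * P_PER_BANK  # 1.6 W total
for vcco in (1.5, 2.5, 3.3):
    # I = P / V for each candidate VCCO level
    print(f"VCCO = {vcco} V -> {p_total / vcco:.2f} A")
```

So 1.6 W works out to about 1.07 A at 1.5 V, 0.64 A at 2.5 V, and 0.48 A at 3.3 V: about one ampere at the low rail, fewer amperes at the high one, as both posters say.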

I can appreciate your being bitten by DCI in your application.  In its
first appearance there were some issues (all of which are mentioned
above) which led to some problems with specific applications (primarily
wide buses with extremely critical timing using HSTL or SSTL parallel
standards).  Standards which crossed a bank also had issues, as the
controllers were independent (one for each bank, not synchronized).

The latest family DCI is improved in these areas.  But, the power is
still there.


Re: DCI power variations
 IIRC, 2A extra per board, or about 400 mA extra per chip @2.5V VCCO,
for both bank overhead and parallel termination error, five 2V250's,
about 20 LVDS_25_DCI per chip.
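A quick sanity check on those quoted figures (using only the numbers in this post: five chips, ~400 mA extra each, 2.5 V VCCO):

```python
n_chips = 5
i_extra = 0.400      # A of extra VCCO current per chip, as quoted
vcco = 2.5           # V

i_board = n_chips * i_extra   # total extra current per board
p_chip = i_extra * vcco       # extra power per chip

print(f"board total: {i_board:.1f} A")   # 2.0 A
print(f"per chip:    {p_chip:.1f} W")    # 1.0 W
```

Five chips at 400 mA each does give the 2 A per board quoted, about a watt of undocumented static power per device.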

 Documentation thereof can be found where?

 Funny how my facts of yesteryear have become your facts of today.


  Freeze DCI has nothing to do with it.

  Using FreezeDCI in the V2 affects both the behavior and
  repeatability (config-config & part-part) of static DCI power
  consumption, both for per-bank overhead, and particularly
  for per-input parallel terminators

  Freezing it also stopped the reference resistor search,
  which could (randomly) increase the ref resistor power (in V2).


   As each bank has its own independent CCLK-type oscillator
 driving the tap adjustments, you end up with a random sampling
 of the possible DCI adjustment states for each bank.

  As I understood it, the newer devices having DCIUpdateMode
 were going to cleanly stop the DCI updates in all banks at a
 known state rather than randomly halting them as with FreezeDCI.

  Standards which crossed a bank also had issues, as the controllers
 were independent (one for each bank, not synchronized).

 This has since been fixed in later families so that freeze is done


  DCI updating is only an issue when you cross between two banks,
  and even then only with the parallel interfaces, where it adds some
  small amount of jitter.

   On the other hand, with FreezeDCI on, the resulting random
  DC offset for the parallel terminators will probably cause
  problems for the single ended standards with accurate
  terminator VTT requirements (whether in one or multiple banks).

   which led to some problems with specific applications (primarily
  wide buses with extremely critical timing using HSTL or SSTL
  parallel standards).



[Brian_2004] :

Yet Another Misleading Post from Austin, a Xilinx(R) Employee

 The subject was your repeated habit of posting the same
misleading, demonstrably false information about your I/O
performance in multiple threads across multiple years.

 I used that particular  "apologize or else"  thread as an
example of your outrageous newsgroup postings.

 An apology is long overdue.

 When I pointed out in detail exactly what 'facts' in your
attack posts were misleading, inaccurate, and just plain
wrong, you failed to deliver on the promised apology.

 If Xilinx management really thinks your over-the-top, attack-dog
postings are winning them any new customers, or improving their
credibility, they are far more out of touch than this "novice" has
ever been.

 For examples of how to properly respond to a question about
Xilinx PCI compliance, without invoking the Great Specification
Conspiracy Theory, Google Steven Knapp's and Eric Crabill's
old posts on PCI compliance.


 Plus the per bank overhead of ~200 mW/bank in V2 and S3

 Darn, you "forgot" that one again, didn't you?

 Maybe you can remind the S/W weenies to actually review the
outstanding three-year-old CRs before the next release.
