pci-x133 to parallel pci-66

I would like to split a PCI-X 133 bus into two parallel PCI 66 busses. Has anyone done this? I'm not afraid to purchase the Xilinx PCI-X core and halfbridge IP, but I'm just looking for some wisdom.

|--------|
|        |
| bridge |
|        |
|--------|

chad.

Reply to
Chad Bearden

Hi,

Logically, what you described can be built with three PCI-X to PCI-X bridges.

You can take bridge #1 from PCI-X 133 to PCI-X 66. On that PCI-X 66 bus segment, you put bridge #2a and #2b, both of which bridge from PCI-X 66 to PCI 66. So, you can actually go buy three of these ASSPs and build exactly what you want.
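
Pictorially, the topology I have in mind looks like this (the bridge numbers are just the names I used above):

                     PCI-X 133 (host)
                            |
                      [ bridge #1 ]
                            |
                    PCI-X 66 segment
                     /             \
            [ bridge #2a ]   [ bridge #2b ]
                  |                 |
               PCI 66            PCI 66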

I wouldn't want to turn you away from a Xilinx solution. It could be a one-chip solution, offer lower latency, and provide you with the opportunity to customize your design in a way you cannot with ASSPs. However, you would want to carefully weigh the benefits against the downsides -- you will need to put in some design effort. Another thing to consider is cost, which will be a function of the size of your final design.

Good luck, Eric

Chad Bearden wrote:

Reply to
Eric Crabill

Followup to:
By author:    snipped-for-privacy@beardendesigns.com (Chad Bearden)
In newsgroup: comp.arch.fpga

If you're looking for an existing silicon solution I believe you could do it with two Tundra Tsi310 parts.

-hpa

--
 at work,  in private!
If you send me mail in HTML format I will assume it's spam.
"Unix gives you enough rope to shoot yourself in the foot."
Architectures needed: ia64 m68k mips64 ppc ppc64 s390 s390x sh v850 x86-64
Reply to
H. Peter Anvin

If you mean putting both Tundra 310 bridges on a single PCI-X 133 bus, I don't think that is electrically supported. As I understand it, you can only have one load on a PCI-X 133 bus. Please correct me if I have mis-stated your intention.

chad.

Reply to
Chad Bearden

Aren't you cutting your bandwidth in half? I would like the PCI 66 busses to be able to run at full speed when accessing the host's memory (on the primary side of the PCI-X 133 bridge #1). If you drop to 66 MHz here, my two secondary busses can each run at only half their bandwidth _if_ they are trying to access host memory at the _same_ time.
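
Rough numbers behind my concern (a Python back-of-the-envelope, assuming 64-bit busses everywhere and quoting theoretical peaks only):

    # theoretical peak bandwidth per segment, 64-bit busses assumed
    pcix_133 = 133e6 * 8      # host-side PCI-X 133 segment, ~1064 MB/s
    pcix_66  = 66e6 * 8       # intermediate PCI-X 66 segment, ~533 MB/s
    pci_66   = 66e6 * 8       # each secondary PCI 64/66 segment, ~533 MB/s

    # if both secondary busses hit host memory at the same time, all of
    # that traffic funnels through the single PCI-X 66 segment, so each
    # secondary bus is capped at roughly half its own peak
    per_bus_when_shared = pcix_66 / 2
    print(per_bus_when_shared / 1e6, "MB/s per secondary bus, theoretical")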

Reply to
Chad Bearden

Hi,

Perhaps I am a bit jaded, but I think you will never actually realize anything close to "full speed" using PCI. (PCI-X makes some protocol improvements.) Your statement assumes that both the data source and the data sink have infinitely sized buffers, that nobody uses retries with delayed read requests, and that you have huge (kilobytes at a time) bursts.

It depends -- are you talking about "theoretical" bandwidth, or bandwidth you are likely to achieve?

If you are designing under the assumption that you will achieve every last byte of 533 Mbytes/sec on a PCI64/66 bus, you will have some disappointment coming. :)

PCI and PCI-X are not busses that provide guaranteed bandwidth. I've seen bandwidth on a PCI 64/66 bus fall to 40 Mbytes/sec during certain operations because the devices on it were designed poorly (mostly for the reasons I stated in the first paragraph).
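
To put a rough number on the burst-size effect, here is a back-of-the-envelope Python sketch. The ten clocks of overhead per transaction is a number I picked purely for illustration (arbitration, address phase, target initial latency, turnaround), not a measured figure, and it ignores retries entirely -- real busses do worse once delayed-read retries show up:

    def effective_mbytes_per_sec(burst_bytes, overhead_clocks=10,
                                 clock_hz=66e6, bus_bytes=8):
        # one data phase per clock on a 64-bit bus, plus fixed overhead
        data_clocks = burst_bytes / bus_bytes
        efficiency = data_clocks / (data_clocks + overhead_clocks)
        return clock_hz * bus_bytes * efficiency / 1e6

    for burst in (64, 256, 4096):
        print(burst, "byte bursts:",
              round(effective_mbytes_per_sec(burst)), "MB/s")
    # 64-byte bursts come in under half of the 533 MB/s peak; you need
    # multi-kilobyte bursts to get anywhere close to it.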

While the point you raise is theoretically valid, you must consider that the bandwidth you achieve will be no greater than that of the weakest link in the path. What is the actual performance of the PCI-X 133 host? How about your PCI 66 components? The bridge performance may be moot.

An interesting experiment you could conduct would be to plug your PCI 66 component into a PCI 66 host, and see how close to "full speed" you can really get using a PCI/PCI-X protocol analyzer.

Then, you could buy two bridge demo boards from a bridge manufacturer (PLX/Hint comes to mind...) and see what you get behind two bridges, configured as I described.

I would certainly conduct this experiment as a way to justify the design time and expense of a custom bridge to myself or my manager. While I suspect you won't get half of "full speed" in either case, I am very often wrong. That's why I'm suggesting you try it out.

I'm not trying to discourage you from using a Xilinx solution. However, I'd prefer that potential customers make informed design decisions that result in the best combination of price/performance/features.

Good luck, Eric

Reply to
Eric Crabill

That's an understatement!

--
Rich Iachetta
I do not speak for IBM
Reply to
Richard Iachetta

The type of transaction can contribute as well. Just TRY streaming through (in -> memory -> out) two 1 Gb Ethernet ports at full rate with minimum-sized packets, using PCI or PCI-X based hardware.
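
Back-of-the-envelope in Python (ignoring framing and descriptor overhead, so these numbers are optimistic):

    forwarded = 125e6      # 1 Gb/s of forwarded traffic, in bytes/sec
    crossings = 2          # each byte is DMAed into memory, then back out
    one_way     = forwarded * crossings    # ~250 MB/s of bus traffic
    full_duplex = 2 * one_way              # ~500 MB/s with both directions
    print(one_way / 1e6, "MB/s one way,", full_duplex / 1e6, "MB/s full duplex")
    # full duplex already sits right at the ~533 MB/s PCI 64/66 peak, and
    # minimum-size packets mean tiny DMA bursts plus per-packet descriptor
    # traffic, so the sustainable rate falls far short of line rate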

A very good attitude; I wish more companies would give such advice.

--
Nicholas C. Weaver                                 nweaver@cs.berkeley.edu
Reply to
Nicholas C. Weaver

Hi Richard, could you be so kind as to share some details about the effort involved in the Xilinx PCI-X core solution to this problem? I have a similar problem, only all of my ports are PCI-X.

ThankX, NAHUM.

Reply to
Nahum Barnea
