Why 64-bit PLB?

Can anyone tell me why the default width for the PLB in EDK is 64 bits when the PPC is a 32-bit processor? I have nothing in my design that is 64 bits wide. Am I wasting power and resources by using 64 bits?

Thanks, Clark

Reply to
Anonymous

One simple reason off the top of my head: the PowerPC might only have 32-bit registers internally, but its instruction and data caches fetch data in blocks of 128 bits at a time. So a 64-bit bus gives them twice the bandwidth to/from memory compared with a 32-bit bus.
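
The arithmetic behind that claim can be sketched quickly (a back-of-envelope illustration, not vendor timing data; it assumes one bus beat per cycle and ignores arbitration and handshake overhead):

```python
# Cycles (bus beats) needed to move one 128-bit cache fill block over
# buses of different widths. Idealized: one beat per clock, no overhead.

def beats(block_bits: int, bus_bits: int) -> int:
    """Number of bus beats needed to transfer block_bits."""
    return -(-block_bits // bus_bits)  # ceiling division

print(beats(128, 32))  # 4 beats on a 32-bit bus
print(beats(128, 64))  # 2 beats on a 64-bit bus, i.e. twice the bandwidth
```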

Cheers,

-Ben-

Reply to
Ben Jones

And here's where the beauty of FPGAs comes in. You can just try it and see if it is faster!

You're right, you could probably use a 32-bit bus. But perhaps 64-bit data ties up the PLB less. Or two cycles of 32-bit data cause some sort of funny stall in the PPC. Or maybe there isn't much difference in resource cost between 64 bits and 32 bits.

(And a 64-bit BRAM can be made from two 32-bit BRAMs.)

Alan Nishioka

Reply to
Alan Nishioka

Actually, it turns out you can't. I just realized that it is hard-coded to 64 bits. I'm going to try putting everything on the OPB side, so that the only thing on the PLB is the PLB2OPB bridge, to see if I save resources and power. Since the PLB, OPB, CPU, and my DDR SDRAM are all running at 100 MHz, I still don't see how a wide PLB would help.

The only way it would seem to be helpful is if the external memories ran faster than the PLB. For example, if my 32-bit memory ran at 200 MHz, the PLB would only have to run at 100 MHz to retain the bandwidth. Of course, the opposite scenario is more likely (slow external, fast internal).
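
That matching argument is just width times clock (an idealized peak-rate calculation, ignoring refresh and bus turnaround):

```python
# Peak bandwidth = bus width * clock rate, ignoring all protocol overhead.

def bandwidth_mb_s(width_bits: int, clock_mhz: float) -> float:
    """Idealized peak bandwidth in MB/s."""
    return width_bits / 8 * clock_mhz

# 32-bit memory at 200 MHz vs a 64-bit PLB at 100 MHz: same peak rate,
# so the wide bus lets the slower clock keep up.
print(bandwidth_mb_s(32, 200))  # 800.0 MB/s
print(bandwidth_mb_s(64, 100))  # 800.0 MB/s
```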

If I ran my CPU at twice the PLB frequency, the PLB bandwidth would match the CPU bandwidth, but this only works if my external memory bandwidth also matches; otherwise the only speed-up is when running from the on-chip cache, which doesn't use the PLB anyway.

There has to be a reason Xilinx forces 64 bits. Is it fetching data AND instructions in parallel?

Thanks, Clark

Reply to
Anonymous

Anonymous wrote:

There is exactly one, very simple reason:

IBM CoreConnect defines the PLB as a 64-bit bus, and Xilinx is bound to the CoreConnect standard. Simple as that.

The PPC405 hard macro's PLB data bus *IS* 64 bits.

Whether it makes sense to have a 64-bit bus when the maximum external memory width is only 16 bits is another question.

Antti

Reply to
Antti

Thinking about it some more, the reason it is 64 bits is that the PPC405 core from IBM has 64-bit PLB interfaces.

(Link: IBM's PPC405_Product_Overview_20060902.pdf)

The PPC has two PLB interfaces so it can fetch data and instructions at the same time, but you probably have them both hooked up to the same bus.

But this is probably not making much of a difference in size or speed anyway.

Alan Nishioka

Reply to
Alan Nishioka

Some other reasons not mentioned in this thread:

- these days, DDR2 memories with 64-bit interfaces are not uncommon, e.g. on the ML410 board. The bandwidth in that case is 1600 MBps, which is twice the PLB bandwidth.
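
Where that factor of two comes from (idealized peak rates, assuming the 100 MHz clock mentioned earlier in the thread): DDR transfers two beats per clock, while the PLB is single data rate.

```python
# 64-bit DDR2 @ 100 MHz: two transfers per clock (double data rate).
ddr2 = 64 // 8 * 100 * 2   # MB/s
# 64-bit PLB @ 100 MHz: one transfer per clock (single data rate).
plb = 64 // 8 * 100        # MB/s

print(ddr2, plb)  # 1600 vs 800 -> memory has twice the PLB's peak rate
```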

- the bus is used not just for transfers to/from the CPU, but also for DMA from a peripheral (e.g. Ethernet) to memory.

/Siva

Reply to
Siva Velusamy

But according to the documentation, the native hard PLB interfaces on the PPC core can be configured to operate in 32-bit mode, and they will adapt themselves to use just 32 bits (i.e. you don't need any external 64-to-32-bit mux). For instance, see the PLBC405DCUSSIZE1 parameter in the PowerPC 405 Processor Block Reference Guide.

-Jeff

Reply to
Jeff Cunningham

At the EDK level it looks to be hard-coded to 64. Do you know how I would specify 32, and/or confirm that it is 32?

Thanks, Clark

Reply to
Anonymous

Jeff,

I went down this path about 9 months ago. The IBM CoreConnect specification is definitely 64 bits wide AND it does allow for 32-bit bus implementations. However, doing so requires targets to respond on the correct word lanes, and unfortunately the bridges implemented by Xilinx do not allow this to happen.

At one time I found an application note at Xilinx that said the reason for this was so that they could just implement a wired-OR data bus between all slaves/masters. By that, I mean that all slaves drive an answer back on every cycle, and if any slave's bits are 1, they get passed that way to the core. I verified that the core signals allowing for non-64-bit operation are hard-coded by digging through all of the EDK VHDL source code for the PLB and OPB bridges.

I actually made a custom version of the PLB bridge that allowed 32-bit operation (I couldn't take the latency through the bridge)... unfortunately, since the bits of the buses could not be wired together, the routing got unruly very quickly. In the end, I decided the best approach was to attach 32-bit-and-narrower interfaces to the OPB and 64-bit buses to the PLB. Where I needed speed, I used a PLB slave and wasted the extra bits.
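
The wired-OR scheme described above can be sketched like this (my reading of the post, not Xilinx's actual RTL): every slave drives its read-data lines on every cycle, inactive slaves drive all zeros, and the bus simply ORs the lanes together on the way back to the core.

```python
# Sketch of an OR-combined read-data bus: no mux, just a bitwise OR of
# every slave's 64-bit output. Only the addressed slave drives non-zero
# data; all the others must drive zeros or they corrupt the result.
from functools import reduce

def wired_or_bus(slave_outputs: list[int]) -> int:
    """Combine per-slave read data by bitwise OR, as a plain mux-free bus."""
    return reduce(lambda acc, data: acc | data, slave_outputs, 0)

# Slave 1 is the addressed target; slaves 0 and 2 drive zeros.
result = wired_or_bus([0x0, 0xDEADBEEF_00000000, 0x0])
print(hex(result))
```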

Trevor

Reply to
Trevor Coolidge
