pci and caching

It's not clear to me why this needs to be resolved by the BIOS.

First, the fact that the card is placed at a high memory address is unrelated to caching. The BIOS puts things at high addresses because it's convenient to avoid collisions with things at low addresses (such as the 2GB of RAM you might have). But that doesn't make the area unworthy of caching.

Second, what kind of device would sit on the PCI bus that is simple enough to not need a device driver and yet requires caching to be turned off for that area?

The only bits in the PCI configuration space that go with the request for a range of memory are: prefetchable, type (2 bits identifying where it can be placed) and a memory versus I/O flag. That's all the BIOS has to work with.

Why would a device on the PCI bus not want to have its memory range cached? Because the memory can change by means other than the system CPU. For example, our cards have serial chips which have their internal registers mapped to PCI memory space. If the CPU writes to one of these, it can't be cached -- it needs to go right through to the memory/register immediately. Likewise, the CPU can't refer to its cache to get a value. The registers change all the time based on what's going on on the serial line. So any cache would instantaneously be "dirty".

The above card would be useless without a device driver. What kind of situation are you worried about needing to have the PCI device's memory range uncached that is simple enough to not need a device driver?

Steve

--------------------------------------------------------------------------------------------
Steve Schefter                          phone: +1 705 725 9999 x26
The Software Group Limited              fax:   +1 705 725 9666
642 Welham Road, Barrie, Ontario CANADA L4N 9A1
Web: formatting link
Reply to
steve_schefter

My question was actually an attempt to understand how the BIOS sets up caching and, in general, what devices on the PCI bus might get cached, and what controls whether they do. The answer so far seems to be that nobody knows.

Specifically, we often write test programs to run under DOS so we can test interfaces at maximum CPU speed, without a stupid OS gobbling up resources in millisecond chunks, and without having to write drivers for untested hardware (google "chicken, egg"). Since any PCI-compliant BIOS actually finds our interfaces and assigns memory resources, I was wondering what the caching situation is. Again, nobody seems to know.

Besides, a device isn't "useless without a device driver" as long as an application can get at its registers somehow. Could be more useful, actually.

John

Reply to
John Larkin

You can ask on comp.sys.ibm.pc.hardware.chips, but you'll get the same answer I gave you. PCI addresses are *not* cached. The memory configuration, including cacheability, is set up by BIOS (or perhaps the OS) using the MTRRs on the processor and north bridge.

If you want to test at the highest speed possible, I'd use a PCI initiator to do the work. Crossing the processor bus/PCI bridge takes time which varies between north bridges. Some don't allow much bandwidth from the CPU and you can't test PCI bursts at all.

Sure, as long as you accept the security/stability implications that go along with writing directly to system resources from user space.

--
  Keith
Reply to
Keith Williams

I suspect you're right about that.

Windows has such a wealth of security and stability holes, we can hardly make it much worse by accessing our own registers. I find Microsoft's warnings to be highly ironic in this respect.

John

Reply to
John Larkin

John, cacheability is controlled by the memory controller inside the processor. There is a bit in each page table entry that describes whether that 4k page may be cached. These are the same entries that will map the physical PCI addresses into your logical address space, so you were going to have to play with them anyway.

What version of Windows are you planning to use?

Find the DDK for that version and start tracking down the page table manipulation routines. Find a small sample driver that maps pages, and modify it to meet your needs.

- Tim.

Reply to
tbroberg

As a PCI device driver writer, I know. You just don't seem to want to believe me ;-)

Assigning resources (done by the BIOS) has nothing to do with caching. The device can be placed into I/O or memory according to the resources it asks for, but this is unrelated to whether the CPU will use its cache when accessing that memory range. The latter is under control of the device driver when it maps the range into virtual memory.

Which registers? The only registers that are generic to all cards are the PCI configuration space registers, and they are generally accessed via configuration space (different from memory and I/O space). In configuration space caching doesn't apply. The registers (or other parts of the card) which may be mapped into memory space are specific to that card and therefore raise the question of caching. They are different for every card design and therefore not useful without a device driver that understands what hardware is involved with the memory.

You can't decide whether caching can lead to data corruption (and therefore should be disabled for the range) unless you understand the hardware (ie, you're the device driver). As I pointed out earlier from the PCI spec, there are only 4 bits that go with the memory request and none of them indicate whether caching should be on or off. Since the device driver does the virtual mapping and it has to know the hardware well enough to know whether caching is appropriate or not, there is no point in putting it in configuration space.

Steve

Reply to
steve_schefter

Steve,

I did a handful of PCI designs (for use on PMC sites in VME SBCs). The SBC of course ran Linux or VxWorks or Integrity or whatever, but we also had a low-level monitor/debug environment roughly equivalent (OK, far superior!) to DOS BIOS.

Most of the designs used PLX chips. PLX puts the registers needed to configure their chips' peripherals in BAR 0 and BAR 1. I'd generally put my design's application-specific registers in BAR 2. On-board memory goes in another BAR. And so forth.

As the board booted, the monitor enumerated the PCI bus and assigned valid base addresses to all of the PCI devices. From its command line, you could do the equivalent of PEEK and POKE to the PLX peripheral control registers or to the board's custom registers. Standard memory-dump and memory-modify commands were available, too.

So, what John wants to do -- peek and poke registers, etc -- is reasonable.

As for caching -- that really depends on how the system controller chip is set up. I don't know to what extent the BIOS firmware sets up a PC's system controller; presumably, it does enough to find the boot device for a higher-level OS, to which it passes control. The OS then may set up the system controller in ways that depend on its needs. One presumes that the system controller driver honors the cacheable bit and doesn't cache a memory space declared as non-cacheable!

-a

Reply to
Andy Peters

OK, but I believe it is right to say that *only* prefetchable memory can (also) be cacheable?

Which is exactly why I was surprised that they (PCISIG) would even attempt to handle it! :O Thinking about it, I suppose if the bus doesn't specify it, then what other scope to do so is there?

OK, I stand corrected. Although I have worked on pre-2.2 designs, this was never an issue, so it was duly forgotten.

I've learnt something new today. Can I go home now? ;)

Regards, Mark

Reply to
Mark McDougall

Yes, but I would word it as "cacheable memory must be prefetchable but prefetchable memory may or may not be cacheable". Just to be clear.

Reply to
slebetman

This was kind of a tangent to the caching discussion. But sure, no issue with doing this. What you are doing, essentially, is putting the knowledge of the hardware/registers into the peek/poke operator instead of a device driver.

In order to get at the high addresses that the BIOS will put the PCI device at, I suspect that some sort of extended memory add-on will be required (haven't worked in DOS in ages). Whether it defaults to caching the memory or not will be dependent on that software. If I were writing it and had to pick just one, I'd disable caching for that memory range to ensure compatibility, though at the cost of performance.

Reasonable, although it depends on the OS. In the ones I've worked with, you can't take a default but have to indicate the characteristics (including cache/non-cache) that you need.

Also, assuming an OS/drivers that left the setup entirely to the BIOS, the implication would be that the entire PCI range would be non-cached since the BIOS has no way to know if caching will cause grief for any particular card.

Steve

Reply to
steve_schefter

Ok, I can't think of a case where one would want data to be cached but not prefetchable. I suppose I could come up with some weird case where data is changed on read but is needed several times. Of course one would simply copy the data into memory (or a register).

I suppose: cacheable => prefetchable, but prefetchable /=> cacheable (where "/=>" means "does not imply").

I've never seen it used either. The SBO# and SDONE signals are tied off in the designs I've seen/done (present, but unused). I only remembered it from a MindShare PCI class I took moons ago. The instructor didn't much like it either. ;-)

Got coffee in hand. It's just time to get started! ;-)

--
  Keith
Reply to
Keith Williams

PCI devices (peripherals) identify via Base Address Registers (BARs) the size and type(s) of address space (I/O, memory or prefetchable memory) that they need the BIOS and/or operating system (plug and play code) to map as physical addresses on the bus. These addresses are typically accessed by device drivers (but might also be accessed by other devices). The statement that...

"If a PCI memory space is marked as 'pre-fetchable' then it guarantees, among other things, that the act of pre-fetching memory has no side-effects."

... is the best summary statement of the significance of 'pre-fetchable' memory on PCI. It is used purely as a performance optimization that allows for use of burst read transactions (specifically MRL and MRM) when accessing that address region.

This has nothing to do with caching. While the original PCI specification envisioned the ability to have cacheable memory accessed via the PCI bus (and includes SBO# and SDONE to implement a cache coherency protocol for PCI), I am unaware of any system or design that ever implemented this. In subsequent specifications (i.e. PCI 2.2) it was made clear that this functionality was being demoted and marked for removal in future specification revisions.

Hope this helps.

TC

Reply to
TC

Thanks. The consensus seems to be that the BIOS allocates requested PCI bus memory resources but they are never cached. That seems to align with my experience.

John

Reply to
John Larkin

Well, we would all be using Apple Macs and Bill Gates would be poor. There would be no DOS, and no doubt all the Linux guys would then hate the original Mac OS. Then again, we could be really unlucky and all be stuck using OS/2... blah blah blah.

Reply to
The Real Andy

Note that these signals are related to cache within PCI bridges and targets that support cache. They aren't involved when the cache is within the CPU. So you still rely on software setting up the range of memory on the PCI device as non-cached (ie, non-CPU-cached) even as the above mechanism is deprecated.

I hadn't thought about caching within the bridges before you mentioned it in this thread. Ouch, I can see why it's deprecated. There's generally good OS support for a device driver to disable the cache within the CPU for the range of memory corresponding to the card, but it wouldn't be able to touch a cache in the bridge chip very easily. Posted writes to memory-mapped hardware registers are bad enough, but cached ones would be a killer in cases like our cards.

Steve

Reply to
steve_schefter
