L1 and L2 Cache

Hello all,

I would like to know the difference between L1 and L2 Cache Memories.

With regards,
Ranga

Reply to
Ranga

Logically, one is merely faster than the other (usually), and the slower ones cost less, allowing the designer to use more of them.

Physically, it very much depends on the system you are talking about. Some systems have all their cache on separate chips (old!), some have some on chip and L2 off chip. Some have more than one level of cache on the same chip.

Some systems have L3 cache even.

There are also write buffers, mini-caches, etc.

Much of the thinking in modern UNIX VM systems is that RAM is just another layer of cache between the storage (HDD/CD/FLASH/etc) and the CPU.

Sorry for the rambly reply but your question is a bit broad...

--
Spyros lair: http://www.mnementh.co.uk/   ||||   Maintainer: arm26 linux

Do not meddle in the affairs of Dragons, for you are tasty and good with ketchup.
Reply to
Ian Molton

There is no conceptual difference, really. The only thing necessary to turn an L1 cache into an L2 cache is to install another L1 cache between it and the thing it's caching memory contents for.
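To make that concrete, here is a toy sketch in C of the idea: a cache level is just a cache whose backing store happens to be another cache rather than memory. The line size, line count and direct-mapped lookup are invented for illustration and don't model any real hardware.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define LINE_SIZE 16
#define NUM_LINES 4

/* One level of a toy direct-mapped, read-only cache. Its backing store
 * is either another cache_level (making this an "L1" in front of an
 * "L2") or, when next == NULL, the flat memory[] array below. */
struct cache_level {
    struct cache_level *next;
    uint32_t tags[NUM_LINES];
    uint8_t  valid[NUM_LINES];
    uint8_t  data[NUM_LINES][LINE_SIZE];
};

static uint8_t memory[1024];

static void fetch_line(struct cache_level *c, uint32_t addr, uint8_t *out)
{
    uint32_t base = addr & ~(uint32_t)(LINE_SIZE - 1);
    uint32_t idx  = (base / LINE_SIZE) % NUM_LINES;

    if (c == NULL) {                            /* bottom of the hierarchy: RAM */
        memcpy(out, &memory[base], LINE_SIZE);
        return;
    }
    if (!c->valid[idx] || c->tags[idx] != base) {  /* miss: ask the next level */
        fetch_line(c->next, base, c->data[idx]);
        c->tags[idx]  = base;
        c->valid[idx] = 1;
    }
    memcpy(out, c->data[idx], LINE_SIZE);          /* hit (or freshly filled) */
}

static uint8_t read_byte(struct cache_level *c, uint32_t addr)
{
    uint8_t line[LINE_SIZE];
    fetch_line(c, addr, line);
    return line[addr & (LINE_SIZE - 1)];
}

int main(void)
{
    struct cache_level l2 = { .next = NULL };
    struct cache_level l1 = { .next = &l2 };   /* install an L1 in front of the L2 */

    memory[100] = 42;
    printf("%d\n", read_byte(&l1, 100));       /* misses in L1 and L2, fills both */
    printf("%d\n", read_byte(&l1, 100));       /* hits in L1, never touches L2 */
    return 0;
}

The first read misses in both levels and pulls the line up from memory[]; the second is satisfied by the front cache alone, which is all the "L1 vs L2" distinction amounts to here.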

--
Hans-Bernhard Broeker (broeker@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
Reply to
Hans-Bernhard Broeker

L1 cache is inside the processor chip. It typically has "look ahead" circuitry so it can anticipate what the processor will need, and consequently L1 cache is usually very, very fast. For best performance, you want as much L1 cache as possible. Byte for byte, L1 cache is the more expensive, and the more valuable, of the two types of cache.

L2 cache is located in chip(s) separate from the processor. It does not have as intimate a connection to the processor, so it cannot anticipate what the processor will need. L2 cache's virtue is that designers of a board for a particular application can choose how much of it to include, and what level of performance to pay for in the chips used for that purpose. (With L1 cache, the amount is the chip designer's decision.)

Hope that helps!
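One crude way to see the split for yourself (the exact sizes, timings and the 64-byte stride below are assumptions; every processor will give different numbers) is to sweep buffers of increasing size and time the accesses. The cost per access typically steps up once the buffer stops fitting in L1, and again once it outgrows L2.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Walk a buffer of 'size' bytes repeatedly with a cache-line-ish stride
 * and report a rough time per access. Illustrative only. */
static double probe(size_t size)
{
    volatile unsigned char *buf = malloc(size);
    const size_t stride = 64;              /* assumed line-sized stride */
    size_t i;
    long accesses = 0;
    int pass;
    clock_t t0, t1;
    double ns;

    if (buf == NULL)
        return 0.0;

    for (i = 0; i < size; i++)             /* touch every byte once */
        buf[i] = (unsigned char)i;

    t0 = clock();
    for (pass = 0; pass < 200; pass++)
        for (i = 0; i < size; i += stride) {
            buf[i]++;
            accesses++;
        }
    t1 = clock();
    ns = 1e9 * (double)(t1 - t0) / CLOCKS_PER_SEC / (double)accesses;

    free((void *)buf);
    return ns;
}

int main(void)
{
    size_t kb;
    for (kb = 4; kb <= 4096; kb *= 2)
        printf("%6lu KB: %.2f ns/access\n",
               (unsigned long)kb, probe(kb * 1024));
    return 0;
}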

-- Ed Skinner, snipped-for-privacy@REMOVETHIS.rytetyme.com,


Reply to
Ed Skinner

I'm an embedded guy who has never had to deal intimately with anything other than a system with Level 1 cache (e.g. PPC 860). I'm curious about the cache line granularity of Level 2 & 3 caches. Are they always the same as L1?

I have needed to manually flush and invalidate caches for DMA controller drivers when a processor and DMA controllers do not snoop the memory bus. This means that the driver must be aware of the cache-line size, thus affecting the portability of the drivers between processors. If L2 and L3 cache lines are of a larger size, then these drivers become dependent on board specific L2/L3 implementations as well.
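For what it's worth, the pattern I mean looks roughly like the sketch below. cache_flush_range(), cache_invalidate_range() and CACHE_LINE_SIZE are placeholders for whatever the particular BSP/CPU support code provides, not a real API; the point is only that the driver has to round the buffer out to whole cache lines, and that the line size (and whether it must also cover any L2/L3 lines) is target-specific.

#include <stddef.h>
#include <stdint.h>

#ifndef CACHE_LINE_SIZE
#define CACHE_LINE_SIZE 32   /* board-specific; must cover L1 *and* any L2/L3 */
#endif

/* Provided by the BSP/HAL on the target in question (hypothetical names). */
extern void cache_flush_range(uintptr_t start, size_t len);       /* write back */
extern void cache_invalidate_range(uintptr_t start, size_t len);  /* discard    */

/* Before a DMA device reads from 'buf': push dirty lines out to RAM. */
static void dma_prepare_to_device(const void *buf, size_t len)
{
    uintptr_t start = (uintptr_t)buf & ~(uintptr_t)(CACHE_LINE_SIZE - 1);
    uintptr_t end   = ((uintptr_t)buf + len + CACHE_LINE_SIZE - 1)
                      & ~(uintptr_t)(CACHE_LINE_SIZE - 1);
    cache_flush_range(start, end - start);
}

/* After a DMA device has written into 'buf': throw away stale lines so
 * the CPU re-reads the new data from RAM. */
static void dma_complete_from_device(void *buf, size_t len)
{
    uintptr_t start = (uintptr_t)buf & ~(uintptr_t)(CACHE_LINE_SIZE - 1);
    uintptr_t end   = ((uintptr_t)buf + len + CACHE_LINE_SIZE - 1)
                      & ~(uintptr_t)(CACHE_LINE_SIZE - 1);
    cache_invalidate_range(start, end - start);
}

If an outer cache level uses a larger line than L1, the rounding above has to use that larger value, which is exactly the board-specific dependency I'm worried about.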

--
Michael N. Moran           (h) 770 516 7918
5009 Old Field Ct.         (c) 678 521 5460
Reply to
Michael N. Moran

Caches don't look ahead; CPUs do that.

Also, L2 cache doesn't have to be off-chip.

--
Spyros lair: http://www.mnementh.co.uk/   ||||   Maintainer: arm26 linux

Do not meddle in the affairs of Dragons, for you are tasty and good with ketchup.
Reply to
Ian Molton

That wasn't too precise either, was it? ;-)

CPU *cores* do... :-)

--
Spyros lair: http://www.mnementh.co.uk/   ||||   Maintainer: arm26 linux

Do not meddle in the affairs of Dragons, for you are tasty and good with ketchup.
Reply to
Ian Molton

If you add "current usage" to that, I'd agree, but only in part. What does the look-ahead is circuitry that watches the instruction stream (addresses and opcodes) and attempts to guess what will be needed in the future. This is different from the circuitry that computes 2+2=4. It is also different from the scoreboarding circuitry that keeps track of where, for example, register R5 is now across several parallel execution units spread across the processor die, and different again from the circuitry that coordinates the handling of an interrupt from a device when there are several processor chips in a system, such that only one of those processors "suffers" (and handles) the interrupt. "Central" as in "CPU" doesn't really apply to these configurations. I contend that "CPU" is an archaic term that does not apply to the majority of 32-bit microprocessors today. (It still applies to most 8- and 16-bit machines, however.)
Reply to
Ed Skinner

Each chip vendor is trying to beat the competition and come out with something faster and cheaper than anyone else. The only rule is: beat the other guy. The only reason a byte is eight bits wide is common usage over many years. (I worked on machines where a "byte" was 7 bits wide, others where it was 5, and two where it was 9 bits wide.)

There is no rule for cache line granularity other than what someone thinks will beat the competition.
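Which means, purely as an illustration (the three line sizes below are invented, not taken from any real part), that code which has to round buffers out to "the" cache line ends up using the largest line size any level on the target happens to use:

#include <stdio.h>
#include <stddef.h>

/* Made-up, board-specific line sizes; nothing guarantees the levels match. */
#define L1_LINE 32
#define L2_LINE 64
#define L3_LINE 128

#define MAX_LINE ((L1_LINE > L2_LINE ? L1_LINE : L2_LINE) > L3_LINE ? \
                  (L1_LINE > L2_LINE ? L1_LINE : L2_LINE) : L3_LINE)

/* Round a buffer size up to a whole number of (largest) cache lines. */
static size_t round_up_to_line(size_t n)
{
    return (n + MAX_LINE - 1) & ~(size_t)(MAX_LINE - 1);
}

int main(void)
{
    printf("a 100-byte DMA buffer must occupy %lu bytes\n",
           (unsigned long)round_up_to_line(100));
    return 0;
}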

Reply to
Ed Skinner

You may have a point there to some extent, what with superscalar machines nowadays, but ARMs aren't superscalar, for example. They have a clearly defined ALU, etc.

It's like nailing shit to a wall, isn't it? ;-)

--
Spyros lair: http://www.mnementh.co.uk/   ||||   Maintainer: arm26 linux

Do not meddle in the affairs of Dragons, for you are tasty and good with ketchup.
Reply to
Ian Molton
