Memory management

Hi all

I would appreciate it if someone could answer some questions about OMAP MMU usage. Probably anyone with knowledge of any chip with a memory management unit could answer, since these are very basic questions...

TI's OMAP requires the MMU to be enabled if you want to use the data cache, but I don't understand why... Is this required whatever the chip is?

It seems to me that the MMU manages logical memory addresses and memory protection, but I don't see any relationship between these MMU tasks and the data cache, probably due to my inexperience...

Another important question for me is how much RAM the MMU needs to perform logical-to-physical address translation. Is this amount of RAM a fixed value? If I just want to use the data cache but don't care about address translation or memory protection, do I still need some RAM? How much?

Thanks to all

Daniel

Reply to
Dani

While not all CPUs may require it, it's a sensible design decision. Caching is, to some extent, an MMU function --- it's a layer of functionality sitting between the CPU core and external memory, and it's supposed to be mostly transparent to the running program. If both caching and other MMU functions are present, they have to be integrated so thoroughly that it would cause major hassle for little gain to allow them to be turned on separately.

Consider this: the difference between a memory protection violation and a cache miss is not all that large. In both cases, a simple memory access may cause extensive extra activity, if the exceptional case happens.

--
Hans-Bernhard Broeker (broeker@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
Reply to
Hans-Bernhard Broeker

Thanks for your comments, Hans.

The real problem for me is that I would like to use the data cache in the ARM (I hope performance will be better), but it requires the MMU to be enabled, and the MMU requires some RAM to work. RAM is very limited in the system, so this is not an easy task (we need to assign some RAM to the MMU, but there is not that amount of free RAM in the system).

I understand caching and memory protection are similar in some respects, but I would like to understand whether it is necessary to reserve a part of RAM for the MMU if I use neither virtual addressing nor memory protection. If so, I would also like to know whether this method has any disadvantage I cannot see (it probably does, because it seems to me this is not the usual way of working...)

Thanks for helping this newbie guy

Daniel

"Hans-Bernhard Broeker" escribió en el mensaje news: snipped-for-privacy@news.dfncis.de...

Reply to
Dani

[Thanks for not top-posting with an unedited fullquote below, next time.]

Yes, because of the way MMUs usually work (I don't really know the ARM's in detail), there is no such thing as "I don't use" virtual addressing or "I don't use" memory protection --- these are implemented instead as "physical address == virtual address" and "all accesses to all addresses allowed", and it takes MMU descriptor table memory to set up even these configurations. TANSTAAFL.

--
Hans-Bernhard Broeker (broeker@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
Reply to
Hans-Bernhard Broeker


For one thing, you've got the MMU you've got: too late to ask "why?" :)

Now, if you know (from the nature of your app and/or from profiling) what your time-critical data and/or functions are, you may want to disable the cache and move those data or functions into a hand-made "cache" yourself. For const data and functions, you only need to copy in one direction. Many compilers/linkers support the trick of compiling a function to be executed from a location different from where it is stored. The problem is that those functions are copied to your RAM statically, during runtime environment initialization, just as non-const initialized data are. To make the copying happen when you want, you may need to fiddle with linker settings and extract the functions' boundaries from the linker. There are a few ways of doing this, depending on the toolchain.

Finally, ARM code is almost always binary-relocatable (the only exceptions are B and BL instructions to places outside the relocated area), so if you call other functions via pointers passed to your function, you can relocate your function anywhere.

AFAIK, yes, for the ARM MMU it is.

If you enable the cache, you sacrifice some RAM for the MMU tables, and the manual scheme above may turn out to be better. You may also need to fiddle with the MMU configuration to get maximum performance for *your* app, so you would still need to learn the app's needs.

Regards, Ark


Reply to
Ark
