Statically determining an allocation policy by an object's type?

Hi,

I'm trying to work on a memory manager for real-time operating systems and I was wondering if someone could help me out.

Since we want the cost of allocation to be bounded at runtime, I was wondering if there's any material (literature) out there that explains how one can determine which allocation mechanism/policy to use according to the type of the object being allocated (for example, I want all objects of type VM_Type to be allocated using a best-case free-list, whereas I want the rest to be allocated using a worst-case free-list algorithm).

Is such a policy mechanism implemented in any system that anyone might be aware of? I'm interested in employing static allocation policies that are determined either by the type of the object being allocated or by the allocation site.

Any material/examples would help me greatly, because I'm not sure whether I want to mess around with the compiler (in case I choose to determine a policy by the allocation site, etc.).
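For concreteness, the rough kind of thing I have in mind for the allocation-site case is a wrapper like the sketch below. Every name in it is made up for illustration, and a real-time version obviously would not build strings or do hash lookups on the allocation path; the table would come from the compiler or an offline analysis.

// Hypothetical sketch only: allocation-site-directed policy selection.
// The call site is captured with __FILE__/__LINE__ and looked up in a
// table mapping known sites to a bounded-time pool; everything else
// falls through to the general heap.
#include <cstddef>
#include <cstdlib>
#include <string>
#include <unordered_map>

enum class Policy { FixedPool, GeneralHeap };

// Sites with a non-default policy (filled in by hand here).
static const std::unordered_map<std::string, Policy> site_policy = {
    { "vm_objects.cpp:120", Policy::FixedPool },
};

// Trivial bump allocator standing in for the bounded-time pool.
static unsigned char pool_storage[64 * 1024];
static std::size_t   pool_top = 0;

static void* pool_alloc(std::size_t n) {
    n = (n + 7) & ~std::size_t(7);                        // keep 8-byte alignment
    if (pool_top + n > sizeof pool_storage) return nullptr;
    void* p = pool_storage + pool_top;
    pool_top += n;
    return p;
}

void* alloc_at_site(std::size_t n, const char* file, int line) {
    const std::string site = std::string(file) + ":" + std::to_string(line);
    auto it = site_policy.find(site);
    const Policy p = (it != site_policy.end()) ? it->second : Policy::GeneralHeap;
    return (p == Policy::FixedPool) ? pool_alloc(n) : std::malloc(n);
}

// Callers allocate through this macro so the site is recorded automatically.
#define ALLOC(n) alloc_at_site((n), __FILE__, __LINE__)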

Thanks

Rick


If the allocation/deallocation must be time-bounded, maybe an allocation scheme called the "Buddy" system helps. Try Google on this.
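Roughly, the idea looks like the sketch below (purely illustrative, all names invented): block sizes are powers of two, a too-large block is split in halves on allocation, and on release a block is merged with its "buddy" whenever that buddy is also free. The std::find here is only to keep the sketch short; a real buddy system keeps per-block tags or a bitmap so the buddy lookup itself is constant time.

// Illustrative buddy arena: power-of-two block sizes, one free list per
// size ("order"). alloc() splits a larger block down to the requested
// order; release() re-merges a block with its buddy while the buddy is free.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

class BuddyArena {
    static constexpr int kMinOrder = 5;    // smallest block: 32 bytes
    static constexpr int kMaxOrder = 20;   // whole arena: 1 MiB
    std::uint8_t arena_[std::size_t(1) << kMaxOrder];
    std::vector<std::uint8_t*> free_[kMaxOrder + 1];     // free blocks, per order

    static int order_for(std::size_t n) {
        int o = kMinOrder;
        while ((std::size_t(1) << o) < n) ++o;
        return o;
    }

public:
    BuddyArena() { free_[kMaxOrder].push_back(arena_); }

    void* alloc(std::size_t n) {
        const int want = order_for(n);
        int o = want;
        while (o <= kMaxOrder && free_[o].empty()) ++o;  // smallest free block that fits
        if (o > kMaxOrder) return nullptr;
        std::uint8_t* blk = free_[o].back();
        free_[o].pop_back();
        while (o > want) {                               // split, keeping the lower half
            --o;
            free_[o].push_back(blk + (std::size_t(1) << o));
        }
        return blk;
    }

    // p must have come from alloc(n); a real allocator would record n itself.
    void release(void* p, std::size_t n) {
        std::uint8_t* blk = static_cast<std::uint8_t*>(p);
        int o = order_for(n);
        while (o < kMaxOrder) {
            const std::size_t off = std::size_t(blk - arena_);
            std::uint8_t* buddy  = arena_ + (off ^ (std::size_t(1) << o));
            auto& lst = free_[o];
            auto it = std::find(lst.begin(), lst.end(), buddy);
            if (it == lst.end()) break;                  // buddy still in use: stop merging
            lst.erase(it);
            if (buddy < blk) blk = buddy;                // merged block starts at the lower half
            ++o;
        }
        free_[o].push_back(blk);
    }
};

// Intended to live as a single static/global instance, e.g.:
//   static BuddyArena rt_arena;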

Regards,

--
Bernhard Roessmann
Don't Fear The Penguins!

Thanks Bernhard,

Actually, I'm aware of that. I'm also aware of region-based memory management systems (such as the one proposed by Tofte/Talpin), but I was hoping someone would refer me to articles/papers that talk more about "statically determining an allocation policy based on either the type of the object to be allocated or the allocation site".

I can't seem to find anything on this on google. Would you recommend something else? Thanks

Rick



You might want to look at my nmalloc, available at:

which was developed to make all operations O(1). This is definitely bounded :-) The system was written for use with djgpp, but requires only sbrk (and a suitable memory model) to function elsewhere.

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
   Available for consulting/temporary embedded and systems.

Thanks for that :)

I was still hoping someone could provide me with links to literature, because I'd first like to learn as much as I can about static analysis etc. Since you've made your own allocator, do you do any optimizations like that? That is, depending upon some allocation site, change the allocation policy, or depending upon some object type, use a different policy (switch from a linked list to an array, for example)?

Is there any material (mostly literature?) at all available on such matters? I've only heard people talk about it...

Thanks,

Rick


Up until your last paragraph, I assumed you were looking to roll your own new() or malloc() -- i.e. something that the programmer more or less *explicitly* invokes (even though constructors *implicitly* invoke new(), etc.). But your last paragraph has me wondering if, instead, you want to gut the entire code generator and build a new run-time system "under" the language (which you haven't identified).

Obviously, you can design constructors for each particular "type" of object (blech... I have been trying to avoid using the terms "object", "class", etc., so please bear with my imprecision :-/). This allows you to fold the memory allocation strategy "under" each particular "type" of "object" ("thing") as best suits your design goals.

Trying to write a malloc() that "knows" how to tailor its behaviour to the type of "thing" being allocated/created is a bit tougher -- since malloc has no knowledge of the "type" of thing being created (unless the size of the item can be used to uniquely define the allocation strategy that you want employed).

Some OS's allow the user to redefine the underlying memory manager (here, using the term in the sense of virtual memory management). Some even let you define the memory manager (i.e. "routine") to be used for each *type* of "object" (thing). E.g., "files" can be mapped differently than "windows", etc.

In this latter sort of environment, the memory management policy is entirely at your discretion and can range from very static to very dynamic, depending on the type of "thing" being managed. (E.g., I let most threads' stacks be mapped dynamically... though stacks for critical threads get wired down once the thread is first created -- for obvious reasons.)

I'm not entirely sure where your needs lie, what the nature of your "things" (objects) is, etc.

--don

N.B. Incoming mail is not accepted at this email address.


Yuniskis,

Thanks for taking the time for this :) Actually, yes, you did get me right. I was hoping to find real-world examples where such strategies (i.e. statically scoping allocation policies depending upon the type or allocation site) are being used. You have given me some ideas, and it seems that a lot of memory managers do allow this(?). I'm basically working on the real-time Java specification and implementation and was wondering how I could build such optimizations into the memory manager. Dynamic allocation policies would probably be most suitable, but I wanted to start by learning more about statically scoped allocation policies.

Thanks again, and if you have more suggestions, please do share them with me :)

regards,

Rick


Could you tell me which OS this is? (Linux?) Thanks!

Rick


Can you also refer me to a paper/article that explains this concept a bit more? :) Thanks a lot.

Rick


You might want to repeat your question in comp.compilers.

What you seem to be describing is informally known as a BBOP (for "Big Bag Of Pages") memory manager. The Mach OS employs a BBOP system as do a number of Lisp runtimes.

The basic strategy is to create different heaps for different types of objects and use the type of the object to determine which heap to use. The compiler can provide the statically known type of the object to the allocator as a parameter, or alternatively, the manager can use the identity of the allocating task or some aspect of the allocation, such as its size, to determine which heap to use.
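In sketch form (names invented here, per-type policies reduced to stubs), the dispatch amounts to nothing more than indexing an array of heaps by a type tag that the compiler, or an allocation macro, supplies:

// Sketch of a BBOP-style allocator: each statically known type tag maps
// to its own heap, so the policy is chosen by type rather than per call.
#include <cstddef>
#include <cstdlib>

enum TypeTag { TAG_VM_TYPE, TAG_STRING, TAG_OTHER, TAG_COUNT };

struct Heap {
    virtual void* alloc(std::size_t n) = 0;
    virtual void  free(void* p) = 0;
    virtual ~Heap() = default;
};

// Fallback heap: just wraps malloc/free.
struct GeneralHeap : Heap {
    void* alloc(std::size_t n) override { return std::malloc(n); }
    void  free(void* p) override { std::free(p); }
};

// A bounded-time heap (fixed-size pool, buddy, ...) would go here;
// in this sketch it simply behaves like the general heap.
struct BoundedHeap : GeneralHeap {};

static GeneralHeap general_heap;
static BoundedHeap  vm_heap, string_heap;

// One heap per type tag; the compiler (or an ALLOC(T) macro) supplies the tag.
static Heap* heaps[TAG_COUNT] = { &vm_heap, &string_heap, &general_heap };

void* typed_alloc(TypeTag tag, std::size_t n) { return heaps[tag]->alloc(n); }
void  typed_free (TypeTag tag, void* p)       { heaps[tag]->free(p); }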

One problem with using a static method in a compiler is what to do with user-defined types ... the compiler and memory manager can't possibly know what to do with them and so must choose some default method that may not be optimal. Some OO languages solve this problem by allowing each class to provide a custom allocator for its own objects. However, effective use requires manual intervention by the programmer (who must implement the allocator in the class) and so is not the transparent, systemic solution that you appear to be interested in.
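In C++ terms that mechanism is the class-specific operator new / operator delete, roughly as below; the pool here is only a stand-in to keep the example short:

// A class-specific operator new / operator delete routes every allocation
// of that class through a custom allocator. FixedPool is a trivial stand-in
// for a real bounded-time pool.
#include <cstddef>
#include <new>

struct FixedPool {
    void* take(std::size_t n) { return ::operator new(n); }   // stand-in
    void  give(void* p)       { ::operator delete(p); }       // stand-in
};

static FixedPool vm_pool;

class VM_Type {
public:
    // Every `new VM_Type(...)` now draws from vm_pool instead of the
    // general heap; no caller has to know or care.
    static void* operator new(std::size_t n) { return vm_pool.take(n); }
    static void  operator delete(void* p)    { vm_pool.give(p); }
private:
    int state_ = 0;
};

// Usage: VM_Type* v = new VM_Type;  delete v;   // both go through vm_pool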

The bible of memory management is "Garbage Collection: Algorithms for Automatic Dynamic Memory Management" by Richard Jones and Rafael Lins, ISBN 0-471-94148-4. [ This edition is from 1996 - there may be a newer one. ] Even if you are not particularly interested in GC, it contains a lot of useful information about allocation policies and implementation strategies.

[ Incidentally, if you are interested in type analysis, you probably should be interested in GC, because a lot of the literature assumes the context of functional languages ... which require GC because the referential transparency that makes functional reasoning possible also makes statically determining lifetime for all objects virtually impossible. ]

Static type analysis is a very hot topic and there are many papers available - however, most are pretty deeply theoretical. A good place to start is with papers on Hindley-Miller type inferencing which is a moderate complexity, proven method used by a number of language implementations.

As a primer for compiling and implementing strongly, statically typed languages I would look at papers on Pascal (the original strongly typed language) and on Modula-3 or Oberon - modern derivatives of Pascal which mix strong static typing with OO but don't have type inferencing. From there you can go on to read about the functional languages: ML, Haskell, Python, Common Lisp, Scheme, etc. Although CL and Scheme both use runtime dynamic typing exclusively, they are (arguably) both functional languages with a lot of the same compilation and implementation problems as the other, statically typed languages.

You may also wish to lurk in comp.lang.functional, comp.lang.lisp and comp.lang.scheme because the subject of type systems and type analysis comes up rather frequently.

George


Thanks a lot for that George! That helped a lot!

best regards,

Rick


Others have given you some literature links. About the only efficiency problem with malloc in general is that the system has to maintain internal information. In nmalloc this requires 16 (or 24, for greater security) bytes per allocation, which can mount up if there are many small allocations needed. This can be improved by creating a larger allocation block and making user allocations from that, and is something that can be added later. I may eventually do some such with nmalloc, but I consider it an unnecessary complication for the intended (djgpp) usage, where there is no great shortage of memory. It can fairly easily be designed on top of the existing system, and the discrimination would be on the basis of the allocation size requests.
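In sketch form (invented names), the discrimination looks like this: small requests are carved out of one larger block obtained from the underlying allocator, so only that block pays the per-allocation overhead. This simplified version never returns the small carvings individually; a real version would keep a per-block use count so whole blocks can be recycled.

// Carving small user allocations out of one large malloc'd block,
// discriminating purely on the request size.
#include <cstddef>
#include <cstdlib>

namespace {
constexpr std::size_t kSmallLimit = 64;        // requests at or below this are "small"
constexpr std::size_t kBlockSize  = 16 * 1024; // size of each carving block

unsigned char* block      = nullptr;           // current carving block
std::size_t    block_used = kBlockSize;        // force a new block on first use
}

void* small_or_large_alloc(std::size_t n) {
    if (n > kSmallLimit)
        return std::malloc(n);                 // big requests go straight through

    n = (n + 7) & ~std::size_t(7);             // keep 8-byte alignment
    if (block_used + n > kBlockSize) {         // current block exhausted: get another
        block = static_cast<unsigned char*>(std::malloc(kBlockSize));
        if (!block) return nullptr;
        block_used = 0;
    }
    void* p = block + block_used;
    block_used += n;
    return p;
}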

Heavy complications would arise in the debug provisions of the malldbg module.

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
   Available for consulting/temporary embedded and systems.

Mach and its derivatives (Mach, Mach3, Mach4, RTMach, FORTS, OSF1, NeXT, MKng, TMach, etc.) support the concept of an "external pager".

Chorus has similar support (though perhaps not as "rich"?).

Jaluna lets you replace *the* system pager (though support for multiple pagers would be something you would have to kludge in on your own).

Some of the free Eunices have borrowed bits of their pager from Mach, though I suspect that is in a "plain vanilla" form (and I am unsure as to the current status of any of their implementations).

Note that replacing the pager really takes more of an underlying swipe at the VM system itself. From your original post, it seems like your work is more oriented towards the role malloc() plays ON TOP OF the underlying memory system. (E.g., the external pager ideas are hard pressed to work in a system that doesn't support VM -- though the hardware need not be page based.)

--don

N.B. Incoming mail is not accepted at this email address.



A good text on whichever OO language you are interested in should cover the subject. Essentially, you want to apply whichever memory allocation algorithms you consider appropriate for the "item" type (heh heh heh... avoided using "object"!) at hand.

So, in the constructor for that particular item type, you invoke whichever flavor of *your* "malloc()"-like function to create the item itself -- along with its "contents". Similarly, the destructor for that particular item type knows (by your design) how to return the resources set aside in the item's creation to their respective allocators.

For example, I have a variable precision math library that allocates the "Numbers" from a pool of fixed size buffers (called a "partition" in some systems). Since each is a fixed size, this makes allocation extremely fast (O(1)) as well as letting me copy/move values around easily.

OTOH, the "digits" of the actual value being represented ("Coefficient", "Characteristic", "Mantissa", etc. -- whichever misnomer you would care to apply... :>) are allocated in a more conventional sense as their individual needs vary dynamically (i.e. when a "Number" gains a few extra digits of significance, the container holding those digits typically needs to grow in size/capacity).

Obviously, when either component in these numeric "objects" is released (at object destruction), its resources need to be returned to the proper "allocator" for reuse.
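A minimal sketch of such a fixed-size "partition" (illustrative names only): the free list is threaded through the idle buffers themselves, so both getting and returning a buffer amount to a couple of pointer operations.

// Fixed-size buffer pool ("partition"): every buffer is the same size and
// the free list lives inside the idle buffers, so get() and put() are O(1).
#include <cstddef>

template <std::size_t BufSize, std::size_t Count>
class Partition {
    union Slot { Slot* next; unsigned char bytes[BufSize]; };
    Slot  slots_[Count];
    Slot* free_ = nullptr;
public:
    Partition() {
        for (std::size_t i = 0; i < Count; ++i) {   // thread all slots onto the free list
            slots_[i].next = free_;
            free_ = &slots_[i];
        }
    }
    void* get() {                                   // O(1): pop the head of the free list
        if (!free_) return nullptr;
        Slot* s = free_;
        free_ = s->next;
        return s->bytes;
    }
    void put(void* p) {                             // O(1): push the buffer back
        Slot* s = reinterpret_cast<Slot*>(p);
        s->next = free_;
        free_ = s;
    }
};

// e.g. a pool of 256 "Number" headers of 32 bytes each:
//   static Partition<32, 256> number_pool;
//   void* n = number_pool.get();  ...  number_pool.put(n);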

--don

N.B. Incoming mail is not accepted at this email address.


Given your mention of Java, I would assume you are looking more for malloc()-type replacements than for changes to the VM's pager (since Java does not require virtual memory).

Look for details on constructors/destructors, "overloading new", etc. Sorry, I don't know how much Java restricts the abilities that C++ affords in this regard, so I can't tell you whether what you want to do can be done *atop* the language (user-land) or must be implemented *within* the language (compiler-land).

--don

N.B. Incoming mail is not accepted at this email address.


... snip ...

O(1) allocation is not hard to achieve. The problem is O(1) free() (and thus possibly realloc()) when there is a need to recombine all freed areas.
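For what it's worth, the classic way of recombining adjacent freed areas in constant time is boundary tags: every block carries its size and a free flag in both a header and a footer, so free() can reach both neighbours by pointer arithmetic alone. The fragment below sketches only that layout and the merge step; free-list upkeep, arena-end sentinels, and alignment are all omitted.

// Boundary-tag layout: [header][payload][footer], where header and footer
// both hold the block's payload size and a free flag. That lets free()
// reach both neighbours by pointer arithmetic and merge in O(1).
#include <cstddef>

struct Tag { std::size_t size; bool free; };   // shared by header and footer

inline Tag* footer_of(Tag* head) {             // footer sits `size` bytes past the header
    return reinterpret_cast<Tag*>(
        reinterpret_cast<unsigned char*>(head + 1) + head->size);
}
inline Tag* prev_footer(Tag* head) { return head - 1; }            // footer of the block before
inline Tag* next_header(Tag* head) { return footer_of(head) + 1; } // header of the block after

// The merge step of free(): look only at the two neighbouring blocks.
void coalesce(Tag* head) {
    Tag* nxt = next_header(head);
    if (nxt->free)                                        // absorb the following block
        head->size += 2 * sizeof(Tag) + nxt->size;

    Tag* pfoot = prev_footer(head);
    if (pfoot->free) {                                    // let the preceding block absorb us
        Tag* prev = reinterpret_cast<Tag*>(
            reinterpret_cast<unsigned char*>(pfoot) - pfoot->size) - 1;
        prev->size += 2 * sizeof(Tag) + head->size;
        head = prev;
    }
    head->free = true;
    *footer_of(head) = *head;                             // keep the footer in sync
}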

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
   Available for consulting/temporary embedded and systems.

Note: ...from a pool of fixed size buffers (called a "partition" in some systems).

--------------------^^^^^^^^^^^^^

Both malloc and free are *inherently* constant time operations in such a scheme (hence the reason it was chosen! :>)

--don

N.B. Incoming mail is not accepted at this email address.


Thank you so much for your help. All of you guys have given me wonderful ideas and references to begin with. Now I'll go look for some stuff on "allocation policies based on allocation sites"! :) (in case any of you guys have references for that, I'd love to have them).

best regards,

Rick



> ... papers on Hindley-Miller type inferencing ...

Typo - should have been "Hindley-Milner".

George

