data pool and system pool

Hi, is there any special advantage, use, or reason for using the system pool?

Where does the 'system pool' reside, and where does the 'data pool' reside?

Why don't system structures use the data pool? Is there a problem with the data pool?

Is there any processing-speed advantage?

Thx, Karthik Balaguru

Reply to
karthikbalaguru

That depends on how you use it. More to the point, it depends on your definition of "system pool".

In an embedded system, I suppose you can locate these pools (I presume they are RAM resources) anywhere you choose, for example through your linker control file.

(And from here the questions get more mysterious. We, your audience, sure could use some subject context.)

JJS

Reply to
John Speth

It should be noted that the concepts of 'data pool' and 'system pool' are specific to the VxWorks TCP/IP stack. (In particular, the stack in VxWorks 6.4 and earlier. In 6.5, there's a new stack which is internally very different from the older BSD-based stack, and doesn't use the same memory management scheme.)

The data pool is used to assemble packets. Since you don't know ahead of time what size a given packet will be, the data pool is populated with buffers in various 'handy' sizes, and the stack will try to make the best use of them that it can.
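The "best use" strategy described above amounts to picking the smallest pre-sized buffer that fits the packet. A minimal sketch, assuming a hypothetical set of cluster sizes (the actual VxWorks pool configuration is tunable and may differ):

```c
/* Hypothetical illustration of how a data pool might pick a buffer
 * ("cluster") for a packet of unknown size: take the smallest of the
 * pre-sized pools that fits.  The sizes below are examples only, not
 * the actual VxWorks data pool configuration. */
#include <stddef.h>

static const size_t cluster_sizes[] = { 64, 128, 256, 512, 1024, 2048 };
#define N_SIZES (sizeof(cluster_sizes) / sizeof(cluster_sizes[0]))

/* Return the smallest cluster size that can hold 'len' bytes,
 * or 0 if no single cluster is large enough. */
size_t pick_cluster_size(size_t len)
{
    size_t i;
    for (i = 0; i < N_SIZES; i++)
        if (cluster_sizes[i] >= len)
            return cluster_sizes[i];
    return 0; /* the stack would have to chain several clusters */
}
```

With sizes like these, a 100-byte packet lands in a 128-byte cluster and a full 1500-byte Ethernet frame in a 2048-byte one, wasting some space in each case; that waste is the price of not knowing packet sizes in advance.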

The system pool, on the other hand, is always used to store internal structures used by the TCP/IP stack, which are all of fixed, known sizes. The buffer sizes in the system pool are chosen to match the internal structure sizes, and there aren't any really big ones (none of them are anywhere near 1500 bytes, so there's no cluster pool that big in the system pool). This yields a slightly more efficient use of RAM, since you'll never end up wasting a really large buffer to hold a really small internal data structure.

The reason for using netBufLib for internal structures in the first place is performance: you could just use malloc() instead, but that has higher overhead, since it requires dealing with the heap manager (memPartLib). netBufLib carves up fixed-size chunks of space and keeps them in a list, and allocating/freeing a buffer from/to the list takes very little work (use intLock() to enter the critical section, update list pointers, use intUnlock() to leave the critical section). This yields better performance in situations where you know you will be allocating/freeing buffers of a fixed size very frequently. (It's not quite as good in the data pool case as it is in other cases, i.e. the system pool or driver RX pools, but it's still faster than malloc()/free().)
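The alloc/free fast path Bill describes can be sketched as a generic fixed-size free list. This is not the netBufLib source, just an illustration of the technique; lock()/unlock() here are do-nothing stand-ins for intLock()/intUnlock(), so this demo is single-threaded:

```c
/* Minimal sketch of a netBufLib-style fixed-size pool: buffers are
 * carved from one arena and threaded onto a singly linked free list,
 * so alloc/free are O(1) pointer updates inside a critical section. */
#include <stdlib.h>

typedef struct free_node { struct free_node *next; } free_node;

typedef struct {
    free_node *head;   /* head of the free list */
    char *arena;       /* backing storage for all buffers */
} fixed_pool;

static int lock(void) { return 0; }          /* stand-in for intLock()   */
static void unlock(int key) { (void)key; }   /* stand-in for intUnlock() */

/* Carve 'nbufs' buffers of 'buf_size' bytes and push them all onto
 * the free list.  Returns 0 on success, -1 on allocation failure. */
int pool_init(fixed_pool *p, size_t buf_size, size_t nbufs)
{
    size_t i;
    if (buf_size < sizeof(free_node))
        buf_size = sizeof(free_node);   /* need room for the link */
    p->arena = malloc(buf_size * nbufs);
    if (p->arena == NULL)
        return -1;
    p->head = NULL;
    for (i = 0; i < nbufs; i++) {
        free_node *n = (free_node *)(p->arena + i * buf_size);
        n->next = p->head;
        p->head = n;
    }
    return 0;
}

/* Pop the first free buffer, or return NULL if the pool is empty. */
void *pool_alloc(fixed_pool *p)
{
    int key = lock();                   /* enter critical section */
    free_node *n = p->head;
    if (n != NULL)
        p->head = n->next;
    unlock(key);
    return n;
}

/* Push a buffer back onto the free list. */
void pool_free(fixed_pool *p, void *buf)
{
    int key = lock();
    ((free_node *)buf)->next = p->head;
    p->head = (free_node *)buf;
    unlock(key);
}
```

Note there is no size bookkeeping, no coalescing, and no search: that is exactly why a fixed-size pool beats a general-purpose heap allocator when all requests are the same size.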

In the original BSD implementation, there isn't a 'system pool' or 'data pool.' Instead, you have the mbuf pool and the mbuf cluster pool. The mbuf pool contains mbuf structures, each of which has a small amount of local storage (typically 128 or 256 bytes, depending on what particular BSD implementation you have). The mbuf cluster pool is just a pool of 2K buffers which can be combined with an mbuf to form an mbuf/cluster tuple. The clusters are used when you want to store more data than will fit in a single mbuf's internal storage, and you don't want to chain a huge number of small mbufs together.

In the original BSD code, internal data structures are sometimes stored in mbufs, but in that case, they don't come from a special pool: they're just stored in single mbufs without external clusters (because the structures are small enough to fit in the 128/256 bytes available). For various reasons, VxWorks doesn't use exactly the same memory management scheme, but even with the data/system pool abstraction, the basic logic is preserved.
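The mbuf-vs-cluster decision above can be sketched as a toy model. The struct below is a simplification for illustration, not the real BSD `struct mbuf` layout; 128-byte internal storage and 2K clusters match the typical sizes mentioned above:

```c
/* Toy model of the BSD mbuf/cluster arrangement: each mbuf carries a
 * small internal buffer, and larger payloads get an external 2K
 * cluster instead of a long chain of small mbufs.  Simplified field
 * set, not the real struct mbuf. */
#include <stdlib.h>

#define MLEN     128   /* internal storage per mbuf (128 or 256 in real BSDs) */
#define MCLBYTES 2048  /* cluster size */

typedef struct mbuf {
    struct mbuf *m_next;  /* next mbuf in a chain (unused in this demo) */
    size_t m_len;         /* bytes of data held by this mbuf */
    char *m_data;         /* points into m_dat or into the cluster */
    char *m_ext;          /* external cluster, or NULL if none */
    char m_dat[MLEN];     /* small internal storage */
} mbuf;

/* Allocate an mbuf for 'len' bytes of data: use the internal storage
 * when it fits, otherwise attach a cluster.  Returns NULL when a
 * single mbuf/cluster can't hold the data (a chain would be needed). */
mbuf *mbuf_get(size_t len)
{
    mbuf *m = calloc(1, sizeof *m);
    if (m == NULL)
        return NULL;
    if (len <= MLEN) {
        m->m_data = m->m_dat;            /* small: data lives in the mbuf */
    } else if (len <= MCLBYTES) {
        m->m_ext = malloc(MCLBYTES);     /* big: hang a cluster off it */
        if (m->m_ext == NULL) { free(m); return NULL; }
        m->m_data = m->m_ext;
    } else {
        free(m);
        return NULL;
    }
    m->m_len = len;
    return m;
}

int mbuf_uses_cluster(const mbuf *m) { return m->m_ext != NULL; }
```

This also shows why BSD can store small internal structures in bare mbufs: anything that fits in MLEN bytes never touches the cluster pool at all.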

-Bill

Reply to
noisetube
