Looking for a tool to report memory usage

I have a legacy embedded product written in C under IAR Embedded Workbench. I want to know how much memory the program consumes at run time, so I can choose a suitable SDRAM size for it: not so small that it hurts performance, and not so large that it inflates the BOM.

I don't have IAR Embedded Workbench installed yet. Is there any tool that can help me estimate the memory usage of a program without running it on the target hardware? It sounds like I am asking a static analysis tool for a dynamic analysis capability.

BTW, is there any tool in IAR Embedded Workbench that can report the memory usage of a running program? I am not familiar with IAR.

Any other ideas for the task?

Thank you.

Reply to
Like2Learn

A good part of the total should be statically allocated memory. For that you can just look at the map file -- it should spell out how much memory is used.

If you're not using the heap, then the next problem down is stack space. I _think_ there are tools that estimate this, but all I've ever done is fill the stack with 0xf00d or 0xdeadbeef or whatever, and look for the high water mark.
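Something like this (an untested sketch; the __stack_start/__stack_end symbols are hypothetical -- use whatever names your linker configuration file actually exports for the stack region):

    #include <stdint.h>

    /* Hypothetical linker symbols bounding the stack region.
     * The stack is assumed to grow downward from __stack_end. */
    extern uint32_t __stack_start[];   /* lowest address of the region   */
    extern uint32_t __stack_end[];     /* highest address of the region  */

    #define STACK_FILL  0xDEADBEEFu

    /* Call very early at startup, before the stack has grown far. */
    void stack_paint(void)
    {
        uint32_t *p  = __stack_start;
        /* Stop a safe margin below the current stack pointer so we do
         * not overwrite the frames already in use by this function. */
        uint32_t *sp = (uint32_t *)&p - 16;

        while (p < sp) {
            *p++ = STACK_FILL;
        }
    }

    /* Call periodically (or at shutdown) to read the high-water mark. */
    uint32_t stack_unused_bytes(void)
    {
        const uint32_t *p = __stack_start;
        uint32_t untouched = 0;

        while (p < __stack_end && *p == STACK_FILL) {
            untouched += sizeof(*p);
            p++;
        }
        return untouched;   /* bytes actually used = region size - untouched */
    }

The same trick works per task on an RTOS if you paint each task's stack when it is created.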

If you are using the heap but only to allocate at startup, then you need to use a heap manager that's instrumented. Look around -- everyone and their brother has written a 'malloc' at some point in their lives, some of them are even instrumented.
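A rough sketch of what "instrumented" can mean: wrap every allocation behind hypothetical my_malloc()/my_free() entry points and keep running and peak byte counts (alignment and thread-safety details are glossed over here):

    #include <stdlib.h>

    static size_t bytes_in_use = 0;   /* total of live allocations      */
    static size_t bytes_peak   = 0;   /* high-water mark since startup  */

    /* Each block is prefixed with its size so my_free() can account for
     * it. A real instrumented allocator would also worry about alignment
     * of the returned pointer and about failed allocations. */
    void *my_malloc(size_t size)
    {
        size_t *block = malloc(size + sizeof(size_t));
        if (block == NULL) {
            return NULL;
        }
        *block = size;
        bytes_in_use += size;
        if (bytes_in_use > bytes_peak) {
            bytes_peak = bytes_in_use;
        }
        return block + 1;
    }

    void my_free(void *ptr)
    {
        if (ptr != NULL) {
            size_t *block = (size_t *)ptr - 1;
            bytes_in_use -= *block;
            free(block);
        }
    }

    size_t heap_peak_bytes(void) { return bytes_peak; }

If all allocation happens at startup, the peak after initialization is essentially the heap size you need to budget for.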

If you are using the heap, and you're not just using it to allocate memory at startup and never deallocate it, then you're eventually going to kill the app from heap fragmentation anyway, so there's no point in measuring.

--
http://www.wescottdesign.com
Reply to
Tim Wescott

Since it is a "legacy embedded product", it already exists and is (or was) in production. Why not look at the bill of materials and see how much physical memory it *had* previously?

Reply to
D Yuniskis

Some compilers can tell you the stack size for each function, and perhaps even for a whole call tree. GCC has the option -fstack-usage. The IAR compiler may have such an option, too.
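To show what the GCC output looks like: compiling with -fstack-usage produces a .su file per translation unit, one line per function. The numbers below are illustrative only and depend on target and optimization level:

    /* demo.c -- compile with:  gcc -c -fstack-usage demo.c
     * GCC then writes demo.su with one line per function, e.g.:
     *   demo.c:8:5:fill_buffer   144   static
     * ("static" means the frame size is a compile-time constant;
     *  "dynamic" appears when alloca or VLAs are involved.)
     */
    #include <string.h>

    int fill_buffer(void)
    {
        char buf[128];                 /* dominates this function's frame */
        memset(buf, 0, sizeof(buf));
        return buf[0];
    }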

There are also tools that use static analysis of the machine code to compute an upper bound on stack usage, for example Bound-T (from my company), or the other tools listed at [link elided]. But these tools only support certain processors, and (like the compiler-given stack sizes) they can fail if the program is recursive or allocates stack space in very dynamic ways, for example with the alloca function.

You say that too little SDRAM may "affect the performance". This suggests that your program uses heap memory heavily and dynamically, or that you have a virtual-memory machine with a backing store (swap device). There has been research on static analysis to compute bounds on heap usage, but I don't know of any available tools for that. If the program itself adjusts its heap usage to fit the available memory, I don't think any general static analysis could tell you how much memory is needed for "good" performance.

HTH

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .
Reply to
Niklas Holsti

In message , Like2Learn writes

Load the IAR compiler. It usually comes complete with the C-SPY debugger/simulator. That will give you all the answers you need, along with the map file.

Which target is it for?

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

Because pretty much everyone using a compiler for embedded targets wants to know that, every linker worth its salt reports that in its map file. Your bad if the legacy code base wasn't stored along with the original map files.

No, you're requesting that you get that compiler _now_. The only static analysis tools that will reliably reproduce the memory allocations of an actual compiler and linker are that same compiler and linker.

Reply to
Hans-Bernhard Bröker

Because my boss and I are cheap, and we want to cut the BOM cost by using slower CPUs and less RAM.

Reply to
Like2Learn

[If you "Like to Learn", then consider learning about the disdain for TOP POSTING!]

On 3/14/2011 4:15 PM, Like2Learn wrote:
> Any other ideas for the task?

So, you've already looked at the existing RAM complement and decided that smaller devices *are* available and *are* economically feasible (?).

In which case, spend your NRE dollars figuring out how to save your recurring dollars!

[think: time, elbow grease, ...]
Reply to
D Yuniskis

At start-up, fill the RAM with some (non-zero) pattern like 0xDEADBEEF and after a year of continuous operation, check what is still undisturbed.

A simple (non-threaded) OS will typically use the RAM with a stack from the top of the RAM downwards and the heap starting at the bottom of the RAM upwards.

Reply to
upsidedown

For the last question, you should be able to do it without any fancy tools. If you are dynamically allocating memory, then you just write a function within the system main loop, or an added task if you have an RTOS, which continuously outputs stack and heap values, free pool sizes, whatever. You may need to initialize the RAM with an odd value first and restart for every run, but the RTOS or allocator may have this data accessible anyway. Then, just put more and more load on the system until you get the info you need.
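Something along these lines, where the stack_unused_bytes()/heap_free_bytes() probes are placeholders (where those numbers come from depends on your RTOS or allocator) and printf() is assumed to go to a debug UART:

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical probes -- implementations depend on your RTOS/allocator. */
    extern uint32_t stack_unused_bytes(void);
    extern uint32_t heap_free_bytes(void);

    void memory_report_poll(void)
    {
        static uint32_t ticks = 0;

        /* Report roughly once every 1000 passes through the main loop. */
        if (++ticks % 1000 == 0) {
            printf("mem: stack free %lu, heap free %lu\r\n",
                   (unsigned long)stack_unused_bytes(),
                   (unsigned long)heap_free_bytes());
        }
    }

    /* Typical usage inside the existing superloop:
     *
     *     for (;;) {
     *         do_work();
     *         memory_report_poll();
     *     }
     */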

Even the smallest system can benefit from a syslog()-type facility that is designed in from the start, showing what the system is doing while it runs; it typically needs only tens of lines of code to implement on a simple system...

Regards,

Chris

Reply to
ChrisQ

Easier to instrument your memory manager. Let it track maximum total allocation and store this internally -- report it when appropriately "kicked" (e.g., SIGHUP).

You need a good understanding of the system to know what *kind* of load is needed. I.e., understand under which conditions the demands on memory are greatest. You also need to understand how that memory is being *used*. E.g., my systems will tend to "hunt" for the maximum memory usage point -- deliberately consuming as many resources as the system has been endowed with to improve performance/efficiency/responsiveness/etc. (within constraints set out at design time). So, in my case, the answer to the "how much memory does it need" question is: "all of it" (else you are paying for a resource that you don't need :> )

+1

Implement a black box in a portion of "unused" RAM, Flash, etc. This helps with post mortems. Note that it can be a "rich" interface (e.g., text messages, etc.) or a crude one (e.g., timestamp plus unique "situation code") that must be postprocessed to extract meaningful information.

During development, you can wire your DEBUG() to a stub that pushes characters out on your development system console. If carefully designed, you can share this "device" among multiple threads so each can tell the developer what it is doing (I used ANSI X3.64 escape sequences to set the *color* of the text on a per-thread basis so it was easier to follow what each task was saying):

    "SysInit: Creating task B at 0x34005000"
    "Task A: waiting for event 0x23"
    "SysInit: Allocating 0x4000 stack for task B"
    "Task D: raising 0x23"
    "Task A: caught 0x23"
    "SysInit: Starting task B"
    "Task B: Why is there air?"
    "Bill Cosby: To blow up basketballs!"

You can also put conditionals in the various DEBUG()s to selectively enable or disable output (and the quantity of output -- the "debug level") from various tasks.

And, when you're ready to ship the product, recompile with:

#define DEBUG()

and you're ready to go!
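For illustration, one way to combine the level conditionals with the empty release definition -- a sketch only, with DEBUG_LEVEL and the output routine entirely up to you:

    #include <stdio.h>

    #define DEBUG_LEVEL 2          /* 0 = silent, higher = chattier */

    #ifdef NDEBUG
    #define DEBUG(level, ...)      /* compiled out entirely for release */
    #else
    #define DEBUG(level, ...)                          \
        do {                                           \
            if ((level) <= DEBUG_LEVEL) {              \
                printf(__VA_ARGS__);                   \
                printf("\r\n");                        \
            }                                          \
        } while (0)
    #endif

    /* Example:
     *   DEBUG(1, "SysInit: Creating task B at 0x%08lx", (unsigned long)addr);
     *   DEBUG(3, "Task A: waiting for event 0x%02x", event);
     */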

Reply to
D Yuniskis
