Use RAM to modify a field

I need to copy an 8K block of data (from a parameter block address in flash) to RAM, modify data at a particular offset, and copy the contents back to flash. The size of the RAM is 256K. I believe dynamic memory allocation on the heap (using a pointer) should work for this? I would like to know if this would be the right way to go.

Note: Please notify if any detail seems missing.


Reply to
srao

A general description of what you're doing would be useful.

If it's an embedded system and it has to run forever, using the heap is probably a bad idea (Google "heap fragmentation"). Truly paranoid* embedded engineers would allocate the block statically unless that makes the system run out of RAM. If they did NOT allocate it statically, then they would ask themselves if they have enough memory in the first place.

  • In embedded, "paranoid" often means "successful"
--
Tim Wescott 
Wescott Design Services 
Reply to
Tim Wescott

When you have 256K of ram on the chip, and only need 8K for this block or buffer, then it's not just the paranoid developers that would use static allocation - it's the lazy* ones too.

  • In embedded, "lazy" also often means "successful". Good enough is good enough - a program that does its job in a simple way is at least as good as one that does the job in a more advanced way, and the simpler way has lower development costs and therefore a happier customer.
Reply to
David Brown

Also, simpler usually means fewer bugs...

Bye Jack

Reply to
Jack

And the simpler way is more likely to be right (both now and especially in the future when somebody else has to make changes to it).

Good enough is often better.

--
Grant Edwards               grant.b.edwards        Yow! MMM-MM!!  So THIS is 
                                  at               BIO-NEBULATION! 
Reply to
Grant Edwards

Hi Tim,

Thank you for your response. To describe the goal briefly: I want to update a particular field in this block. I use a given address offset to update this 32-byte field. Please let me know if you need more info.


Reply to
srao

Also,

Would static allocation utilize RAM space? I am using this for an embedded application which has to read data out from hardware, store it in RAM and modify the field.

The RAM here is not part of the processor. A separate flash chip has boot + flash + RAM regions which the application uses. A Borland C++ compiler is in use. Does static memory allocation use space out of RAM? Or does it depend on the compiler?


Reply to
srao

True, in my experience -- each line of code is an opportunity for a bug, so fewer lines of code often means fewer bugs.

(Unless the pointy-haired boss starts handing out awards for fewest lines of code to get the job done -- then your job becomes an obfuscated code contest, and bug counts go up.)

--
Tim Wescott 
Wescott Design Services 
Reply to
Tim Wescott

I believe that you are asking if using the "static" keyword in some C or C++ code would cause the tools to put the space in RAM.

The answer is that if the tools are working correctly, and if the space is not declared as "const", then yes. The tools working correctly depends on both the tools (Borland was still good the last time I used it) and how they're set up, though, so I can't just say that it WILL work for you based on the fact that it's Borland.

HOWEVER -- if you're declaring a variable outside of any struct, function, or whatever, then "static" is telling the compiler about who can see the variable, not whether the variable is writable (and thus in RAM) or not.

Here's what SHOULD happen in most embedded systems, in C or C++, if you declare a variable outside of any struct, function, or whatever:

// "whatever" gets put in read-only memory, and is visible outside of // the resulting object file const sometype whatever = ;

// "whatever" gets put in read-only memory, and is only visible to // things within the C file: const static sometype whatever = ;

// "whatever" gets put in RAM, is visible outside of // the resulting object file, and (if the tools are set // up correctly) filled with 0 bytes: sometype whatever;

// "whatever" gets put in RAM, is only visible to // things within the C file, and (if the tools are set // up correctly) filled with 0 bytes: static sometype whatever;

HTH

If David Brown says that I'm wrong, just fill in a resigned sigh from me and believe what he says.

--
Tim Wescott 
Wescott Design Services 
Reply to
Tim Wescott

I failed to mention:

If you want to permanently allocate some space at compile time, from within a function, then you need to use "static" -- otherwise the compiler will try to create it on the heap or on the stack. For an 8kB hunk of memory, this would probably cause Bad Things to happen.

void my_function(void)
{
    char bob[8192];         // Allocates on stack or heap -- ick
    static char sue[8192];  // allocates in RAM, only visible
                            // within this function
}

Note that doing this, with this large a chunk of memory, is probably an indication that you're doing something wrong. I could easily see your buffer as being static within a C file (or a C++ structure) that's dedicated to talking to flash -- but I can't easily see it within a function.

--
Tim Wescott 
Wescott Design Services 
Reply to
Tim Wescott

It depends very much on the expected number of units produced every year. If the expected volume is 100 units or less, it makes sense to use less optimized, preferably COTS, devices.

However, if the expected volume is millions of units each year, it makes sense to do a lot of optimization.

An anecdote from the TV industry when discrete components were used: quite a lot of engineering effort could be spent optimizing some circuit so that cheaper resistors with 20 % tolerance could be used, instead of the 5 % resistors originally designed in.

Reply to
upsidedown

Eh, I've definitely decided I'm just going to drop a megablock like that on the stack. Especially if: A) You're trying to be MISRA or MISRish, and have nixed malloc from your vocabulary; and B) You've got a single uniform memory, i.e. on-die RAM is enough.

Once you don't have a heap the only things left are static allocations and the stack, so you might as well have a giant stack.

The nice thing about putting it on the stack is that it's self-cleaning. Unlike with heap allocation there are no concerns about fragmentation or running out of memory; you hit that closing brace and the allocation neatly and perfectly unwinds itself. Especially since, for something like a flash update, it's not like you're going to be doing any kind of deeply nested calls underneath it; you're basically at the bottom of where you're going to take the stack at that point.

It "feels wrong", definitely, because you know that that's not how it's done. But it works beautifully and gives you fully deterministic multipurposing of memory.

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
 
Email address domain is currently out of order.  See above to fix.
Reply to
Rob Gaddi

I think we're speaking at cross purposes. There are those out there who write lots of lines of code just because they like writing lots of lines of code. My comments were meant to discourage them.

Personally, my customers get code that's (a) 100% written for their specific needs, (b) reused from other, older projects, (c) written for them but intended for reuse (i.e., a comms interface for something I haven't used before), or (d) obtained from a 3rd party (open-source or bought).

(b) and (c) are not written for minimal lines of code for any one project, but rather for robustness over a lot of projects.

--
Tim Wescott 
Wescott Design Services 
Reply to
Tim Wescott

That would work if you're running a single thread of execution. But if you're running multiple threads of execution (as in an RTOS) then you'd need to provide a humongous block of RAM for each task's thread, or you'd have to police which threads use which functions.

--
Tim Wescott 
Wescott Design Services 
Reply to
Tim Wescott

:) Oh please don't. It's been a great help. Thanks, Tim, for all that information.


Reply to
srao

Thanks! Completely understand. So do you feel there is a better way to handle this?


Reply to
srao

The 80C186 does have 64K bytes of STACK space. It actually is more like single-threaded execution, and this is also part of a flash update. If the stack is indeed the best way to go, do you mean we would use a local variable to store the whole 8K buffer array?


Reply to
srao

"Best" is relative. Using the stack for this is, IMO, "not clearly stupid".

There are problems. You need to be dead sure that you're not going to overflow the stack; and that includes if you get an interrupt while you're in the flash update routine, which is going to push your stack up as well.

If you've got no interrupts and are completely deterministic in your execution then it gets pretty easy to determine whether you've got a risk: just check your stack pointer when you hit that routine as is, and make sure there's 8K left. Or, better, that there's well more than 8K left because you don't want to run into a problem later.

If you're not deterministic then you need to make sure that even under the worst case execution you're not going to overrun it. I tend to have my initialization code fill the entire stack with 0xDEADBEEF. That way, at any time, I can run a stack check that will give me the high water mark on stack usage, so I can run around stress testing my application to make sure I've got plenty of margin.

Another approach is basically a "monoheap". You block off a static region of memory, similar to what Tim and others have suggested, and you make sure that it only has one use at a time; if your flash update code is using it then no one else is. Then you don't have to worry about malloc/free and fragmentation because there's only one big chunk; you've got dibs on all of it or none.

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
 
Email address domain is currently out of order.  See above to fix.
Reply to
Rob Gaddi

You are mixing up the size of the source code (lines of code) and the size of the object code. For high volume products, smaller object code can mean using a part with less flash, leading to real cost savings. For low volume products, you usually should not worry about the code size (until you risk exceeding the available flash for that type of device).

But Tim is talking about source code lines. While there is clearly a correlation between source code size and object code size, there is a wide range of possible scaling factors. Tim recommends avoiding making the source code unnecessarily large or complicated - but also avoiding making it unnecessarily compact. That is, of course, good advice. Unfortunately there are some PHBs that reward very tight code (usually in the mistaken belief that this means smaller object code) - and there are perhaps even more that reward loose and wordy code (in the mistaken belief that lines of code written is a measure of productivity).

Reply to
David Brown

That's my cue for the nit-pick! Your explanation is a good start, but there are a couple of other points that I think are worth mentioning.

It is not clear if the OP is actually using C or C++ (his compiler supports both). But in the case of

const sometype whatever = ...;

there is a difference. In C, the "whatever" has external linkage by default - it is visible outside of the current object file, assuming there is a matching "extern" declaration. In C++, "whatever" has internal linkage by default - it acts like "static const", unless there is a matching "extern" declaration visible in the defining translation unit.

(The external visibility does not affect whether the object is put in read-only or writeable memory, of course.)

Personally, I always make my file-level functions and data explicitly "static" (even in C++, in order to be consistent with my C code), unless they have to be externally visible - in which case there is always an "extern" declaration in the module's header file, which is #include'd by both the defining module and other modules that use it. That keeps everything clear and consistent.

And I prefer to write "static const" rather than "const static" - the compiler accepts both, of course, but I am more fussy!

Also note that when you make something "const", the compiler/linker does not have to put it into read-only memory. It /may/ put it in read-only memory, and any attempt to change it in the program is undefined behaviour, but it could equally well use ram. On a PC, for example, it will go in ram - but it will be put in a different segment from read-write memory, and the segment will be marked read only. On an AVR (which has different cpu instructions for accessing flash and ram), const data is usually put in RAM so that it can be accessed using the same instructions as non-const data. We don't know enough about the OP's system to know where static const data would end up.

And of course if the compiler feels it doesn't actually /need/ to put static const data in memory, due to the way it is used (or not used), it will not put it in any memory.

Reply to
David Brown
