Serial EEPROM or Serial Flash?

On 15/06/2018 19:11, Richard Damon wrote:
> On 6/15/18 11:20 AM, pozz wrote:
>> On 15/06/2018 16:21, David Brown wrote:
>>> On 15/06/18 15:10, pozz wrote:
>>>> On 15/06/2018 11:25, David Brown wrote:
>>>>> On 15/06/18 09:38, pozz wrote:
>>>>>> On 14/06/2018 12:49, David Brown wrote:
>>>>>>> On 14/06/18 12:20, pozz wrote:
>>>>>>>> I need to save some data on a non-volatile memory. They are some
>>>>>>>> parameters that the user could change infrequently, for example 10
>>>>>>>> times per day at a maximum. In the range 100-2kB.
>>>>>>>
>>>>>>> Multiple copies (at least 2) are key, along with timestamps or
>>>>>>> counters
>>>>>>
>>>>>> Two should be sufficient to prevent corruption caused by interruption
>>>>>> during the writing of a block of data.
>>>>>> Maybe three (or more) are needed to face memory *physical* corruption
>>>>>> (maybe from many writes).
>>>>>> I think I can ignore this event in my actual project.
>>>>>
>>>>> You use more for wear levelling. Often your flash chip is /way/ bigger
>>>>> than you need - perhaps by a factor of 1000 simply because that's what
>>>>> you have on stock, or that's the cheapest device in the package you
>>>>> want. Spread your writes over 1000 blocks instead of 2, and you have
>>>>> 500 times the endurance. Use a good checksum and accept that sometimes
>>>>> a block will be worn out and move onto the next one, and you have
>>>>> perhaps a million times the endurance (because most blocks last a lot
>>>>> longer than the guaranteed minimum).
>>>>
>>>> So the writing process should be:
>>>>
>>>> 1. write the data at block i+1 (where i is the block of the current
>>>>    data in RAM)
>>>> 2. read back block i+1 and check if the checksum is ok
>>>> 3. if ok, the writing process is finished
>>>> 4. if not, go to block i+2 and start again from 1.
>>>
>>> Yes.
>>>
>>> Then comes step 5 (for flash with separate erasing):
>>>
>>> 5. If you have written block x, check if block x+1 (modulo the size of
>>>    the device) is erased. If not, then erase it ready for the next write.
>>>
>>> Note that it does not matter if the erase block size is bigger than the
>>> program block size - if it is, then your "erase block x+1" command will
>>> cover the next few program blocks.
>>>
>>>>>>> and checksums.
>>>>>>
>>>>>> This is interesting. Why do I need a checksum? My approach is to use
>>>>>> only a magic number plus a counter... and two memory areas.
>>>>>> At the first startup the magic number isn't found in either area, so
>>>>>> the device starts with defaults and writes data to Area1 (magic/0,
>>>>>> where 0 is the counter).
>>>>>
>>>>> You use checksums to ensure that you haven't had a power-out or reset
>>>>> in the middle of writing,
>>>>
>>>> Only for this, you can write the counter as the last byte. If the
>>>> writing is interrupted in the middle, the counter hasn't been written
>>>> yet, so the block is not valid (because it is considered too old or
>>>> empty).
>>>
>>> Nope. You can't rely on that, unless you are absolutely sure that you
>>> [...]
>>> like that, even if they provide an interface that matches it logically.
>>>
>>> A common structure for a modern device is to have 32-byte pages as a
>>> compromise between density, cost, and flexibility. (Bigger pages are
>>> more efficient in device area and cost.) When you send a command to
>>> write a byte, the device reads the old 32-byte page into ram, erases
>>> the old page, updates the ram with the new data, then writes the whole
>>> 32-byte page back in.
>>>
>>> The write process is done by a loop that writes all the data, reads it
>>> back at a low voltage to see if it has stuck, and writes again as
>>> needed until the data looks good. Then it writes again a few times for
>>> safety - either a fixed number, or a percentage of the number of writes
>>> taken.
>>>
>>> So it is /entirely/ possible for an interrupted write to give you a
>>> valid counter, but invalid data. It is also entirely possible to get
>>> some bits of the counter as valid while others are still erased (giving
>>> ones on most devices).
>>>
>>> And that is just for simple devices that don't do any fancy wear
>>> levelling, packing, garbage collection, etc.
>>>
>>>>> and that the flash has not worn out.
>>>>>
>>>>>> When the configuration is changed, Area2 is written with magic/1,
>>>>>> being careful to save magic/1 only at the end of the area writing.
>>>>>>
>>>>>> At startup the magics and counters from both areas are loaded, and
>>>>>> one area is chosen (the magic should be valid and the counter should
>>>>>> be the maximum).
>>>>>>
>>>>>> I think this approach works, even when the area writing is
>>>>>> interrupted in the middle.
>>>>>>
>>>>>> Why do I need a checksum? The only thing that comes to mind is to
>>>>>> prevent writing errors: for example, I want to write 0x00 but the
>>>>>> value really written is 0x01, maybe because of noise on the serial
>>>>>> bus.
>>>>>>
>>>>>> To solve this situation, I need a checksum... but I also need to
>>>>>> re-read and re-calculate the checksum at *every* area writing... and
>>>>>> start a new writing if something was wrong.
>>>>>>
>>>>>> Do you have a better strategy?
>>>>>
>>>>> You calculate the checksum for a block before writing it, and you
>>>>> check it when reading it. Simple.
>>>>
>>>> Do you calc the checksum of the whole data block in RAM, including
>>>> padding bytes?
>>>
>>> Yes, of course. The trick is not to have unknown padding bytes. I make
>>> a point of /never/ having compiler-generated padding in my structs.
>>>
>>> So you have something like this:
>>>
>>> #define sizeOfRawBlock 32
>>> #define noOfRawBlocks 4
>>> #define magicNumber 0x9185be91
>>> #define dataStructVersionExpected 0x0001
>>>
>>> typedef union {
>>>     uint8_t raw8[sizeOfRawBlock * noOfRawBlocks];
>>>     uint16_t raw16[sizeOfRawBlock * noOfRawBlocks / 2];
>>>     struct {
>>>         uint32_t magic;
>>>         uint16_t dataStructVersion;
>>>         uint16_t crc;
>>>         uint32_t count;
>>>
>>>         // real data
>>>     };
>>> } nvmData_t;
>>>
>>> static_assert(sizeof(nvmData_t) == (sizeOfRawBlock * noOfRawBlocks),
>>>               "Check size of nvmData!");
>>
>> Why raw8[]?
>>
>> I think you can avoid raw16[] too. If you have the function:
>>
>> uint16_t calc_crc(const void *data, size_t size);
>>
>> you can simply call:
>>
>> nvmData.crc =
>>     calc_crc( ((unsigned char *)&nvmData) + 8, sizeof(nvmData) - 8 );
>
> I typically also do something like this. The data structure is a union
> of the basic data structure with a preamble that includes (in very
> fixed locations) a data structure version, checksum/crc and, if a
> versioning store, a timestamp/data generation number. A 'Magic Number'
> isn't often needed unless it is removable media, as it will either be
> the expected data or not, nothing else could be there (if the unit
> might have different sorts of programs, then a piece of the data
> version would be a program ID).
>
> Often I will have TWO copies of the data packet.

TWO copies in EEPROM or in RAM?

Do you need the raw_data[] array only to fix a well-known size for flash_parms? Do you allocate an entire flash sector (or a multiple of one) in RAM, even if your params in flash take only 1/5 or 1/2 or 1/10 of a sector?

Is it really necessary? What happens if you don't care about unused bytes in flash_parms? I think... nothing.

It seems difficult, if the changes from one version to the other are "important".

struct {
    uint16_t foo;
    uint16_t bar;
    uint16_t dummy;
} SystemParameters;

struct {
    uint8_t foo;
    uint8_t bar;
    uint8_t dummy;
} SystemParametersOld;

union {
    uint8_t raw_data[DATA_SIZE];
    struct {
        struct StandardHeader header;
        struct SystemParameters parms;
        struct SystemParametersOld parms_old;
    };
} flash_parms;

for (uint8_t sector = 0; sector < SECTORS_NUM; sector++) {
    uint16_t memory_address = OFFSET + sector * SECTOR_SIZE;
    sermem_read(&flash_parms, memory_address, sizeof(flash_parms));
    uint16_t chk_read = flash_parms.header.crc;
    flash_parms.header.crc = 0xFFFF;
    if (checksum_calc(&flash_parms, sizeof(flash_parms)) == chk_read) {
        /* Datablock is valid */
        if (flash_parms.header.counter > prev_counter) {
            /* Datablock is newer */
            prev_counter = flash_parms.header.counter;
            if (flash_parms.header.version == FLASH_PARMS_OLD) {
                /* How to convert the old version to the new version in place? */
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
            } else {
                /* Ok, new version. No conversion is needed. */
            }
        }
    }
}

As I wrote in the comment, how to convert old version to new version, using the same union in RAM? Do I need another union?

>>> [...] drivers, polling, etc., so that you can mix fast and slow tasks.
>>
>> Yes, ok.

Reply to
pozz

As I described, normally both.

One purpose of the raw_data array is to fix the size of the flash data. Another purpose is to provide an easy way to pre-'zero' the data. Another purpose is to provide an easy way to compute the CRC/checksum.

Yes, the last two could be done via type casting through void* parameters.

Setting them to what an erased byte is can sometimes save you power and time when programming. Having them be a known value can also make it easier to read the contents.
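As a concrete illustration of those two uses - a minimal sketch, not from the thread: it reuses flash_parms and checksum_calc from the snippets below, while flash_parms_prepare and FLASH_PARMS_CURRENT are hypothetical names:

#include <string.h>

/* Pre-'zero' the whole raw image to the erased-byte value (0xFF on most
   EEPROM/flash parts), fill in the live fields, then checksum the
   fixed-size buffer in one call over the raw array. */
void flash_parms_prepare(const struct SystemParameters *src, uint16_t counter)
{
    memset(flash_parms.raw_data, 0xFF, sizeof(flash_parms.raw_data));
    flash_parms.header.version = FLASH_PARMS_CURRENT;  /* hypothetical version id */
    flash_parms.header.counter = counter;
    flash_parms.parms = *src;
    flash_parms.header.crc = 0xFFFF;   /* neutral value while summing, as in the scan loop */
    flash_parms.header.crc = checksum_calc(&flash_parms, sizeof(flash_parms));
}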

It should be "struct SystemParameters {" or "typedef struct {" - you are defining a type here, not a variable.

ditto.

should be:

union {
    uint8_t raw_data[DATA_SIZE];
    struct {
        struct StandardHeader header;
        union {
            struct SystemParameters parms_new;
            struct SystemParametersOld parms_old;
        } parms;
    } data;
} flash_parms;

I actually don't use labels like Old but like Rev1, Rev2, etc. With some compilers you can use an 'anonymous union' and omit the parms name here and in the code.

You do NOT convert 'in place', but while transferring from the flash_parms block to the main usage block. Only the flash parm save/load routines have access to flash_parms, and they are used just for save and load. If you are tight on memory, it could be allocated on the heap only for load and/or save and freed after, but then you do need to be sure the memory IS available when you need it; maybe something else turns off for a bit when you need to do it.

sample code:

parms.foo   = flash_parms.data.parms.parms_old.foo;
parms.bar   = flash_parms.data.parms.parms_old.bar;
parms.dummy = flash_parms.data.parms.parms_old.dummy;

and for the new code, you do

parms = flash_parms.data.parms.parms_new;

You can do it in a single line and let the compiler move it all.
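Putting the pieces together, a hedged sketch of a complete load routine under the corrected layout above (load_parameters and default_parms are hypothetical names; the rest reuses names from the thread):

void load_parameters(void)
{
    int found = -1;
    uint16_t newest = 0;

    /* pick the newest block whose CRC checks out */
    for (uint8_t sector = 0; sector < SECTORS_NUM; sector++) {
        sermem_read(&flash_parms, OFFSET + sector * SECTOR_SIZE,
                    sizeof(flash_parms));
        uint16_t chk_read = flash_parms.data.header.crc;
        flash_parms.data.header.crc = 0xFFFF;
        if (checksum_calc(&flash_parms, sizeof(flash_parms)) == chk_read &&
            flash_parms.data.header.counter >= newest) {
            newest = flash_parms.data.header.counter;
            found = sector;
        }
    }

    if (found < 0) {
        parms = default_parms;   /* hypothetical factory defaults */
        return;
    }

    /* re-read the winner and up-rev while copying out, never in place */
    sermem_read(&flash_parms, OFFSET + found * SECTOR_SIZE, sizeof(flash_parms));
    if (flash_parms.data.header.version == FLASH_PARMS_OLD) {
        parms.foo   = flash_parms.data.parms.parms_old.foo;
        parms.bar   = flash_parms.data.parms.parms_old.bar;
        parms.dummy = flash_parms.data.parms.parms_old.dummy;
    } else {
        parms = flash_parms.data.parms.parms_new;
    }
}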

Reply to
Richard Damon

On 16/06/2018 00:25, Richard Damon wrote:
> On 6/15/18 4:54 PM, pozz wrote:
>> On 15/06/2018 19:11, Richard Damon wrote:
>>> I typically also do something like this. The data structure is a union
>>> of the basic data structure with a preamble that includes (in very
>>> fixed locations) a data structure version, checksum/crc and, if a
>>> versioning store, a timestamp/data generation number. A 'Magic Number'
>>> isn't often needed unless it is removable media, as it will either be
>>> the expected data or not, nothing else could be there (if the unit
>>> might have different sorts of programs, then a piece of the data
>>> version would be a program ID).
>>>
>>> Often I will have TWO copies of the data packet.
>>
>> TWO copies in EEPROM or in RAM?
>
> As I described, normally both.

So you are so lucky that you have abundant RAM.

Sincerely, I can't understand why you need a fixed data size in RAM. Suppose release 1.0 needs 2kB, but I am smart enough to provide double the space for new params in future releases, so I decide to use 4kB blocks *on the serial memory*. Maybe this big space will never be used in any future release.

Why the hell should I load the whole 4kB datablock into RAM at startup? IMHO it is sufficient to load only 2kB.

The only reason I see is to simplify the checksum calculation. The CRC must be calculated over the entire fixed-size 4kB block, otherwise how do you know the datablock size, if it can change from one release to the other? However, you can calc the checksum of the 4kB block by reading small blocks multiple times (for example, 16 sections of 256 bytes). After CRC validation passes, you can read into RAM *only* the 2kB of data that application 1.0 needs.

I know, maybe this process is slower, but it happens only at startup. During the writing of the 4kB datablock, the checksum can be calculated on the fly.
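A hedged sketch of that chunked startup check (calc_crc_update is an assumed incremental variant of calc_crc, and the 0xFFFF seed is an assumption; for clarity this sums the whole block, where real code would skip or neutralize the stored CRC field as in the earlier scan loop):

#include <stdbool.h>
#include <stdint.h>

#define BLOCK_SIZE 4096u
#define CHUNK_SIZE 256u

bool check_big_block(uint16_t base_address, uint16_t expected_crc)
{
    uint8_t chunk[CHUNK_SIZE];
    uint16_t crc = 0xFFFF;   /* assumed seed of the CRC in use */

    /* 16 reads of 256 bytes: only CHUNK_SIZE bytes of RAM are needed */
    for (uint16_t off = 0; off < BLOCK_SIZE; off += CHUNK_SIZE) {
        sermem_read(chunk, base_address + off, CHUNK_SIZE);
        crc = calc_crc_update(crc, chunk, CHUNK_SIZE);
    }
    return crc == expected_crc;
}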

Yes, indeed.

Yes, you're right.

Of course.

Again you're right.

Yes, it was only for the example here.

Now I understand your process. You have an image in RAM of the datablock saved in serial memory *and* another "operative" datablock in RAM (maybe identical, maybe organized differently) that is actually used by the code.

I don't like to waste so much space in valuable RAM memory, maybe because I started with very limited 8-bit MCUs where the RAM was very small.

One good compromise could be to change datablock layout with great care, only:

  • adding variables at the end
  • nullifying no-more-used parameters by simply changing their name with a "_deprecated" (or "_old") suffix (in order to be sure the code will never use them again). Immediately after an upgrade, the "_old" params will be converted into new params if needed (for example if they change from uint8_t to uint16_t). See the sketch below.
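A minimal sketch of what that compromise looks like in code (field names follow the earlier examples; migrate_after_upgrade is a hypothetical helper):

struct SystemParameters {
    uint8_t  foo_deprecated;  /* was 'foo' in 1.0; new code never reads it */
    uint8_t  bar;             /* meaning unchanged since 1.0 */
    uint8_t  dummy;
    uint16_t foo;             /* appended in 1.1, widened from uint8_t */
};

/* one-time fixup right after an upgrade */
void migrate_after_upgrade(struct SystemParameters *p)
{
    p->foo = p->foo_deprecated;   /* promote the old 8-bit value */
}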
Reply to
pozz

If you can't spare the space for the two buffers, then you become forced to 'lock' the system (or at least the parameter table) for the entire time of the flash write; otherwise you end up with corrupted (CRC error) data blocks. It also makes writing the format up-rev code much easier. Your comment about how to update 'in place' becomes a REAL question which needs to be answered, and that answer will generally require a detailed understanding of how the elements are physically stored, and making sure that you do things in the right order (which may require limiting the optimizations the compiler can do for that code, as you are likely to do actions the Standard defines as Undefined Behavior).

A big reason for having a copy in RAM that is an exact copy of the flash memory is design isolation. It allows me to define a checksum routine that computes the value of an arbitrary buffer, and flash routines that transfer arbitrary data, without either needing a full understanding of the parameter storage.

As to the use of extra memory: if I anticipate that I am possibly going to need a 4k parameter block in the future, but don't have the RAM now to read in the 4k block, most assuredly in the future when I do need that bigger parameter block, I am not going to have more RAM on THIS processor. Unless the flash memory is removable media (which presents a totally different set of issues), there is no need to reserve the extra space for this processor.

Yes, for VERY small processors this doesn't work. But programs on very small processors tend to have small parameter blocks and flashes with small sectors (at the very least you are probably going to need at least one memory block reserved the size of a flash sector, and the flash_parms structure can fit in that).

And yes, most of the time when you revise your needs in the parameter block, you add new items at the end, and if you stop using an item, it gets renamed with a distinctive suffix. When possible I try to maintain it having a reasonable representation of its value, so if you 'down-rev' the code, it gets something reasonable. I try to use a semantic versioning method for my flash version numbers, so that the code can decide if the parameter values are usable. If the major revision number is bigger than what it knows, it will refuse to use the parameter table (and either throw an error or revert to defaults). If the major revision matches, and the minor is greater than or equal to what it has, it just uses the parameters as is. If the major is equal and the minor is lower, after copying the parameters, it adds appropriate defaults (maybe computed from other items) to any items that have been added since the indicated revision. If the major revision is lower, then the code needs a full conversion routine as shown above (or throws the error/goes to defaults for too old, like for too new).

The changing of the type of a parameter is one of the cases where you are forced to bump the major revision, or you create new parameters with the new type, and the code does its best to give the old, now obsolete, values as reasonable a value as possible in case of a down-rev.
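A hedged sketch of that major/minor policy (the version encoding with the major in the high byte, and all names here, are assumptions):

#include <stdint.h>

#define VERSION_MAJOR(v) ((uint8_t)((v) >> 8))
#define VERSION_MINOR(v) ((uint8_t)((v) & 0xFFu))
#define CURRENT_VERSION  0x0203u   /* hypothetical: major 2, minor 3 */

enum parm_action { USE_AS_IS, ADD_DEFAULTS, FULL_CONVERT, REJECT };

enum parm_action classify_stored_version(uint16_t stored)
{
    if (VERSION_MAJOR(stored) > VERSION_MAJOR(CURRENT_VERSION))
        return REJECT;        /* too new: throw an error or use defaults */
    if (VERSION_MAJOR(stored) < VERSION_MAJOR(CURRENT_VERSION))
        return FULL_CONVERT;  /* full conversion routine (or reject if too old) */
    if (VERSION_MINOR(stored) >= VERSION_MINOR(CURRENT_VERSION))
        return USE_AS_IS;     /* same major, equal or newer minor */
    return ADD_DEFAULTS;      /* same major, older minor: default the new items */
}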

Reply to
Richard Damon

Why? You have a single datablock in RAM that must be transferred to the serial memory when changes occur. You start an activity (a task) that performs the copy in the background. If new changes occur during writing, the task can be restarted without problems, even on the same memory area. I hope the changes aren't so frequent, so eventually you will have a valid datablock (with correct CRC) written in memory, without blocking the main application.

Why do you need an additional datablock to avoid a blocking task?

Yes, solving the problem of converting an old layout into a completely different new layout is very complex without an additional structure. The in-place conversion would be very difficult, and it depends on the changes.

I think you can avoid all this stuff, as I said, by changing the datablock layout with care (adding parameters at the end, without changing the first/old layout).

Yes, you earn some cents on the implementation of some functions.

When you reserve a 4k block for future use, you don't know if it will REALLY be used in the future. You are right when you say "no space now, no space in the future", but are we sure we can know whether we have space or not?

I don't know if my approach is correct; I usually don't use heap memory, so I have only static data and stack. It's very difficult to calculate how much space you need for the stack to avoid stack overflow in ANY condition. So I end up reserving all the free RAM space for the stack, where the free space is the total RAM minus the static allocation.

So I try to be parsimonious when allocating global static variables, especially if they are big. Reserving some space for extra unused memory will sacrifice stack space.

Maybe the stack space is sufficient even with extra padding in RAM, who knows. At first I try to do my best to avoid useless big static allocations in RAM.

On the contrary, what happens if you notice that your stack is critical when the padding is in RAM? Do you prefer to reduce the datablock *padding* size, or to recode some functions to avoid having the padding in RAM at all?

I think there is another approach to keep the checksum functions simple and avoid having padding space in RAM. The problem here is the span over which the CRC is calculated. At startup we don't know the size of the data, because it could belong to an old or a new layout, so we can't calculate the CRC over the right area. We should know the data layout version before calculating the CRC, but the version code is *in* the CRC area.

We could think of pulling the layout version outside the CRC area and protecting it by duplication.

typedef struct {
    uint8_t layout_version;
    uint8_t layout_version_check;
    uint16_t crc;
    /* The CRC is calculated starting from the following byte */
    uint16_t counter;
    union {
        nvmdatav1_t datav1;
        nvmdatav2_t datav2;
    };
} datablock_t;   /* type name assumed; the original post was truncated here */

At startup the layout version is read and checked. Then the CRC is read and checked against the correct size, which depends on the layout version. The CRC checksum stays simple.

if (datablock.layout_version == datablock.layout_version_check) {
    size_t s = (datablock.layout_version == 1) ? sizeof(nvmdatav1_t)
                                               : sizeof(nvmdatav2_t);
    /* CRC covers everything after the crc field: counter + data,
       which starts at byte offset 4 (1 + 1 + 2 header bytes) */
    if (datablock.crc == calc_crc((const unsigned char *)&datablock + 4,
                                  sizeof(uint16_t) + s)) {
        if (datablock.counter > prev_counter) {
            /* New valid datablock found */
        }
    }
}

No padding at all (in RAM and in serial memory), simple functions.

Of course padding must be taken into account during writing, to separate the blocks in memory for redundancy and wear-leveling.

Good suggestion.

Reply to
pozz

On 15/06/2018 16:21, David Brown wrote:

I was thinking about this "long-running" task that writes non-volatile data to the serial memory in the background, during normal execution of the main application.

First question. If I understood your words well, the checking state has the goal of comparing the written data (by reading them back) against the original data in RAM. If they differ, we should go back to the writing state, maybe changing the destination sector (because the write could fail again due to physical damage of that sector).

Another question: what happens if saving is triggered again during writing? I think there isn't any big problem. The writing task can be started again from the beginning, even on the same destination sector. So during the writing and checking states I should check the writeTriggered flag again, prematurely stop the writing, and start again from the beginning.

Third question, more complex (for me). Suppose I decided to split the non-volatile data into two blocks, for example calibration data and user settings. What happens if the user settings change while the calibration settings are being written? I think I should convert your writeTriggered into two update flags: calibration_updated and settings_updated. In the idle state I should check both flags and start writing the relevant block.

Again another scenario. Until now we talked about settings, a structure filled with parameters that the user can change at any time.

How to manage a log, a list of events with a timestamp and some data? Suppose one entry takes 8 bytes. I reserve 4kB of memory for around 500 entries organized as a FIFO.

The log isn't as critical as the settings, so I think I could avoid redundancy in the non-volatile memory. Maybe only the CRC which, when not valid, clears the whole log. That should be acceptable.

As usual we can discuss the opportunity to read the full log and put it in RAM, or to read the entries when needed (because the user wants to read some entries, mostly the most recent). Reading 10 entries (80 bytes) from a 10MHz SPI memory doesn't take much (no more than 100usec). But here the problem is that reading can be needed during the writing of settings. And this is a big problem.

As usual, the simplest solution is to have the full log in RAM... sigh!

What about writing one or a few new entries to the log? A writing operation (for example, of settings) could be in progress. I should schedule and postpone the log update until after the writing of settings is finished.

Because you have much more experience than me (and you are so kind to share it with me and other lurkers), could you suggest a smart approach?

Reply to
pozz

First, often in the parameter block for me are cumulative usage stats, so the parameter block will be automatically saved on a controlled shutdown (like the user flipping the off switch) and maybe automatically after sufficient or significant usage. Yes, I can lose some usage on an unexpected turn-off (pull the plug), but some units may even have enough power reserves to save in this situation.

Yes, you can put off the need with controlled changes, but sometimes you just build up enough cruft that a re-layout makes sense. Maybe on the really small processors your tasks are simpler and more stable. I find I want to allow for the possibility that at some point the structure may need to be rearranged, and often I want a unit to be able to be upgraded and preserve its settings.

I follow roughly similar guidelines. The one exception is that I allow the use of dynamic memory that is allocated during 'startup'. Also, I find it useful to use a lightweight RTOS, so I don't have a single 'system stack' but several task stacks, so I do need to figure out the space requirements and can't just say one stack gets everything. You do run tests and check how much of the allocated space is used (stack space is prefilled with a funny pattern, and you see how much remains with that pattern).

When RAM space gets tight, you need to sit down and look at ALL RAM utilization and compare it to your initial estimates. You need to do this sort of estimate at the very beginning of the project so you know it should be feasible to begin with. You need to be able to flag that there is a serious implementation risk before the project gets too far, and perhaps re-target to a better processor.

There is no problem with the version code being inside the CRCed area. In fact you really want it checked: if something corrupted the version code, you really want the block invalidated. The version should be in the early header (so parm changes can't affect it), one reason I define a separate struct for it that is put in the flash_parms structure. It also means that the flash parameter saving can be a canned routine that defines that structure and uses the parameter definition from the application code.

I suppose this is one reason I don't want the size to change; at that point the 'canned' code has to be taken out of the can, as it needs more application knowledge to know the parameter size. The way I do it, I can have a parms.h that defines the RAM parameter structure and a #define for the size of the block to use in flash for it, and the canned routine does most of the work to find the best saved block and to save the parameters to flash on command. It does need a callback into the application layer with the block to use, to provide the application-layer up-rev of the parameter data.

Reply to
Richard Damon

On 16/06/2018 22:41, Richard Damon wrote:
> On 6/16/18 1:38 PM, pozz wrote:
>> On 16/06/2018 15:20, Richard Damon wrote:
>>> On 6/16/18 2:26 AM, pozz wrote:
>>>> On 16/06/2018 00:25, Richard Damon wrote:
>>>>> On 6/15/18 4:54 PM, pozz wrote:
>>>>>> On 15/06/2018 19:11, Richard Damon wrote:
>>>>>>> I typically also do something like this. The data structure is a
>>>>>>> union of the basic data structure with a preamble that includes
>>>>>>> (in very fixed locations) a data structure version, checksum/crc
>>>>>>> and, if a versioning store, a timestamp/data generation number. A
>>>>>>> 'Magic Number' isn't often needed unless it is removable media, as
>>>>>>> it will either be the expected data or not, nothing else could be
>>>>>>> there (if the unit might have different sorts of programs, then a
>>>>>>> piece of the data version would be a program ID).
>>>>>>>
>>>>>>> Often I will have TWO copies of the data packet.
>>>>>>
>>>>>> TWO copies in EEPROM or in RAM?
>>>>>
>>>>> As I described, normally both.
>>>>
>>>> So you are so lucky that you have abundant RAM.
>>>
>>> If you can't spare the space for the two buffers, then you become
>>> forced to 'lock' the system (or at least the parameter table) for the
>>> entire time of the flash write, otherwise you end up with corrupted
>>> (CRC error) data blocks.
>>
>> Why? You have a single datablock in RAM that must be transferred to the
>> serial memory when changes occur. You start an activity (a task) that
>> performs the copy in the background.
>> If new changes occur during writing, the task can be restarted without
>> problems, even on the same memory area. I hope the changes aren't so
>> frequent, so eventually you will have a valid datablock (with correct
>> CRC) written in memory, without blocking the main application.
>>
>> Why do you need an additional datablock to avoid a blocking task?
>
> First, often in the parameter block for me are cumulative usage stats,
> so the parameter block will be automatically saved on a controlled
> shutdown (like the user flipping the off switch) and maybe
> automatically after sufficient or significant usage. Yes, I can lose
> some usage on an unexpected turn-off (pull the plug), but some units
> may even have enough power reserves to save in this situation.

I see, so in your situation the usage stats parameters change more frequently than the writing time of the datablock. So you need to freeze a datablock for saving, while allowing the application to continue updating it.

As usual, different requirements lead to different solutions.

Before looking inside the CRCed area, I think it would be better to check the CRC itself.

At startup the 'canned' code reads the size and checks the CRC, without having to know many more details from the application.

Reply to
pozz

No, you can't necessarily do that. C has rules about what pointers can alias what data in what ways - so the compiler knows the data inside the struct is independent of a pointer to a uint16_t (such as you might use in the crc function), unless they are part of a union. In many cases, the compiler can't actually make use of such information for optimisation - so code messing about with pointers of different types will work. But sometimes it won't, especially with higher optimisation choices and link-time optimisation. It is much better to get in the habit of doing things correctly - if you want to access the data as 16-bit or 32-bit blocks (for checksums, for copying data, for passing in bulk to external memory, etc.) then use a union here. It might not be necessary, depending on the rest of your code (and by C's alias rules, the 8-bit raw is not needed), but I'm showing the principle here.

That depends on the application. I'm not doing /all/ your work for you :-)

Keeping some extra space for the future is a good idea. How much you use is a matter of taste.

Yes.

Reply to
David Brown

Yes, basically. It is up to you how you handle things if the ram copy can be changed underway. You might use a second copy in ram for a check, you might ban changes during the write/check period, or you might simply run a checksum on the written data and check that it matches.

Usually the most important thing is that the stored data is a consistent snapshot of the data structure, rather than the most recent version. So you should finish saving what you are doing before triggering a new write. But be very careful if you allow writing to the RAM copy of the data while a write is in progress.
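A minimal sketch of that snapshot idea under a cooperative loop (flash_write_busy and start_background_write are assumed driver calls; the rest reuses names from the thread):

#include <stdbool.h>

static nvmData_t live;               /* updated freely by the application */
static nvmData_t snapshot;           /* frozen copy owned by the write task */
static volatile bool writeTriggered;

/* called from the cooperative main loop */
void write_task_poll(void)
{
    if (writeTriggered && !flash_write_busy()) {
        writeTriggered = false;
        snapshot = live;                    /* consistent image: nothing else
                                               runs during this copy */
        start_background_write(&snapshot);  /* nonblocking; verifies later */
    }
}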

Sure, have as many blocks as you like.

Have part of your NVM chip reserved for the logs. Log in blocks, with a CRC on each. Don't bother holding more than two log blocks in RAM (one being stored to NVM, and one being updated at the moment).

Reply to
David Brown

I will typically divide my parameters into two groups. One group has the data that the user sets, usage information, and other information that is updated as the user works. This data gets saved shortly after the user makes a setting change, or when enough time passes that the other information is worth saving. I often also include a user option to reset this to some 'factory default' for when the user totally messes up the settings (this won't reset the usage data, just the user settings). There is a second block of factory calibration data. This will never be updated by the user (or only by very trusted users), and typically this block doesn't have multiple copies (unless I need a backup for actual flash corruption). Activating the save for this block requires giving the device a special unlock sequence, which allows the adjustment of these parameters, and then a specific factory save command.

For logs, I will define a log information block to store a single log entry, and pack as many of them as I can into a flash sector. The total log then has a number of these sectors reserved for it, forming a circular list (so writing a new log entry overwrites the oldest log record). I tend to have two sectors of these log entries 'cached', so I can be creating one log entry at the end of one block and one at the beginning of the next block. While I am filling a given log entry, it is marked as 'invalid', and that mark is cleared when the entry is finished. A given sector is written when it is full, or a sufficient time after a block has been updated, to minimize log data losses due to power loss. This write uses the same flash buffer as the parameter flash buffer, as I can't be writing both a log sector and a parameter sector at the same moment.
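A hedged sketch of that layout (the entry fields, sector size and LOG_BASE are assumptions; only the 8-byte entry size and the 'in progress' mark come from the description above):

#include <stdint.h>

#define LOG_ENTRY_SIZE     8u
#define LOG_SECTOR_SIZE    256u    /* assumed flash sector size */
#define ENTRIES_PER_SECTOR (LOG_SECTOR_SIZE / LOG_ENTRY_SIZE)
#define LOG_SECTORS        16u     /* 4kB reserved, ~500 entries */

struct log_entry {
    uint32_t timestamp;
    uint8_t  event;
    uint8_t  data[2];
    uint8_t  flags;   /* bit 0: 'in progress', set while filling,
                         cleared when the entry is complete */
};

static uint16_t log_index;   /* next free slot over the whole reserved area */

/* advance circularly, so a new entry overwrites the oldest record */
static uint16_t next_log_address(void)
{
    uint16_t addr = LOG_BASE + log_index * LOG_ENTRY_SIZE;
    log_index = (log_index + 1u) % (LOG_SECTORS * ENTRIES_PER_SECTOR);
    return addr;
}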

Reply to
Richard Damon

On 17/06/2018 19:27, David Brown wrote:

My Thunderbird has a Reply button that sends an email to the sender and not to the group. I have to right-click and choose "send to group" explicitly. And it sometimes happens that I forget.

Good suggestion. Maybe I failed to explain that the user may want to read the log. If I don't keep *all* of the log in RAM, it is possible that the application needs to read the log from the memory chip... this is against our first assumption of reading all the data at startup and keeping it in RAM to simplify writing without blocking.

The function that needs to return one or more entries should read from the memory chip... however, the chip could be busy with a write. One possibility is to block while waiting for the end of the write, but blocking tasks aren't good. Another is to change the function to be asynchronous...

Reply to
pozz

Does your application read log entries? What do you do to avoid reading while the memory chip is busy writing?

Reply to
pozz

If it is an external flash (or an internal flash where a write blocks reading), then when the application asks for the block, it will block on the mutex guarding the device. One big reason to use a pre-emption based system. Normally, reading of log entries is only done in response to an external command.
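For what it's worth, the shape of that guarded read, with a generic RTOS mutex API (mutex_t, mutex_lock/mutex_unlock and log_entry_address are hypothetical names):

static mutex_t sermem_mutex;   /* guards the serial memory chip */

/* blocks this task only, not the whole system, while a write holds the chip */
void log_read_entries(struct log_entry *dst, uint16_t first, uint16_t count)
{
    mutex_lock(&sermem_mutex);
    sermem_read(dst, log_entry_address(first), count * LOG_ENTRY_SIZE);
    mutex_unlock(&sermem_mutex);
}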

Reply to
Richard Damon

In my cooperative kernel, I have two choices:
- block the entire application waiting for serial memory availability
  * for a 24LC64, at most the page-write time, max 5ms
  * for a serial Flash, at most the sector-erase time, which is too much
- convert the code into a state machine, sigh... :-(

I usually work on a bare-metal system.

Reply to
pozz

And that is one of the issues YOU need to solve when you drop down to a cooperative/bare-metal system: what to do if you want to do something but can't at the moment. You need to design in ways to effectively use the wait time, and yes, that often means things like state machines, and that often means that the base level of an operation needs to know when some sub-part isn't ready to do its thing.
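A minimal sketch of that state-machine shape for the cooperative case (sermem_busy and start_erase/start_write/start_verify are assumed nonblocking driver calls):

enum wr_state { WR_IDLE, WR_ERASING, WR_WRITING, WR_VERIFYING };
static enum wr_state wr_state = WR_IDLE;

/* each poll does at most one short, nonblocking step */
void write_sm_poll(void)
{
    switch (wr_state) {
    case WR_IDLE:
        if (writeTriggered) { start_erase();  wr_state = WR_ERASING;   }
        break;
    case WR_ERASING:
        if (!sermem_busy()) { start_write();  wr_state = WR_WRITING;   }
        break;
    case WR_WRITING:
        if (!sermem_busy()) { start_verify(); wr_state = WR_VERIFYING; }
        break;
    case WR_VERIFYING:
        if (!sermem_busy()) { wr_state = WR_IDLE; }  /* retry-on-bad-verify omitted */
        break;
    }
}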

On very small machines, where the code isn't that complicated (being limited by the processor's ability), the bare-metal approach isn't that bad. As the machine gets bigger, normally because the task has gotten more complicated, the bare-metal, hand-crafted cooperative system starts to get heavy, so you 'upgrade' to a pre-emptive micro-kernel (and if the problem gets enormous, maybe you upgrade to a large-scale processor running a full embedded OS).

Reply to
Richard Damon

You should consider the use of an FRAM chip. You can get SPI or I2C versions. Endurance almost becomes a non-issue. They support byte-by-byte writes, and the write speed is pretty much the same as reading - with the serial interface the write time is pretty much hidden in the interface timing.

I use them for NVM storage, and it is possible to create a robust parameter and settings system around FRAM. My general concept is to store data in blocks, two times, with CRCs. The CRCs allow checking at load time whether to use the first or the second stored image. If both CRCs are bad, then initialize to defaults.

With the beauty of byte writes, I have my driver set up so that I keep two copies of the data set in RAM. One matches the stored content and the other copy is where changes are made. At the time of the write commit to NVM, I only write the bytes that have actually changed. This drastically reduces the amount of time spent storing a data set back to the FRAM when only a few bytes have changed.
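A minimal sketch of that diff-commit (fram_write_byte is an assumed byte-write driver call; a real version might coalesce runs of changed bytes into burst writes):

#include <stddef.h>
#include <stdint.h>

/* write back only the bytes that differ between the working copy and
   the shadow of the stored content, keeping the shadow in sync */
void fram_commit(uint16_t base, uint8_t *stored, const uint8_t *work, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (stored[i] != work[i]) {
            fram_write_byte(base + i, work[i]);
            stored[i] = work[i];
        }
    }
}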

--

Michael Karas 
Carousel Design Solutions 
http://www.carousel-design.com
Reply to
Michael Karas
