Embedded RTOS - features poll

By "jiffy" I assume you are referring to Linux?

Originally, Unix was just another time-sharing system.

Fortunately, Linux also has purely priority-based scheduling; otherwise it would not be usable for any real work :-).

There is something wrong in an RT system design if you put several tasks at the same priority.

The first time I heard the expression "priority inversion" was when NASA had problems with it on a rover on Mars. Before that I had been making multitasking systems for decades without similar problems. Of course, maintaining clear data ownership, and atomic access for small data entities, helps a lot.
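A hedged sketch of that discipline in C (the names and the use of C11 atomics are illustrative; on a small MCU the same effect is often had by briefly disabling interrupts around the access):

    /* Single-owner data with atomic access for a small entity (C11).
     * Only the temperature task ever writes; readers never need a lock. */
    #include <stdatomic.h>
    #include <stdint.h>

    static _Atomic int16_t latest_temp_c;   /* owned by the temperature task */

    void temp_task_update(int16_t c)        /* called only by the owner task */
    {
        atomic_store(&latest_temp_c, c);
    }

    int16_t temp_read(void)                 /* safe from any other task */
    {
        return atomic_load(&latest_temp_c);
    }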

The first rule of RT system tuning is to find jobs that could be moved to a _lower_ priority.

Unfortunately, most people seem to intuitively try to increase the priority of some "important" task :-(.

Reply to
Paul Keinanen

The "system tick" is often referred to as the "jiffy". Since most MTOS's only support a single "clock", this tends to also drive the scheduler.

Do people actually *do* work on Linux? Seems most folks work on Linux *itself*! :>

If there is no inherent relationship between those tasks, then how do you prioritize them? If I have a box "running my home", what are the *relative* priorities of the HVAC controller and the irrigation controller and the burglar alarm?

Yes. I always make sure that I remember the role of an OS is to make my (developer) life easier by taking care of "things" for me (that are either too boring to attend to *or* too *tedious*). As systems get more complex -- especially systems that *grow* -- it gets hard to keep track of all the interactions between (shared) objects and entities. A good candidate for an OS service!

*EXACTLY*! I recently posed a related question regarding how people tweak things and the psychology employed. E.g., if you are doing N things on your workstation ("PC") and are particularly interested in one of them, how do you "bias" the machine to expedite the "task" (activity) that you are focused on? We tend to "kill" activities that we consider least important in the hope that this will help the *desired* activity proceed faster. But we don't even consider how many resources are being used by that "killed" activity when we make the decision to terminate it.

Wouldn't a more meaningful interface (in this example) be a facility that lets you *elevate* an activity's importance (nice -1000) instead of having to *de-emphasize* everything else?

[note that this is the exact opposite condition from what you are describing -- sort of. :> In your scenario, the developer should have been disciplined to run each task at "the lowest possible priority". In the workstation example I posed, the user doesn't think in terms of how he might want to refine his priorities ex post facto while he is "creating" those "activities"]
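(On a POSIX workstation the "elevate one thing" interface half-exists already; a hedged sketch, keeping in mind that raising priority, i.e. a more negative nice value, normally needs privileges, and that -10 is just an example number:)

    /* Boost the one activity you care about instead of renicing the rest. */
    #include <sys/types.h>
    #include <sys/resource.h>
    #include <stdio.h>

    int boost(pid_t pid)
    {
        if (setpriority(PRIO_PROCESS, pid, -10) != 0) {  /* example value */
            perror("setpriority");
            return -1;
        }
        return 0;
    }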
Reply to
D Yuniskis

That should not be hard.

Anyway, you may have to divide each task into several subtasks with different priorities. In any case, each task should not need to run for more than a millisecond (or, in the "running my home" example, for more than a minute).

Some of these issues are really high-level architectural decisions. If done improperly, you end up with a lot of locking requirements, etc.

In the 1980s I was running a department making control systems based on PDP-11/RSX-11 systems. These machines had a program-addressable space of 64 KiB and a maximum physical memory of 256 KiB or 4 MiB, depending on the model.

Once I got the invitation to tender, I split the problem into tasks, decided what data should be owned by each task and how they would communicate with each other, assigned preliminary priorities, and thought about which person on my team should do each task. Only after that did I start to estimate how long each task would take, before writing the tender.

We never had problems fitting each program into 64 KiB, and the projects stayed within budget far better than those of other departments working with the "unlimited" VAX address space.

Reply to
Paul Keinanen

Hi,

thanks for your response.

Stargazer expounded in news:67fc4024-eeeb-4fe5-a1cf-

I think it will depend on the project. I currently have support for ARM9, MIPS32 and x86; MIPS64 has barely been seen to work on an emulator. MSP430 (16-bit) is likely to be supported, and PPC 32/64-bit is possible, but I have to understand the requirements in the fields where they are used.

Projects that are built around 8-bit MCUs have even more "custom" needs than those based on stronger CPUs. I do most of the work for 32/64-bit CPUs, so I know many questions in their application field that the current OS offerings don't answer (or are not known to answer). I understand the 8-bit MCU field less (mostly evaluations, and stories where projects switched from an 8-bit MCU to something more capable due to increasing requirements); it seems to me that projects based on them count every feature against cents of cost. I think that for an 8-bit MCU you may only be interested in memory (?) / task (?) management, a flash interface and a chip support library. All of this is easily isolated from my OS, but I would have to do a real 8-bit project to verify what issues I may have there.

In principle, there is nothing that prevents my OS from being ported to an 8-bit MCU without an MMU.

Probably I can do that, but I don't see how such an implementation could be included in the common OS code. The implementation will be grossly different between 1K, 2K and 4K of RAM, and it will probably not work with USB disks at all. It may be used as reference code to recall ideas, but I don't even see how it could be used in another project for anything other than the 1K requirement.

I take Linux as a scalability example; it scales well from relatively resource-limited systems to multiprocessor high-end systems. I believe that the most natural application field for my OS is small to medium resourced 32-bit and 64-bit systems (higher-end systems usually have fewer performance / footprint problems with Linux, and as of now it will be hard for me to compete on the features offered due to architecture differences - porting open-source code from Linux to my OS is not entirely smooth). Things may change, however.

Due to my own (in many cases painful) experience, a developer that uses an embedded OS must be provided with complete, buildable source code. Such an OS must help the developer with its ready features, interfaces and frameworks, but not prevent him from understanding how it works in detail and changing it to suit his needs. I am not sure as of now whether it will be "free beer" itself or not, and what restrictions would be suitable if a customer wants to massively redistribute it (I don't want it to absorb too many third-party perceptions).

Well, I'm not very new to engineering (almost two decades). However, I do believe that "more features" is "better" (it's just that if you don't need something, it shouldn't be there for you to stumble over). When I meet with a client and they say that they want to do this and that, I first of all evaluate what software offerings exist and what features we have ready. Usually, if enough features cannot be readily collected, the project never leaves the requirements stage.

Well, features that I meant are something different. Examples:

  • Priority-based task scheduler
  • Standard C library
  • POSIX I/O
  • pthreads
  • TCP/IP stack
  • PCI host and enumeration support
  • telnet server
  • HTTP server
  • CPU 'x' port
  • Platform 'y' support (chip1, bus2, etc.)

etc.

IMO, lists are not part of an OS feature list. E.g. I use sorted lists to implement installable timers and some other things, but if you don't use all that and don't use them yourself, then they are not in your build and do not affect your footprint. The same goes for the things that IMO are OS features.

In principle, there's nothing that prevents using my OS with a custom task manager. There are some things that need to be done, however, as drivers, sockets, timers etc. use the native task manager API to put tasks to sleep and wake them - all of that needs to be changed to work with a different task management system. Source code will be handy.

I think that microkernels and their derivatives didn't prove themselves. They are possible designs, and they are better on some theoretical points, but they solve real needs worse than what is currently popular. Just my opinion, but the industry seems to back it.

Somebody here suggested looking at academia - of the academic offsprings, TRON got to the most developed stage, but it follows the "classic" (single address space, single binary, monolithic kernel) embedded OS design.

From what I know, most RTOSes, including the most well-known (VxWorks, pSOS, LynxOS), were developed in the same way that I'm going to follow. I have already put in some thought, but don't see anything better. I think that other custom OSes that people write (individuals and companies) follow the same path, eventually stopping or settling at some point along the way. It's hard to tell why the "well-known" guys got to the point of extremely low quality and selling non-working features, which got the embedded OS field where it is today - they won't tell me.

Thanks, Daniel

Reply to
Stargazer

Vladimir Vassilevsky expounded in news:oeidnTWnfd1fMgrRnZ2dnUVZ snipped-for-privacy@giganews.com:

Indeed!

Of course it is limited, but hardly stupid. You don't need a Linux kernel on a 32/64-bit platform to do data logging.

You can of course accomplish this with a little 8-bitter, logging to a FATFS on a stick. It simply takes a little planning.

Warren

Reply to
Charmed Snark

Of course. I wasn't posing a real problem but, rather, illustrating that (sometimes) tasks don't have a clear relationship to each other in terms of priorities.

The trick in most (almost all??) multithreaded applications is data sharing (or hiding) and communications. This is the first thing that I look at when facing a new design or evaluating an existing design -- if either is poorly thought out, you end up with lots of extra "work" (i.e., code) being done to compensate.

The real challenge is figuring out how to come up with architectures that support growth and revision without unduly penalizing them in their initial/current form. And doing this in a deterministic manner for real-time applications is doubly challenging!

Smaller, generally, *is* better. Make threads inexpensive so the user isn't discouraged from using them liberally. I think the same philosophy should apply to all OS mechanisms for similar reasons -- if something is "expensive" (architecturally), then developers will tend to avoid it... perhaps even when they *shouldn't*.

A consequence of this (IMO) is that you should offer the minimum set of features required to provide the needed complement of services. I.e., don't do the same thing three different ways; pick one and use it exclusively (e.g., use *one* type of synchronization primitive) so that you can optimize *that* implementation without having to accommodate other variations.
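A sketch of that "one primitive" idea, using a counting semaphore for both mutual exclusion and signalling (POSIX sem_t is used purely for illustration; any RTOS semaphore has the same shape):

    #include <semaphore.h>

    sem_t lock;        /* initialised to 1: mutual exclusion          */
    sem_t data_ready;  /* initialised to 0: producer -> consumer wake */

    void init(void)
    {
        sem_init(&lock, 0, 1);
        sem_init(&data_ready, 0, 0);
    }

    void producer(void)
    {
        /* ... produce an item, put it somewhere shared ... */
        sem_post(&data_ready);          /* signal: work available  */
    }

    void consumer(void)
    {
        sem_wait(&data_ready);          /* wait for work            */
        sem_wait(&lock);                /* enter the shared section */
        /* ... consume the shared item ... */
        sem_post(&lock);
    }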

Reply to
D Yuniskis


I wouldn't try to knock his work too much. FreeRTOS is very widely used and is looking a bit like a de facto standard. SafeRTOS is his paycheck, and I am sure it is a bit harder to use "safely", so it is much more likely that others will need his services to help with their application. I can't find any real fault with what he offers or how he offers it. I only wish I had something of this utility to offer the world (and make an income from).

Rick

Reply to
rickman

If you mean logging to a USB memory stick, that will require your 8-bitter to implement a USB Host interface.

My approach has been to use SD or SD micro cards with an in-house sequential file system. It doesn't need an RTOS, just an interrupt handler to collect and queue the data and a main loop to pull from the queue and write to SD. That works up to about 400 16-bit samples per second at an average current of around 10mA using an MSP430. I don't think you can run Linux effectively on 10mA at 3.3V.
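Roughly the shape of that ISR-plus-main-loop logger, as a hedged sketch (sample_read() and sd_write_block() stand in for the real MSP430 ADC and SD drivers, and overflow checking is omitted):

    #include <stdint.h>

    #define QSIZE 512u                          /* power of two, in samples */
    static volatile uint16_t q[QSIZE];
    static volatile uint16_t head, tail;        /* head: ISR, tail: main    */

    extern uint16_t sample_read(void);          /* read one ADC result      */
    extern void sd_write_block(const uint16_t *p, uint16_t n);

    void adc_timer_isr(void)                    /* collect and queue        */
    {
        q[head & (QSIZE - 1u)] = sample_read();
        head++;                                 /* single writer, no lock   */
    }

    void main_loop(void)                        /* pull from queue, write   */
    {
        static uint16_t block[256];             /* one 512-byte SD sector   */
        for (;;) {
            while ((uint16_t)(head - tail) >= 256u) {
                for (uint16_t i = 0; i < 256u; i++)
                    block[i] = q[tail++ & (QSIZE - 1u)];
                sd_write_block(block, 256u);    /* done at task level       */
            }
            /* enter a low-power mode here until the next interrupt */
        }
    }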

Mark Borgerson

Reply to
Mark Borgerson

At least part of a USB host stack is needed to support the basic read/write operations on the mass storage device.

We used a simple sequential file system also, despite the obvious limitations. Later we switched to FAT with a POSIX API, and it was so much better. There is no need for Linux; a standalone, full-featured, multithreaded FAT implementation takes ~50 KB + buffers. The power consumption is determined by the number of transactions per second rather than by the memory size.
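The POSIX-style layer really is the convenience here; a minimal, hedged sketch of what a logging write looks like through it (the path and the open-per-write pattern are only for illustration):

    #include <fcntl.h>
    #include <unistd.h>
    #include <stdint.h>

    /* Append one block of records to a log file on the FAT volume. */
    int log_append(const void *rec, uint32_t nbytes)
    {
        int fd = open("/sd/log0001.bin", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0)
            return -1;

        ssize_t w = write(fd, rec, nbytes);
        fsync(fd);                 /* flush data and FAT before power loss */
        close(fd);
        return (w == (ssize_t)nbytes) ? 0 : -1;
    }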

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant


Reply to
Vladimir Vassilevsky

Mark Borgerson expounded in news: snipped-for-privacy@news.eternal-september.org:

Nope. Why bring USB into it? Just use an SD memory card. The interface for it is trivial.

Fine, but that is not friendly to the user receiving the SD data. Now he/she needs a special app to pull the data off the card.

OTOH, you can use minimal FATFS software to create and write log file(s).

Anyone can just write sectors to the SD without a FS, but it is terribly inelegant.
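Assuming something like ChaN's FatFs as the "minimal FATFS software" (the drive string and file name here are only examples), the logging side stays very small:

    #include <stdint.h>
    #include "ff.h"             /* ChaN's FatFs, assumed */

    static FATFS fs;
    static FIL   fil;

    int start_log(void)
    {
        if (f_mount(&fs, "", 1) != FR_OK)
            return -1;                              /* no card / bad FS */
        if (f_open(&fil, "LOG0001.BIN",
                   FA_WRITE | FA_CREATE_ALWAYS) != FR_OK)
            return -1;
        return 0;
    }

    int log_sector(const uint8_t *buf)              /* one 512-byte write */
    {
        UINT bw;
        if (f_write(&fil, buf, 512, &bw) != FR_OK || bw != 512)
            return -1;
        return (f_sync(&fil) == FR_OK) ? 0 : -1;    /* keep FAT consistent */
    }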

Warren

Reply to
Charmed Snark

I agree. If you are writing to flash or SD cards, the current drain during a sector write may be 10X that used just to acquire the data.

One problem I've run across with very sensitive analog systems is the EMI and power supply transients that occur when the SD card writes a sector.

Mark Borgerson

Reply to
Mark Borgerson

The problem with a truly minimal FAT16 or FAT32 FS is that you either have to cache the FAT and directory entry or you get two or more sector writes for each file write operation. That may be OK if you have very large RAM buffers and only write once every ten seconds. It doesn't work well with 2K or so of RAM.

For FAT32, another problem with simple implementations is that writing a new cluster may require traversing the FAT to allocate the next open cluster. Avoiding a long sequence of sector reads can require either a modification to the simple FS or a large RAM buffer. That problem killed FAT32 for me when my SDHC files got up to about 200MB.

This particular logger has to sit on an oceanographic mooring and collect 15 channels of data at 100Hz for about 4 to 6 months on internal batteries. Even with lithium primary cells, the power budget only allows about 20mA for the logger and sensors.

True. That's why I wrote a simple sequential file system. The logger comes with a host program that knows how to upload smaller files via USB and can handle the FS when the SD card is plugged into a PC. The files are in binary format, so some processing program is inevitable.

Mark Borgerson

Reply to
Mark Borgerson

You can certify a bowl of spaghetti just the same as you can certify sushi -- it does not mean they are both automatically suitable.

Reply to
bigbrownbeastie

Mark Borgerson expounded in news: snipped-for-privacy@news.eternal- september.org:

Not exactly. You only search and update the FAT when you're adding clusters. Yes, you need one at the beginning of a new file, but thereafter it depends upon your sectors/cluster parameter. Only if this value were 1 would every file sector write require diddling the FAT again to add a new cluster. Sectors per cluster is usually 8 or higher (especially higher if the FS is large).

This is a problem indeed if you always start from the beginning of the FAT. However, an uncomplicated way to improve this is to make a note of the lowest available cluster number and store _that_ in a 32-bit variable in the FS. Then, the _next_ time you go hunting for a free FAT entry, you only need to start from the FAT sector containing that free cluster entry (which is easily computed).
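A sketch of that hint in C (fat_get() and total_clusters are placeholders for whatever FAT-entry accessor and volume geometry the FS already has), including the delete-path refinement mentioned further down the thread:

    #include <stdint.h>

    extern uint32_t fat_get(uint32_t cluster);   /* returns 0 if free       */
    extern uint32_t total_clusters;              /* clusters 2..N+1 exist   */

    static uint32_t next_free_hint;              /* 0 = no hint, start over */

    uint32_t alloc_cluster(void)
    {
        uint32_t start = (next_free_hint >= 2) ? next_free_hint : 2;

        for (int pass = 0; pass < 2; pass++) {   /* hint first, then full scan   */
            for (uint32_t c = start; c < total_clusters + 2; c++) {
                if (fat_get(c) == 0) {
                    next_free_hint = c + 1;      /* where the next search starts */
                    return c;                    /* caller marks it end-of-chain */
                }
            }
            start = 2;
        }
        next_free_hint = 0;
        return 0;                                /* genuinely out of space */
    }

    void on_cluster_freed(uint32_t c)            /* called from the delete path */
    {
        if (next_free_hint == 0 || c < next_free_hint)
            next_free_hint = c;
    }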

If write performance is needed and you want to retain the advantages of FAT, I think there are possibilities. For example, you could preallocate all the clusters for your log file at the beginning, since you know the sample rate and duration. Your FS software would simply need to provide a "rewrite file sector" function as samples are written out.
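Back-of-the-envelope arithmetic for that preallocation, using figures quoted elsewhere in the thread (15 channels of 16-bit samples at 100 Hz; the 4-month duration and 32 KiB cluster size are assumptions):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint64_t bytes_per_sec = 15ULL * 2 * 100;         /* 3000 B/s  */
        const uint64_t seconds       = 4ULL * 30 * 24 * 3600;   /* ~4 months */
        const uint64_t total_bytes   = bytes_per_sec * seconds; /* ~31 GB    */
        const uint64_t cluster_bytes = 32ULL * 1024;            /* assumed   */
        const uint64_t clusters      = (total_bytes + cluster_bytes - 1)
                                       / cluster_bytes;

        printf("%llu bytes -> %llu clusters to preallocate\n",
               (unsigned long long)total_bytes,
               (unsigned long long)clusters);
        return 0;
    }

That works out to roughly a million FAT entries to touch up front, which puts a number on the preallocation-time concern raised below.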

Other alternatives could include building a list of free clusters up front. As long as no other cluster-allocating FS operations are going on at the same time, you could simply log your data to those clusters directly. Then another task could link them together into a file on a lower-priority basis, finishing with (and perhaps periodically updating) the directory entry for the log file involved.

Warren

Reply to
Warren

That is probably the best way to handle it if there is no capability to delete files. If you can delete files, you can end up with the free FAT entries scattered all through the table. The next available FAT entry might be at a lower entry than the last used entry. When that happens, you're back to traversing the FAT.

I've considered that option. On a 16 MHz MPU with SPI SD access, it can take quite a long time to pre-allocate all the clusters on a 32GB SDHC card.

I actually wrote a PC application to "clone" such a pre-allocated SD card using direct device writes to "format" the SD card. It turned out to be more trouble than it was worth.

If you have all of the 32GB card appearing as one large file, you would then need an app to break out the data into smaller chunks for analysis.

That sounds like quite a complex scheme for an MSP430 running at 16MHz and collecting 2KBytes of data per second for months. Those low-priority tasks might also have significant power consumption.

Mark Borgerson

Reply to
Mark Borgerson

Mark Borgerson expounded in news: snipped-for-privacy@news.eternal-september.org:

Deletes don't present a problem. The only time the effect of a delete becomes critical is when you're nearly out of space on the volume. But the algorithm I described still works without complications.

Let's say you've allocated the last cluster near the end of the FAT. Now your remembered lowest available cluster number is zero, because the FAT search didn't encounter a next free cluster. Cluster numbers start at 2, so zero is a good way to indicate 'no cluster'. The current cluster allocation has been satisfied; the next one "might" fail, but...

Then a file gets deleted and say it frees up 2 more clusters.

The next attempt to allocate a cluster will see the next available cluster number as zero (none). This indicates the need to start at the beginning of the FAT. If the FAT search (from the beginning) still comes up empty, then you know you're out of space. But in this case you'll get the lowest freed cluster from the last file delete.

So deletes don't complicate this. There is no rule that says you have to allocate the lowest numbered cluster.

If you wanted to get fancy, you'd actually prefer to keep clusters close together for a given file (if it were a physical disk). But as you know, this is hardly necessary in embedded applications and unnecessary for a memory based FS.

Surely this is a small startup time, compared to months of data capture. You can do this in less time than it takes to format the card.

I don't see it being any more trouble than a format operation. If you leave it to your MCU to do (at startup), then it need not be part of the human procedure.

That is your choice in application design. You have the same issue whether the data is physically stored or in a file system's "file".

It is complex, but an option. Given that you have months of time to chain together a file, I don't think you're time challenged. Of course it will contend with the SD/mmc device at some level and this may introduce unacceptable latency, which must be considered.

Pre-allocating is definitely simpler and could be done after a RESET, after opening the FS on the SD/mmc card.

Warren

Reply to
Warren

If the deleted file happens to be somewhere in the middle of the SD card, you may have to scan a lot of FAT to find that free cluster.

That's correct. However, that is the way it was done in the simple FAT system I started with.

It could be small in relation to the months of logging, but large in relation to the other tasks needed to prepare the system. On a FAT32 system, the FAT can be as large as about 16MB. That's about 32768 512-byte sectors. On a slow system, you might only be able to write 15 KB (about 30 sectors) per second. That means that formatting the SD card might take 15 minutes. Multiply that by 4 cards, and you have about an hour to format the storage on the logging system.

That's not a bad idea. The formatting of the disk could be part of a burn-in procedure at initial build.

The only other downside is that you could lose 16MB (or 32 if there are dual FATS) of storage space. That's not a large factor, though.

That might be the largest problem. The MSP430 doesn't have a lot of spare RAM to buffer data if some other process is blocking access to the SD card.

Given that it might take 15 to 60 minutes, it probably can't be done on each power-up or reset. That's particularly true if the FS has to retain calibration data over multiple logging sessions.

Mark Borgerson

Reply to
Mark Borgerson

Mark Borgerson expounded in news: snipped-for-privacy@news.eternal- september.org:

Simple optimizations don't guarantee that you always save a long search. ;-)

You can, of course, make a slight improvement: if the saved available cluster is currently zero and you're deleting clusters, make a note of one of them. Then at least you skip some of that overhead on the next allocate-cluster call.

But on a memory card, unlike a floppy, you don't need to write _every_ sector. You need to write the boot sector, the root dir sector(s) (FAT16), and _one_ FAT table. If you're formatting FAT32, then you also need to allocate one cluster to hold the start of the root directory and fill that cluster with zeros. All remaining clusters can be left in "as is" condition.

So if you choose your sectors/cluster parameter wisely and choose 1 FAT table, then you only need to write a very small fraction of the whole FS image.
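A sketch of that quick format for FAT16, assuming one reserved sector and a single FAT (sd_write_sector() and build_boot_sector() are placeholders; a real version derives the geometry from the BPB it writes):

    #include <stdint.h>
    #include <string.h>

    extern int  sd_write_sector(uint32_t lba, const uint8_t *buf);
    extern void build_boot_sector(uint8_t *buf);   /* fills the BPB fields */

    int quick_format_fat16(uint32_t fat_sectors, uint32_t root_dir_sectors)
    {
        uint8_t sec[512];

        build_boot_sector(sec);                    /* 1. boot sector (LBA 0) */
        if (sd_write_sector(0, sec))
            return -1;

        memset(sec, 0, sizeof sec);                /* 2. one FAT, all free   */
        sec[0] = 0xF8; sec[1] = 0xFF;              /* entry 0: media + fill  */
        sec[2] = 0xFF; sec[3] = 0xFF;              /* entry 1: reserved      */
        if (sd_write_sector(1, sec))
            return -1;
        memset(sec, 0, sizeof sec);
        for (uint32_t s = 1; s < fat_sectors; s++)
            if (sd_write_sector(1 + s, sec))
                return -1;

        /* 3. empty root directory: all zeros = end-of-directory */
        for (uint32_t s = 0; s < root_dir_sectors; s++)
            if (sd_write_sector(1 + fat_sectors + s, sec))
                return -1;

        return 0;   /* the data area is left "as is", as described above */
    }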

For floppies, the format was also a physical sector operation in addition to setting up the FS. But a memory card doesn't need this.

The only reason I can think of for doing a full format on a memory card, would be to discover any bad sectors. But this is something that the FS software might be able to do on the fly, if it is designed for it.

I don't have any evidence on how often memory cards have bad sectors. But an up-front screening process could eliminate unsuitable cards. It really comes down to how critical your data capture application is. If it _is_ indeed critical, then you don't have any choice in this matter anyway - a full format will be required.

You only need 1 FAT - so format it that way. Problem solved.

Multiple FATs were designed for flakey floppy disks. Hardly necessary on a memory card, unless you intend to use it close to the flash rewrite limits. But that wouldn't be prudent for data logging purposes.

True, and just running another "task" requires allocating a separate stack etc.

I assumed continuous operation, once started. Obviously, if it must be restartable, then perhaps it could be done only as part of some special startup handshake, like a button press.

Warren

Reply to
Charmed Snark

Are you assuming that the card has never been used and is starting with 0xFF in all sectors? I suppose that would be fine if the system was never reused or tested before being deployed.

If you're using an SD card, doesn't the internal controller in the card handle that (as well as wear leveling)?

Mark Borgerson

Reply to
Mark Borgerson

Mark Borgerson expounded in news: snipped-for-privacy@news.eternal-september.org:

...

It doesn't matter. Why would it?

When you allocate a new cluster to a root/sub-directory, you zero it then (zero is the 'end of directory' marker). When you allocate a cluster to a file (being written and extended), you just make the cluster available - the application just overwrites the garbage in those sectors.

The flash controller takes care of erasing the "pages" of memory and "rewrites" the sector as needed. To the end user, it just looks like a disk.

Warren

Reply to
Warren
