Foreground vs. Background (Nomenclature Question)

I doubt many people who frequent this newsgroup consider a PC without a screen or a supercomputer an embedded system.

Reply to
Andy Sinclair

Why not? If it's embedded in something and running nothing but special purpose software, I consider it an embedded system.

--
Grant Edwards                   grante             Yow!  The PINK SOCKS were
                                  at               ORIGINALLY from 1952!! But
                               visi.com            they went to MARS around
                                                   1953!!
Reply to
Grant Edwards

Ditto - Ditto.. for 27 years.

Reply to
TheDoc

We've had this conversation once or twice before ;).

ISTR that we broadly agreed the following definitions:
- embedded product: microprocessor/microcontroller-based bespoke PCB
- embedded system: embedded-PC-based

(I deal almost exclusively with the former.)

Steve

Reply to
Steve at fivetrees

That is my view also. The embedded projects I have worked on range from 8048 microcontrollers, to CDDI clusters of 14 "industrial grade" rack mounted Pentium PCs with two X-terminals, with a few Vax and other minis in between. They were "embedded" in a larger system, and the software would have been useless without the hardware to which these computers were connected.

Reply to
Roberto Waltman

While I've done a lot of designs around M68K systems (and other systems with NMI interrupts), I've NEVER connected NMI to a peripheral that had no facility for disabling the pin connected to NMI. That may be bypassing the spirit of the NMI, but I generally consider the power plug or reset button to be the only truly non-maskable events. I can envision safety-critical systems where NMI would be useful. I cannot envision a system where that interrupt could occur without some hardware initialization by the foreground routine.

I suppose that would depend on how the interrupts were controlled. When I use interrupts, my 'foreground' code has to first set up the peripherals to allow the interrupts to function. Depending on the complexity of the interrupt hardware and software, the ISRs may not allow nested interrupts---which is certainly control of a sort.

For example:

In an instrument which is processing analog input and presenting data to the user, the foreground program accepts a command and sets up the ADC to collect the data in an ISR. While the data is being collected, the foreground routine monitors the user input (for a 'cancel' command, etc.), checks the status, and keeps the user informed of the progress of the data collection. When collection is finished, the foreground routine disables the ADC and its interrupt, pulls the data from the buffer that was filled by the ISR, and presents it to the user.
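A minimal C sketch of the scheme described above. All names here (adc_isr, collect, N_SAMPLES) are hypothetical, and the hardware setup steps are reduced to comments; this is an illustration of the foreground/ISR split, not any particular instrument's code:

```c
#include <stdbool.h>
#include <stddef.h>

#define N_SAMPLES 8

/* Data shared between the foreground loop and the (hypothetical) ADC ISR. */
static volatile unsigned short adc_buf[N_SAMPLES];
static volatile size_t adc_count = 0;
static volatile bool adc_done = false;

/* Would be installed as the ADC end-of-conversion interrupt handler;
 * here it takes the sample as a parameter so it can run on a host. */
void adc_isr(unsigned short sample)
{
    if (adc_count < N_SAMPLES) {
        adc_buf[adc_count++] = sample;
        if (adc_count == N_SAMPLES)
            adc_done = true;    /* tell the foreground we're finished */
    }
}

/* Foreground: start a collection, then poll status until the ISR is done. */
void collect(void)
{
    adc_count = 0;
    adc_done = false;
    /* ... enable the ADC and its interrupt here ... */
    while (!adc_done) {
        /* check for a 'cancel' command, update progress display, etc. */
    }
    /* ... disable the ADC and its interrupt, then present adc_buf ... */
}
```

The key point is that the only communication between foreground and background is through the volatile flag and buffer.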

Mark Borgerson

Reply to
Mark Borgerson

Our automotive engine controllers do have nested interrupts. As part of power-up initialization, all interrupt-capable devices are initialized and interrupts enabled. For a few non-normal operational situations (e.g., when the car is sitting in a service bay and the technician wants to reprogram the ECM) we will disable interrupts from selected sources.

We do disable interrupts at various levels using the interrupt mask of the 68K, but never specific devices in normal operation. The disabling has to be done for very short periods of time. We can't afford to miss an interrupt--an engine wants attention no matter what the code is doing. One of our constant tasks has been to keep an eye on our customers' code to ensure they do not disable interrupts for too long.

~Dave~

Reply to
Dave

Precisely, that is YOUR definition, but many of us have a different definition.

Ian

Reply to
Ian Bell

It depends on where the "main" task runs. At work we refer to the main task, which runs in a periodic timer ISR, as the foreground task, and the test code, which runs when the ISR is not running, as the background task. So IMO the code that performs the main task of the embedded device is the foreground, and the other code (run from other interrupts, or when no interrupt is running) is the background.

Regards Anton Erasmus

Reply to
Anton Erasmus

To my mind it is all to do with priorities. The highest-priority running task is the foreground, which generally means an ISR, and whatever runs when this is not running is the background.

Ian

Reply to
Ian Bell

To repeat myself, I don't see any connection between task priorities and foreground/background - in an embedded context.

And don't forget that the foreground (non-ISR) code can (usually) mask out (background) ISRs. To my mind, this means that the foreground (main code) is actually higher priority unless it explicitly decides to allow the background ISRs to have a look in. Priorities shmiorities ;).

I guess you can call it what you like, but your usage would be contrary to that of all the companies I've worked for/in during the last 3 decades.

Steve

Reply to
Steve at fivetrees

This is a design decision which, by masking ISRs, makes the non-ISR routines a higher priority. To my mind this is a poor design decision in a real time system - it may be OK elsewhere.

Experience clearly differs since my usage has been commonplace in all the companies I've worked with/for in the same period.

Clearly there is no generally accepted definition.

Ian

Reply to
Ian Bell

This wouldn't be allowed in real-time, life-critical embedded applications; the timing aspects of the code would simply be unverifiable.

Reply to
steve

Indeed, which is why it is very difficult to use a pre-emptive RTOS in critical applications. Many standards for critical apps insist on nothing but polling; others will permit a time-triggered cooperative scheduler to be used, as there is only ever one interrupt.
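A time-triggered cooperative scheduler of the kind mentioned above can be sketched in a few lines of C. This is a minimal illustration under assumed names (timer_isr would be hooked to the single periodic timer interrupt), not the form any particular standard mandates:

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_TASKS 4

/* Each task runs to completion when due; there is no preemption. */
typedef struct {
    void (*fn)(void);
    unsigned period;     /* run every 'period' ticks */
    unsigned countdown;  /* ticks until next run     */
} task_t;

static task_t tasks[MAX_TASKS];
static size_t n_tasks = 0;
static volatile bool tick_flag = false;

/* The ONLY interrupt in the system: the periodic timer. */
void timer_isr(void) { tick_flag = true; }

void add_task(void (*fn)(void), unsigned period)
{
    if (n_tasks < MAX_TASKS)
        tasks[n_tasks++] = (task_t){ fn, period, period };
}

/* Called continuously from the main loop. */
void scheduler_run(void)
{
    if (!tick_flag)
        return;
    tick_flag = false;
    for (size_t i = 0; i < n_tasks; i++) {
        if (--tasks[i].countdown == 0) {
            tasks[i].countdown = tasks[i].period;
            tasks[i].fn();       /* runs to completion */
        }
    }
}

/* Hypothetical demo task used to exercise the scheduler. */
static unsigned demo_runs = 0;
static void demo_task(void) { demo_runs++; }
```

Because every task runs to completion and only one interrupt exists, the worst-case response time is simply the longest tick-handling pass, which is what makes this style analyzable for critical work.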

Ian

Reply to
Ian Bell

Without using either of those two words (!), we do disable interrupts for (hopefully) short periods of time in non-ISR code in automotive controllers. It isn't because the code doing the disabling has a higher priority, but because the code needs to perform a multiple instruction sequence which needs to appear atomic. To the ISRs, it appears as though a (really!) long instruction is executing. Is this a bad design decision? I don't think so--there are some operations which simply must be performed without being interrupted. This is normal in my experience (yy years ;-) in the real-time, embedded world.
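The "really long instruction" trick described above can be sketched as follows. irq_save/irq_restore are hypothetical port hooks (on a 68K they would raise and then restore the interrupt priority mask in the status register); here they are host-side stand-ins so the sketch can run anywhere:

```c
#include <stdint.h>

/* Hypothetical port hooks; a real port would manipulate the 68K
 * interrupt mask. The counter stands in for the saved mask level. */
static unsigned irq_depth = 0;
static inline unsigned irq_save(void)      { return irq_depth++; }
static inline void irq_restore(unsigned k) { irq_depth = k; }

/* A 32-bit tick count updated by a timer ISR and read by non-ISR code.
 * If reading it takes several instructions, the read must appear
 * atomic to the ISR or a torn value could be seen. */
static volatile uint32_t ms_ticks;

uint32_t get_ticks(void)
{
    unsigned key = irq_save();   /* interrupts off: one "long instruction" */
    uint32_t t = ms_ticks;
    irq_restore(key);            /* interrupts back on; a pending IRQ is
                                    delayed a few cycles, never lost */
    return t;
}
```

The point is not priority: the non-ISR code is simply making a multi-instruction sequence indivisible, and any interrupt that arrives meanwhile is serviced immediately afterwards.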

There is another class of operations which must be performed coherently and are handled in other ways. For instance, some ADC conversions must be completed within a certain time period for certain signals. Since some conversions are performed synchronously with engine events, the conversion of a coherent group may be interrupted by the synchronous conversions. The ADC module used has provisions to make a group of conversions coherent so that, if interrupted on, say, the last of a group of three, when the ADC returns from the synchronous conversions it starts over at the beginning of the interrupted group.
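In the case described above the restart is done by the ADC hardware; a software analogue of the same rule might look like this (all names hypothetical, with a flag standing in for the "was I preempted?" status the module provides):

```c
#include <stdbool.h>
#include <stddef.h>

#define GROUP_LEN 3

/* Would be set by the synchronous-conversion ISR while it steals the
 * ADC, and observed by the group conversion below. */
static volatile bool group_preempted = false;

/* Hypothetical per-channel conversion; returns the channel id here
 * so the sketch is checkable on a host. */
static unsigned convert(unsigned channel) { return channel; }

/* Convert a coherent group: if preempted part-way through, discard the
 * partial results and start over, so every sample comes from one pass. */
void convert_group(const unsigned *channels, unsigned *out)
{
    for (;;) {
        group_preempted = false;
        for (size_t i = 0; i < GROUP_LEN; i++) {
            out[i] = convert(channels[i]);
            if (group_preempted)
                break;          /* group is no longer coherent */
        }
        if (!group_preempted)
            return;             /* full pass completed unpreempted */
    }
}
```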

~Dave~

Reply to
Dave

Most of my designs *are* real-time, critical (but not *formally* life-critical) embedded applications - i.e. one where failing to make the timing constraints would do damage to the company's reputation, but hopefully not kill someone. (They're mostly industrial process control & data acquisition apps.)

As Dave has said, one frequently does need to disable IRQs in order to make certain operations atomic. In such cases the IRQ will be delayed, not missed - and we usually have strict timings, usually dictated by the hardware, on how long we can mask interrupts for.

We also distinguish between different interrupt priorities: certain things really can't be masked safely, and certain IRQs may mask others. To put it another way, one can still achieve the necessary level of determinism by paying attention to IRQ priority, and not just by saying "thou shalt not mask interrupts".

Where timing aspects need to be set in stone, I generally try to make the timing hardware-based (e.g. timer-driven). (Example: I had a VCF-based ADC; a counter counted the VCF output within a strict window, achieved by gating the VCF output with a timer-derived window signal.)

Steve

Reply to
Steve at fivetrees

Talking of which: in the case of nested interrupts, does the "ISR == foreground" view mean there are multiple foregrounds? Or is the highest-priority IRQ the foreground, and the rest of the IRQs part of the background? Doesn't make sense to me, but it's nevertheless a sincere question.

Steve

Reply to
Steve at fivetrees

No, but it needs to be done with great care.

Agreed.

Ian

Reply to
Ian Bell

I am aware of that. However, in critical systems it is preferable not to do so as it makes the system less deterministic.

I did not say that, but in critical systems there are ways of doing just that.

Ian

Reply to
Ian Bell

In life-critical applications that I have worked on, only one interrupt is allowed (and it must be a synchronous interrupt), so the question of nested interrupts never comes up. Everything else is polled. These restrictions are necessary so that I can guarantee and prove (through test or analysis) that a specific function will absolutely run within a specific timeframe.

Reply to
steve
