Resource revocation

No. You deal with the events as events. How you decide if the *system* has failed is subject to other criteria -- unrelated to HARD or SOFT.

E.g., if you don't intercept an incoming missile before it reaches its target (hard deadline), *that* task has failed. It's not worth continuing to track and target the missile as it plows into the ground. Abandon that task. (You may have learned something from the effort that can be applied to tracking *other* missiles; or, you may not. In any case, *that* missile is a done deal!)

Whether missing that HARD deadline has resulted in your *system* being considered a failure is a different issue. Should you throw your hands up in the air and let any additional missiles come through uncontested? What if that *one* had successfully targeted your defensive battery? etc.

I.e., how you evaluate the system is dictated by a different set of criteria.

So, when the first SCUD got through the Patriot defenses, they should have shut down the rest of the system and started bickering over refunds??

Real systems aren't binary like that. Real systems expect some number of HARD deadlines to be missed -- each with potentially different costs/consequences/lost opportunities.

No, it *does* mean the deadline was soft (if you consider "cost" as "negative value"). That is the nature of the distinction. HARD means you GIVE UP at 0.00000001ms after the deadline has passed. You missed it. THERE IS NO VALUE TO CONTINUING.

[But, the consequences of that missed deadline may range from NOTHING to THE END OF ALL LIFE AS WE KNOW IT. :> E.g., I have a deadline handler in my RTOS that is triggered when a task fails to meet its deadline. What the handler for a specific task might do can vary from incrementing a metric to invoking a scheduling optimizer that sheds some load to ensure future deadlines aren't missed]
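
For concreteness, a minimal sketch of what such a handler might look like -- all types and helper names here are invented for illustration, not from any real kernel API:

    #include <stdbool.h>

    typedef struct {
        const char *name;
        bool        hard;      /* hard deadline: no value once it passes */
        unsigned    overruns;  /* metric: how often this task ran late   */
    } task_t;

    static void release_resources(task_t *t) { (void)t; /* free held locks, buffers, ... */ }
    static void kill_task(task_t *t)         { (void)t; /* remove from the ready queue   */ }
    static void shed_load(void)              { /* invoke a scheduling optimizer */ }

    /* Invoked by the (hypothetical) kernel when a task overruns its deadline. */
    void deadline_handler(task_t *t)
    {
        t->overruns++;              /* cheapest response: bump a metric */

        if (t->hard) {
            /* Hard: a late answer has no value.  Reclaim everything now. */
            release_resources(t);
            kill_task(t);
        } else {
            /* Soft: some value remains; rebalance rather than abandon. */
            shed_load();
        }
    }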

By contrast, soft means you still have value to pursuing the goal. I.e., given infinite resources, you would give up on a missed HRT deadline but continue working on a missed SRT deadline.

Deciding when you might want to give up on the missed SRT deadline is a complicated issue, in most cases. Resources spent pursuing that goal detract from other goals being met. What value do you gain (over time) vs. the costs/risks you incur? When do you decide that the "expected value" of your gains is zero?

This is why SRT is "harder" than HRT. And, why SRT problems are most often CONVERTED to HRT problems -- because they make this decision making much easier! ("Once I am 12.3ms late on this task, I will abandon it -- AS IF it were an HRT task!") I.e., deliberately "failing", by choice (since that, presumably, allows the overall "value" of the system to be maximized).
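
In code, that conversion is nothing more than a cutoff test. A sketch (the 12.3ms grace figure is the example above; the time representation is arbitrary):

    #include <stdbool.h>

    #define GRACE_MS 12.3   /* the chosen abandonment point, from the example */

    /* Past deadline + grace, treat the task AS IF its deadline were hard:
     * abandon it, even though a later answer would still have some value. */
    bool worth_continuing(double now_ms, double deadline_ms)
    {
        return now_ms < deadline_ms + GRACE_MS;
    }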

You are confusing the system spec with the nature of the specific task that has the deadline. If your system has exactly one task to perform, exactly once, and that task has a hard deadline (i.e., once the deadline has passed, you may as well pull the plug and shut the equipment down), AND your SYSTEM SPEC is that you meet all hard deadlines, then, missing that one deadline means your system is crap.

If your system, instead, has to perform that task 100 times -- each time having an independent HARD deadline (i.e., a point in time beyond which there is no remaining value to working on that particular instance of that task) and the system spec says you must meet 50% of these, then once you have completed 50 of the tasks successfully, your *system* has met its specified requirement.

The system spec is not the same as the definition of the requirements for the individual tasks. (if there are 100 such tasks, what is *THE* deadline? I need *one* number since you can't have more than one deadline for *a* task!)

Again, damage is the wrong way of looking at it. The issue is the value of completing the task with respect to the deadline. How you map that into dollars and the remedies into similar dollars is a separate issue.

It boils down to whether or not you should even *consider* working on the problem after the deadline. If the answer is an unconditional "no" -- regardless of monies, costs, etc. -- then you have implicitly said that there is no value to a late answer.

OTOH, if you will consider continuing work on the task after the deadline, then you are admitting there is still some potential value to its completion and are willing to entertain a cost-benefit analysis to determine how much effort/resources you are willing to throw at the problem -- along with the possible repercussions this may have on other tasks remaining in the system.

But you can tolerate failures! Or, *systems* can be specified to accept certain numbers and types of failures -- as per the criteria set out by the system's specifications.

The same sort of argument applies to SRT deadlines being missed. How much effort you expend trying to chase them down *after* they have passed is a judgement call that you evaluate in the context of the system specification.

No. It is still a hard deadline -- when the bottle hits the floor, the deadline has passed. It can be soft up to that point (as in my "move the picker arm" example).

If the arm can't move, then it's HRT. You approach it *as* an HRT problem. You don't "hope" you get 90% of them -- if you run the experiment indefinitely.

The events are different from the system. You apply different criteria to the system's specification than to the specification for the individual tasks that comprise it.

Jensen's real-time.org should be required reading for anyone thinking about RT work. Unfortunately, much of it reads like a textbook, but that sort of clinical presentation has a lot of value in nailing down the edges of these issues!

Reply to
Don Y

Looking at literature on the subject, HRT (like any other term outside of a formal mathematical paper) appears to be a somewhat amorphous concept. Typing "hard real time" into Google finds a bunch of books and articles written by authors who seem to know what they are talking about, and they give varying boundaries of what HRT is, e.g. sub-millisecond, missing a deadline results in disaster, missing a deadline is considered the same as a program bug, etc. I don't think those viewpoints can be outweighed by whatever we might post here from the Usenet peanut gallery. HRT textbooks describe a body of problems and techniques that look fairly similar from one book to the next. So I'd loosely go with the idea that HRT is the stuff that those textbooks are about.

One book gives the specific example of an aviation flight control system as HRT but an airline reservation system as SRT. Your viewpoint seems to be that the reservation system is really HRT (since it's useless to issue a reservation after the plane has taken off). That might have some philosophical validity, but from a programmer's perspective I'd have to say that is a pretty unorthodox take on the concept.

Reply to
Paul Rubin

As I said, I work with a different definition which considers Hard Real Time to be a system-level definition. A Hard Real Time system is one that has a hard deadline that must be met ALWAYS. If I only need to meet it 50% of the time, it isn't a Hard deadline, it is Soft. The difference is what type of design/analysis methods need to be used on the system.

If I have 100 tasks, and one occasionally fails to meet its deadline, and that deadline was hard, then the SYSTEM has failed, and doesn't meet its requirements. (This doesn't mean that you immediately take the system down; failed systems can still sometimes do useful work.)

If a system fails during qualification, then it will normally be brought back with orders to "Fix it" ("it" being the given unit if the problem is unique to it, or the whole batch if you can't show why that one was different). If it fails after passing qualification, then that unit is normally returned to be fixed, and if you can't show a specific error on that unit causing the problem, you pay for a fix for the whole product line.

I would say that the value of work being done after the deadline is a very POOR indicator of how you should treat this deadline. By your definition you can have a lot of hard deadlines that don't really matter whether you make them or not; they may provide "value" to the system by some measure, but don't really matter to the system meeting its critical performance requirements.

Sometimes systems can be specified to accept certain levels of failures, but to me, if that level isn't virtually 0, then you don't have a real Hard Real Time system, but the system is really just a Soft Real Time system, as the difference between Hard and Soft is the acceptability of missing some deadlines.

Taking a quick look at that site, he defines Hard / Soft deadlines the way you seem to use the terms Hard / Soft Real Time deadlines. I do see a definition of a Hard Real Time System as: "A system having only actions with hard deadlines, and a sequencing goal of always meeting all those hard deadlines (i.e., deterministic maximum sequencing optimality), is a hard real-time system", which seems to fit with my definition. He uses real-time as a modifier for Systems and Actions, not deadlines (those are Hard or Soft).

Reply to
Richard Damon

No. A Hard Real Time SYSTEM is one that must meet ALL of its hard deadlines. Most non-trivial systems have more than one deadline/task.

Yes. But only if one (or more) of the HRT tasks missed its deadline. Calling a SYSTEM "hard" means very little. It just says, "I am brittle (and probably over-specified)"

By that definition, most systems are NOT "hard" because most systems allow hard deadlines to be missed.

(This is the mistake many RT practitioners make: treating everything as a hard deadline -- even things that aren't inherently hard. Then, throwing resources at the system to try to meet all of these unnecessary deadlines.)

This is why thinking about tasks (and systems) in terms of value functions is so much more sensible. It lets you (being the developer *or* a scheduling algorithm!) decide how to deploy resources based on the *value* of individual deadlines (instead of trying to assign arbitrary "priorities" to tasks) and hoping they get done in a timely fashion. And, juggling the priorities if they don't!

Exactly. A hard deadline causes the developer, design, scheduler, etc. to pay particular attention to some point in time and a workload associated with it. To dispatch resources to satisfy that deadline, potentially at the cost of other "more important" (in the grand scheme of things) tasks. And, to feel free to STOP working towards that goal once the deadline has passed. "Forget about it".

By contrast, a design/developer/scheduler has to constantly keep juggling the "value" of SRT tasks to decide how best to deploy the limited resources available. But, in return, it keeps *trying* to deploy resources (if it makes sense given other deadlines/values) on that task long after the deadline has passed.

E.g., using value/utility functions lets these agencies figure out the "optimal" way to proceed towards satisfying all of their goals (in the context of time). I.e., maximize the "value".
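
A sketch of what such utility functions might look like -- the shapes here (a step for hard, an exponential decay for soft, with an invented decay parameter tau) are illustrative choices, not the only possibilities:

    #include <math.h>

    /* Hard deadline: full value up to the deadline, zero after. */
    double utility_hard(double t, double deadline, double value)
    {
        return (t <= deadline) ? value : 0.0;
    }

    /* Soft deadline: full value up to the deadline, decaying after.
     * tau sets how quickly a late result loses its worth. */
    double utility_soft(double t, double deadline, double value, double tau)
    {
        return (t <= deadline) ? value : value * exp(-(t - deadline) / tau);
    }

A scheduler then dispatches whichever eligible task stands to accrue the most utility, given its expected completion time.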

No. Missing a hard deadline can be acceptable (in a non-HARD *system*). It boils down to the value of working on the task AFTER the deadline has passed. Hard deadlines can essentially disappear once they have passed. Kill off anything associated with that task; it's no longer needed. (This is what I typically have my deadline handler do -- kill all related processes and free all held resources once the deadline is gone.) You failed to intercept the incoming missile before it reached its target. That's unfortunate. But, move on to something *else*, now. Something where you can "make a difference"...

Your "system specification" typically describes what sort of tolerance you have for particular deadlines and types of deadlines being missed. And, perhaps, how likely that may be.

If this criterion is "all hard deadlines must be met", then the SYSTEM is HRT and *ALL* hard deadlines must be met -- even if they are trivial ones (like illuminating the power indicator within 2 seconds of the application of power). There's no "slack" (i.e., brittle).

This is why designing soft real-time systems is a much more difficult problem. Because there is no SIMPLE, cut-and-dried criterion where you can say "it's broke".

"I need a faster processor/more memory/etc. because I can't meet this hard deadline, otherwise." (i.e., you end up with overspecified hardware, usually -- since many apparently hard deadlines can be redefined in a soft sense with some careful thought)

"A hard real-time system is one whose sequencing timeliness factors (there also may be non-timeliness factors) are: * optimality is the binary case that meeting all hard deadlines is optimal and otherwise is suboptimal (in some system-, application-, or situation-specific way) * predictability of optimality is deterministic."

Further:

"It is relatively unusual for a computing system to be intrinsically hard real-time. Most non-trivial real-time systems have execution entities with a mixture of hard deadlines and softer time constraints, such as deadlines and the equivalent of time/utility functions (although not usually understood and expressed that explicitly), plus execution entities that have no time constraints (but are subject to non-timeliness sequencing factors). Hard real-time systems typically arise as follows:

...

or all the time constraints are artificially forced to be hard deadlines because the system, or its designers/implementers/users, can't deal with any other kind of time constraints."

(i.e., SRT is harder than HRT) And, if you have a nontrivial RT, chances are, it is NOT an HRT *SYSTEM*! (think about it... everything must be met? Really?? Nothing can slip "just a little bit"?)

"Forcing all time constraints to be hard deadlines often limits the system?s flexibility and adaptability, while increasing the hardware resource requirements and lowering the hardware resource efficiency."

As I said, "brittle"; "inefficient".

And:

"It is clear that in the technical sense defined here (as opposed to popular misusage by practitioners), soft real-time systems are considerably more difficult to create than are hard real-time ones. Some of this disparity is the intrinsically greater complexity of soft real-time applications, systems, and execution environments. But some is only a transient artifact ? both theory and practice of real-time computing systems have historically focused primarily on hard real-time, and that is necessarily changing."

--don

Reply to
Don Y

Be careful about putting too much faith in books/authors. I have three books, here, where the title in each case begins: "Real-Time ...". Yet, none of them venture a formal definition for what "real-time" might be! Two of them don't even think in terms of HRT/SRT ("Real-time is the opposite of BATCH"!! or "Real-time is real-world")

One even claims "The subject of this book is an approach to the specification, design and construction of software for distributed real-time systems". Yet, goes on to say: "There is no universally accepted definition of what constitutes a real-time system". Really??? In 1994 (copyright of the text)???

I have no idea as to their reasoning. What do they consider the deadline to be? Three days prior to departure so the passenger can have his tickets mailed to him? The moment the aircraft takes off? What's the *goal*? To get the passenger to his destination (even if it happens to be on the next flight -- or the flight thereafter)?

Think about why you would create a taxonomy. Why call something "real-time" unless it is different from NON-real-time? Why call something HRT unless it is different from SRT?

We do this because the techniques you use to approach RT problems are different from the approaches you use for non-RT problems. And, the approaches used with HRT differ from those for SRT.

If it was simply a matter of frequency/speed/etc. then changing CPU clock frequency changes a given problem from one to the other. Yet, you can't compute the "last digit" of pi in a *timely* manner (i.e., before some fixed deadline) regardless of the processor speed available! (silly example).

Is submillisecond "hard" and submicrosecond REALLY HARD? What is the value of this term, then? Next year you'll yawn at REALLY HARD time and consider subpicosecond times REALLY REALLY HARD! :-/

We have scheduling algorithms like EDF, rate monotonic, etc. to specifically address the types of issues that are associated with RT problems. (If you don't have a deadline, what value would EDF have??) Likewise, we design RTOS's using deterministic algorithms *because* RT problems want predictability in the services that they rely on from the OS. If an algorithm scales poorly, then predictability suffers as the environment changes in the system.
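
E.g., the core of EDF is nothing more than this -- a sketch, where task_t and the ready flag are invented scaffolding:

    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint32_t deadline;   /* absolute deadline, in ticks */
        int      ready;
    } task_t;

    /* EDF: of the ready tasks, dispatch the one whose deadline is nearest.
     * (A real kernel would use wraparound-safe time comparisons.) */
    task_t *edf_pick(task_t *tasks, size_t n)
    {
        task_t *best = NULL;
        for (size_t i = 0; i < n; i++) {
            if (!tasks[i].ready)
                continue;
            if (best == NULL || tasks[i].deadline < best->deadline)
                best = &tasks[i];
        }
        return best;
    }

Note that the deadline is the *only* input: without deadlines, the algorithm has nothing to sort on.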

--don

Reply to
Don Y

The author is admitting above to peddling an idiosyncratic concept that doesn't match the usage of working practitioners in the real world. Therefore I don't feel compelled to treat it as gospel.

That's just plain silly. There are all kinds of random timing variations in a conventional computer system, due to cache memory, rotating disk drives, input data distributions (think of algorithms like hash tables with bad worst-case complexity), background services contending for system resources, etc. In an SRT system with some specifications on the inputs, you can estimate or measure the distribution of those timing variations, run some tests to make sure stuff appears to work as expected, and you're done. You don't have to worry about rare outliers causing timing misses, since by definition occasional misses are acceptable. HRT requires sharp bounds on every operation and (in practice) normally very short deadlines, which impose a lot of constraints on the implementation.
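
As a sketch of that kind of measurement (all names invented; the stubs for now_us() and operation() stand in for whatever monotonic clock and workload the target actually has):

    #include <stdint.h>
    #include <stdlib.h>

    #define SAMPLES 10000

    static uint64_t now_us(void)    { /* read a monotonic clock here */ return 0; }
    static void     operation(void) { /* the operation being characterized */ }

    static int cmp_u64(const void *a, const void *b)
    {
        uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;
        return (x > y) - (x < y);
    }

    /* Measure the latency distribution and report the 99.9th percentile:
     * if that (plus margin) sits inside the deadline, the SRT spec is met. */
    uint64_t p999_latency(void)
    {
        static uint64_t lat[SAMPLES];
        for (int i = 0; i < SAMPLES; i++) {
            uint64_t t0 = now_us();
            operation();
            lat[i] = now_us() - t0;
        }
        qsort(lat, SAMPLES, sizeof lat[0], cmp_u64);
        return lat[(SAMPLES * 999) / 1000];
    }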

Reply to
Paul Rubin

With a pre-emptive priority based system, in practice there can only be HRT operations in the highest priority level (known kernel latencies and the sum of the worst case subtasks' execution times). Trying to calculate any hard deadlines on the lower priority levels would require first calculating the higher priority loading and then adding its own workload.
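
That computation is the classical response-time analysis. As a sketch, with C_i the worst-case execution time of task i, T_j the periods of the higher-priority tasks hp(i), and the recurrence iterated to a fixed point:

    R_i = C_i + \sum_{j \in hp(i)} \left\lceil \frac{R_i}{T_j} \right\rceil C_j

Task i's hard deadline D_i is guaranteed iff the fixed point satisfies R_i \le D_i -- which, as noted, can only be evaluated once all the higher-priority loading is known.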

In a practical system, all HRT activities are executed at highest priority with well defined maximum execution times. The lower priority levels are then suitable for SRT and even lower for some bulk operations, assuming of course that the pre-emption latencies are low.

When designing real time systems, I usually first look at what operations can be moved to a _lower_ priority or even executed in the NULL task, then split high priority operations into short transactions, which are even shorter at higher priority levels.

With such division of labour, the system usually runs quite nicely as such, and if HRT is needed, it is easy to just check that the tasks at highest priority meet the requirements.

Reply to
upsidedown

Huh. I did a little bit of looking at Go, but it seemed at a casual glance like it was pretty ill-suited to embedded work; huge runtime library that you can't strip down, garbage collection at inopportune times, and a lack of cross-compilation targets.

Have you actually managed to use it? Did it behave?

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com 
Email address domain is currently out of order.  See above to fix.
Reply to
Rob Gaddi

We use Go where I work, though on servers, not embedded targets. I know there's a gcc front end for it, so I'd expect it can be cross compiled. I think it is potentially ok for 32-bit embedded systems, though maybe not for ultra low latency hardware operations because of the GC and multitasking. The runtime is much smaller than Erlang's. I'm not sure about the GC situation. Someone told me it could run in systems with 1-2MB of memory (maybe including program memory), though I haven't looked into this. I don't know how those issues compare with Limbo.

I'd certainly consider Go if I were doing something with embedded Linux. It might or might not be feasible for Cortex M4-sized targets. I don't know enough to definitely rule it out.

Reply to
Paul Rubin

Limbo has some of the same (mis?)givings. E.g., GC is entirely under the control of the VM. But, my hope is to just avoid the *need* for asynchronous sweeps (e.g., avoid things that use memory that way).

Since it runs in a VM (the only available release that I know of runs only under Inferno, so it's hard to talk about one without the other), anything that can host the VM can run the "binaries". E.g., there was an IE plugin that implemented the VM so you could run Limbo executables in your (IE) browser. (I don't think that has been maintained.)

Most of the "library" is implemented as loadable "modules". So, if you aren't using the graphic library, it never gets loaded (i.e., "from secondary storage")

But, the developers think 1MB is a small machine :>

(I've been wading through the implementation trying to carefully weed out features that I don't like/want/need as well as partitioning it into a ROMmable core with RAMmable data segment -- since having 1MB of RAM into which the entire thing can be loaded raises the bar in terms of deployable hardware).

*Limbo* has fared well, so far. Tiny executables (i.e., I can dispatch the code for a task/job over the network in a small fraction of a second and have it loaded and running "instantly")

But, I've cheated, in a sense, by making so many things available as *services* so applications don't have to waste time/space reinventing the wheel everywhere...

(IIRC, the Inferno binaries -- and sources -- are available for "free" download. I just don't have a pointer handy...)

Reply to
Don Y

Only the *topmost* priority can be regarded as having any "right" to use the processor. I.e., you have to determine that this task *won't* be using the processor before you can figure out what resources the NEXT highest priority task (or task set) will consume. And the next; and the next; etc.

But the scheduling algorithm can be chosen to intelligently pick *which* tasks execute in order to maximize/guarantee they meet their deadlines.

Exactly. (Simple) priority based schedulers only produce optimum (or even correct!) schedules if there are surplus resources. I.e., underutilized hardware.

And, they lead to the "squeezed balloon" syndrome: something stops working (because its priority isn't high enough given the current workload of higher priority tasks) so you goose its priority. Then, something ELSE stops working... Sort of like trimming the legs on a wobbly table: "oops! too short! Let me trim down the other legs to match... oops!..."

With a science/math based approach to scheduling theory, you can actually figure out *if* a task set can be scheduled instead of running the app monte carlo style and watching for failures.
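
E.g., for n independent periodic tasks with worst-case execution time C_i and period T_i (deadline at the end of each period), preemptive EDF admits an exact test:

    U = \sum_{i=1}^{n} \frac{C_i}{T_i} \le 1

If U exceeds 1, no scheduler can save you; if it doesn't, EDF meets every deadline. No monte carlo required.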

Yes. Much like shrinking atomic regions (*the* highest priority activity) to their smallest practical form.

If you provide the application with information about individual task deadlines, then the scheduler can evaluate these "live" (avoiding the phrase "in real time") to best decide which task to run -- without assigning arbitrary priorities (to tasks which often conceptually *share* a priority level OR which have been incorrectly assigned priorities based on some concept of "importantness")

See, for example, rate monotonic and EDF scheduling algorithms (among others).

Reply to
Don Y

You might want to read his entire treatise, carefully (i.e., as if you were trying to LEARN it) and then examine his pedigree before dismissing his (well thought out) argument.

Sure! But many of these the RT developer implements deterministically instead of relying on some generic implementation NOT suited to RT.

You're leaving performance entirely to chance. Making no attempt to maximize it! "Oh, well... technically this task could be 6 hours late in meeting its deadline..."

Yeah, but the approach you described is exactly how you tackle HRT! You figure out what its worst case performance characteristics are -- then specify hardware/resources that will PROVIDE that level of service: "The system NEEDS it!". Done.

You end up with more resources than you need because you have been *forced* to expect everything to go wrong. (As an *informal* testament of this, look at the maximum utilization for the rate monotonic algorithm which GUARANTEES schedulability -- roughly 50% excess capacity "just in case".)
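
(That figure is the Liu & Layland utilization bound for rate monotonic scheduling:

    U \le n \, (2^{1/n} - 1) \rightarrow \ln 2 \approx 0.693 \quad (n \to \infty)

i.e., the test only *guarantees* schedulability up to about 69% utilization, so you provision roughly half again as much capacity "just in case".)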

With SRT, you (assuming your goal is to provide the best possible performance and not just "hey, it sorta works") have to evaluate your options at each step. All eligible tasks have to be considered regardless of the value of their utility functions AT THIS INSTANT IN TIME. You can't dismiss tasks (HRT) whose deadlines have passed because there still is tangible value in meeting those goals. You don't just pick any task and let it run KNOWING that it may make some other task late ("What the heck, that task's deadline is soft so it *can* be late -- even if a better scheduling choice could have caused it to NOT be late!")

Why not just schedule the remaining SRT tasks in ALPHABETICAL ORDER? What the heck! Some WILL probably end up late but they shouldn't care, right??

Think about it. It *should* be obvious!

Reply to
Don Y

Too simplistic.

I once implemented a lung ventilator. If the airway pressure was too high and I missed a 50ms deadline for reducing it, does that mean that the system was brittle and over specified? (I accept that the human part of the system would have been brittle under those circumstances, but as I haven't seen a spec for a human I can't comment on whether humans are over-specified :)

Ditto missing a deadline for having a breath!

Of course, "deadline" is peculiarly apt terminology in these circumstances :)

Reply to
Tom Gardner

You've misread my comment. Since you MISSED that HARD deadline (in a HARD *system*, not just a hard *task*), you obviously didn't specify *enough* resources in your system design to guarantee *meeting* the deadline. (even if those resources would seldom have been needed/used)

[Alternatively, the system you have defined does not meet the stated qualification of "HARD RT *system*" -- it just happened to be *a* RT system that had a hard deadline that it happened to have missed. Presumably, the system specification gave you some leeway in how many of these deadlines you *could* miss and still be considered "functional". Perhaps "miss no more than one out of every ten and never two consecutively". Such a specification would be an acknowledgement that trying to meet every such deadline would be too costly to implement; i.e., contain far more resources than necessary probabilistically]

Cf: "A system having only actions with hard deadlines, and a sequencing goal of always meeting all those hard deadlines (i.e., deterministic maximum sequencing optimality), is a hard real-time system" and: "A hard real-time system is one whose sequencing timeliness factors (there also may be non-timeliness factors) are: * optimality is the binary case that meeting all hard deadlines is optimal and otherwise is suboptimal (in some system-, application-, or situation-specific way) * predictability of optimality is deterministic." I.e., if your system specification ALLOWED you to miss that one deadline, then it wasn't a hard real-time system!

That's why it's important to have a taxonomy and well-defined means of categorizing problems and implementations. So it's more than just "this *seems* like it is more important than that" (OK, then that should be reflected by the system design and metrics that let others come to that same conclusion.)

Instead, we get folks conflating speed/frequency, safety, damage, monetary cost, etc. -- all things that have emotional appeal but no real scientific/mathematical basis (that could be fed into a scheduling algorithm, etc.)

Reply to
Don Y

Excuse me, but what is the problem with 50 % excess capacity, i.e. 66 % CPU utilization?

That is a pretty good figure, I would be quite happy with 50 % CPU utilization :-)

PDP-11/RSX-11 was a quite good RT platform up to 60-70 % CPU load, and Windows/Linux up to 40-50 % CPU load.

I do not know if the rule of nines has been used in SRT discussions, but at least in telecommunication two nines imply 99 % reliability, three nines imply 99.9 % reliability, and so on. In telecommunication, adding one nine requires the transmitter power or antenna area -- and hence the cost -- to be multiplied by 10.

In a correctly configured Windows/Linux system, getting three or four nines reliability is not that hard at 10 ms.

Reply to
upsidedown

But there are other scheduling algorithms that approach 100%! If you are building something that you intend to sell, excess capacity is wasted capacity is lost profit (or, increased price).

(C.A.E has concerns other than getting something to work on a Linux desktop machine!)

Reply to
Don Y

Now THERE's a statement that's true when it's true and not when it's not. Let's say that for $0.25 a processor more I can shave two weeks of finagling with optimization. In a world with $1 Cortex M0 processors, that number is if anything high, it's probably closer to $0.10.

Assume two weeks of my time plus two weeks earlier to market is worth $4K. I can sell 16,000 pieces before I make up that two week difference.

For a lot of applications, preposterously cheap horsepower has outstripped our ability to use it all up. Failing to pay attention to that fact is like pretending you can still only get a forward beta of 1. Times change, parts improve, engineers still need sleep every now and again.
--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com 
Email address domain is currently out of order.  See above to fix.
Reply to
Rob Gaddi

You are thinking in small numbers! How many refrigerators do you think *one* manufacturer sells IN A YEAR (forget about next year)? How many irrigation controllers? Televisions? Dishwashers? etc.

I've worked in industries where the saying was "you're paying for the plastic" (that the device is encapsulated in).

You still find single-sided circuit boards in products. Multiple *cheap* processors where one "better" processor could suffice, etc. And, I'd wager a boat load of ASM where a HLL would be *so* much easier on those poor programmers...

I've never met a manager who wouldn't try to figure out a way to cram some extra functionality into a product *or* trim a tiny bit of resource out of it -- when dealing with volume products. Your first offering may be "wasteful", but you quickly refine it to cut the waste out (get your foot in the door to get market share, then figure out how to boost profit and/or give yourself more margin to compete with other vendors on price)

And, time spent *now* can be leveraged in future product designs. Save a penny now and you've also saved it tomorrow.

Why don't we see 32b processors in mice? Hell, think of how many cheaper programmers you can hire if you can code the application in Python and run it on a 100MHz CPU! Look at all the time those programmers can then spend sleeping! :>

Why isn't ethernet connectivity *more* common? Heck, you've got all those resources just sitting there! Add a PHY and a connector (or, a radio, etc.) When you're buying them by the millions, everything is *free*! The next generation of parts can have the PHY and connector *on* the silicon! :-/

Reply to
Don Y

I did look at it. That he's "arguing" something means by definition that his claims are not currently universally accepted. I'm not dismissing what he says, but just saying that he says one thing and other authors say things that are different, so there's a mix of valid viewpoints and I don't see a case for having one shut out the others. Any really precise formulation of a problem's requirements has to be part of the description of that specific problem.

Well, yes. In some sense that's the idea of SRT, that performance is probabilistic and you're ok if you've got some (maybe informal) bounds on the probability distribution.

The idea is just to meet a specification. Are you confident (to whatever assurance level the product is designed to) that it's fast enough to meet the requirements? If yes, ship it. If not, go do some more work speeding it up, or upgrade the hardware, or whatever.

Figuring out the worst case is different than figuring out a probability distribution. Example: you have a 1024 byte (8192 bit) array, initialized to zero. You have a reliable hardware random number generator. You want to select exactly 1000 of the array's bits at random and set them to one, within a deadline. How would you do that in an SRT system? How would you do it in an HRT system?
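
One way the contrast might play out in code (a sketch; hw_rand() is a stand-in for the hardware RNG, stubbed here with rand() only so the example compiles):

    #include <stdint.h>
    #include <stdlib.h>

    #define NBITS 8192
    #define NSET  1000

    /* Stand-in for the hardware RNG: uniform in [0, n).  rand() is NOT a
     * substitute for a true hardware source; it's here for illustration. */
    static uint32_t hw_rand(uint32_t n) { return (uint32_t)rand() % n; }

    /* SRT flavour: rejection sampling.  Small expected cost, but the worst
     * case is unbounded -- fine if occasional deadline misses are acceptable. */
    void set_bits_srt(uint8_t bits[NBITS / 8])
    {
        for (int done = 0; done < NSET; ) {
            uint32_t i = hw_rand(NBITS);
            if (!(bits[i / 8] & (1u << (i % 8)))) {  /* retry on collisions */
                bits[i / 8] |= 1u << (i % 8);
                done++;
            }
        }
    }

    /* HRT flavour: partial Fisher-Yates shuffle over an index table.
     * Exactly NSET RNG calls and a bounded number of memory operations,
     * so a worst-case execution time can actually be proven. */
    void set_bits_hrt(uint8_t bits[NBITS / 8])
    {
        static uint16_t idx[NBITS];
        for (uint32_t i = 0; i < NBITS; i++)
            idx[i] = (uint16_t)i;
        for (uint32_t k = 0; k < NSET; k++) {
            uint32_t j = k + hw_rand(NBITS - k);   /* uniform pick from the tail */
            uint16_t tmp = idx[k]; idx[k] = idx[j]; idx[j] = tmp;
            bits[idx[k] / 8] |= 1u << (idx[k] % 8);
        }
    }

The rejection-sampling version is simpler and usually faster, but has no finite worst case; the shuffle version pays for memory and setup in exchange for a provable bound.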

Reply to
Paul Rubin

Only by this author's crazy, self-defined definitions. From what I read of his work, he is an academic, and makes the mistake of using the wrong tools for the job. His quote that "Hard Real Time is hard, Soft Real Time is harder" shows that he does NOT understand how to do it in real life. You can get this result if you try to prove the validity of a Soft Real Time operation to the level needed for a robust Hard Real Time operation.

He also doesn't seem to understand the concept of real requirements (since he tolerates allowing a hard real time operation to fail, and talks about rescheduling things when this happens). Hard requirements are HARD, they are MUST DOs; failure is NOT an option. Occasionally you are given a small allowance for failure to handle situations beyond the system's control. You likely have code for cases where the failure occurs, as a form of damage control, but if this executes (except for violation of the contract with that system, or conditions beyond its control) then the customer can rightfully say the system has failed its function and is defective.

Hard Real Time design requires an exhaustive worst-case analysis. This is a lot of work, and requires a lot of testing to make a good attempt at forcing the worst-case situations, and to make sure all cases have been analyzed and checked.

Soft Real Time requirements, on the other hand, don't require looking at "worst case" cases, but typically at operating at some minimum level of average performance. We don't need to find the absolute worst-case paths, but just the "slightly unlucky" cases. Because they are based on averages, we can normally use the law of large numbers to analyze things. This allows us to simplify the analysis.

A system with Hard Real Time requirements needs to have a Hard Real Time analysis performed on it, which normally requires that the system was designed with Hard Real Time in mind, and thus is a Hard Real Time system. If your system has only been analyzed to the Soft Real Time level, then it is extremely hard, if not impossible, to do the analysis needed for a Hard Real Time operation within it. You basically can't make a guarantee that a non-trivial Real Time operation will meet a non-trivial Hard deadline without a system design based on being able to make Hard Real Time guarantees.

I agree that many things called hard real time are not: things that are not needed to meet critical objectives, things that we just like having happen real fast, things that, if we don't get them done by the deadline, merely waste the resources we put into them because the result no longer has value. A Hard requirement is one essential to meeting the critical performance requirements of the system. A Hard Deadline is a deadline that the design says must be met to meet these requirements.

This doesn't mean that Hard Requirements don't exist. I will admit that some "Hard" Requirements are defined as Hard, not because they really need to be, but because it makes the analysis of the system above you easier; but unless you have real input into that system, you need to live with the requirements flowed down to you and specified in your contract. (Sometimes if you find something really impossible to meet, you can renegotiate the requirements, but that is well beyond this discussion.) Similarly, often you will take your Hard requirements and, to implement them, assign Hard deadlines to subtasks, so that the system is analyzable, as it is difficult to give them "Soft" deadlines and combine them into a Hard result.

I find individual deadlines rarely have individual value. If the deadline has been assigned as "Hard", then its failure has invalidated my design's ability to meet my critical requirements, so the only important values are 1 and 0, and I had better not hit any 0s. I never seem to have the option of doing one thing twice instead of two different things, as operations are rarely fungible in the way a value function would imply. Perhaps once you have met the "Hard", you can find some measures of value for how well you are doing above the critical requirements, toward meeting the optional and desired goals. I have never found that putting "value" functions on operations to drive a scheduler makes sense. You invariably spend more effort creating these functions (since there rarely is a natural value function), and too many resources evaluating them for the scheduler.

Priorities tend to lend themselves to simple schedulers (so less overhead, and simpler analysis), and normally tend to fall out of a requirement analysis. Sometimes the priority order will come out of the requirements. Other times the requirements don't directly force the order of priority, but some orders are easier to analyze (you like high priority operations to be predictable in system load, and generally quicker).

As I said, "value" is normally not an applicable property for a hard reuirement. And there can be NOTHING more "valuable" than a Hard deadline, as deadlines being Hard means it is requirement for a critical requirement, and I never plan to "stop" on a Hard operation unless I need to concede that I have failed and am switching to damage control, and I need to be able to prove that this shouldn't happen under the defined operating conditions.

You have OBVIOUSLY never worked on a system with TRUE Hard Requirements, or customers expecting that you deliver what you have promised. My customers tend to expect that I will meet the critical performance requirements, often with penalties for not meeting them (not infrequently, that we don't get paid anything for the work). Sometimes we do have an option to renegotiate or get an exception for a MINOR miss in specification, but we still need to be able to make promises on worst-case performance.

Note that you seem to want to call a lot of things Hard Deadlines that aren't, because you are using the wrong definition.

I suppose that maybe the problem is that the site is talking about "computing systems", and not using "computing system" as part of a system doing something important (where failure means more than a bottle on the floor).

Sounds very much like: "When I use a word," Humpty Dumpty said in rather a scornful tone, "it means just what I choose it to mean -- neither more nor less." "The question is," said Alice, "whether you can make words mean so many different things."

Claiming that a word means something different than its common usage is normally a sign that someone isn't really concerned with communicating.

Reply to
Richard Damon
