Applications "buying" resources

And "RTOS" stands for ... Real-Time Operating System.

If he hadn't explicitly said real-time in the first sentence, then I would just have thought that the system was overly complex, and unlikely to provide gains commensurate with the effort. But since we are discussing real-time systems, I think it is simply wrong. On the other hand, it seems DY is viewing all "real time" as "soft real time", where being late is allowed. Then the solution becomes merely overly complicated rather than wrong.

I wasn't really wanting to argue - many of the posts in this thread are just too long, and it would take too much time to follow everything. I know that many of the people here, including yourself and D Yuniskis, are thoughtful and experienced developers, so it is unreasonable for me to write off your ideas so curtly. I just think that the system DY is describing is complex, and seems to be based on the idea that some tasks (or rather, their developers) won't follow the rules for cooperating within the system, and yet it depends on tasks that /do/ follow the rules to make the system "economy" work.

Reply to
David Brown

I like the introspection that DY is going through. It isn't so much the conclusions, but the path with which I'm finding kinship.

Jon

Reply to
Jon Kirwan

It *sounds* arrogant but, once you embrace that ideology, you can come up with much cleaner and more robust systems. The "policies" can benefit from the services and protections that are explicitly provided *to* make user-land tasks more robust! (instead of complicating the kernel with still more layers which are inherently difficult to debug)

I'm sure I can get the "cost" down. The bigger problem was the second: how to involve the task in the decision making process WITHOUT "involving it" (i.e., having it run any code).

Correct. Same in my case. Likewise, if the page has disappeared, you can still *try* to access it -- but it will generate a fault which I can choose to handle in some predictable way (i.e., give you bogus data and signal an exception for you to handle -- so you *know* not to use that data that you MAY have fetched).

If you have to tolerate the ability to lose *all* pages (including those "wired down" for the task, itself), then you have to be prepared to just kill the task. Note that some tasks could be good candidates for this! E.g., any mechanism needs to be able to apply "grouped value" to sets of pages -- remove one and you might just as well remove them *all* (for example, some of the task's TEXT).

In my cooperative scheme, a task that has been asked to relinquish resources can opt to say, "Sure, I'll just terminate myself! All that I am doing is blinking a light..."

I need to come up with an encoding scheme that lets me group arbitrary sets of "pages" into "biddable entities" (still thinking along Robert's Dutch auction line). Then, the tougher task is figuring out a way of representing conditional actions in a rigid structure: "if I lose *this* bid on this set of biddable entities, then my *next* bid would be..."
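One stab at such a structure, purely as a sketch (every name here is hypothetical):

/* A "biddable entity" is an arbitrary set of pages; each bid offers
   a price for one entity and names the bid to fall back on if it
   loses -- NULL ends the chain ("if I lose *this* bid, my *next*
   bid would be..."). */
struct page_set {
    unsigned *pages;                  /* page numbers in this entity */
    unsigned  count;
};

struct cond_bid {
    struct page_set        entity;    /* what is being bid on */
    unsigned               price;
    const struct cond_bid *fallback;  /* next bid if this one loses */
};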

Reply to
D Yuniskis

It's called "probabilistic systems analysis" :> "What are the chances of THIS happening? And, what are the chances of my solution giving THAT result?"

The problem with the "real time" world is too many terms are "overloaded". E.g., "real time" vs. "real-time" vs. "Real Time".

It is much easier to think (and speak) in terms of "value functions" and "deadlines" (or their analogs). This makes it much clearer to all parties what the exact nature of the problem is and the approaches available to "solve" it.

If, for example, you have "hard deadlines" in the design, that *actually* says, "get this done before the deadline OR DON'T BOTHER TO DO/FINISH IT". The presence of *soft* deadlines tells you, "Hmmm... if we can't get this done in time, how are we going to address what remains of the task/chore thereafter AND what consequences will that have on the remaining tasks in the system?" (note that you didn't have to worry about this aspect for the hard deadline tasks -- when the deadline passed, you could AND SHOULD simply forget about them.)
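To make the distinction concrete, here's a minimal sketch of what such "value functions" might look like (the function names and shapes are illustrative assumptions, not anything from this thread):

/* Illustrative value functions: a hard deadline's value drops to
   zero at the deadline (late results are worthless), while a soft
   deadline's value decays gradually after it. */
double hard_value(double t, double deadline, double v)
{
    return (t <= deadline) ? v : 0.0;   /* late? don't bother finishing */
}

double soft_value(double t, double deadline, double v, double decay)
{
    if (t <= deadline)
        return v;
    return v / (1.0 + decay * (t - deadline));  /* diminishing value */
}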

If you treat hard deadlines as MUST be met (else the system is considered broken/failed), then anything with asynchronous inputs is a likely candidate for "can't be solved" -- because you can't guarantee that another input won't come along before you have finished dealing with the first... "running out of REAL time". Clearly, that isn't the case in the real world so, either these aren't "hard" deadlines *or* they are being missed and the world isn't coming to an end! :>

Separate the *consequences* (in your mind) of the missed deadline from the processing of the task, itself. I've found this makes it a lot easier to describe what you *realistically* want the system to do "in the REAL world". E.g., just because grabbing that bottle off the conveyor belt *is* a HARD deadline, that doesn't mean that you should devote an exorbitant amount of resources to RELIABLY catching every single one of them -- at the expense of, perhaps, OVERFILLING dozens of other bottles upstream of that!

This is where desktop/"best effort" environments differ. They treat everything as "the same".

No. A missed deadline is just a notification of a "fact". "Uh, General, Sir? The antiballistic missile failed to make its course correction at the correct time. We're about to be nuked..." Clearly there is some value in knowing that a deadline was missed (or *abandoned*) regardless of whether it was a hard or soft deadline. Handling the missed deadline is just a means of formally recognizing the fact (you could decide to increment a counter, light a big red light, chuckle softly, etc.).

My point is, how do you *know* that you haven't missed any? You're relying on your a priori design ("on paper"). If you forgot to take something into consideration *then*, chances are you've wrongly convinced yourself that you have "solved" the problem (and will dutifully ignore all evidence to the contrary! :> )

What if he wants to put a 100MB video on that floppy? Some things just can't be done. Deal with it. :>

Even consumers are starting to get enough sophistication that they understand that a machine tends to "slow down" when doing multiple things at once. And, that in those stressed operating conditions, things might not perform as they would "otherwise". E.g., choppy audio or video.

But, they would not be very tolerant of a video player that simply shut down (because it decided it was *broken* since it missed a deadline).

Yeah, I think ostriches have a similar "defense mechanism". :>

Not sure how effective it is, though, if the problem still exists.

I try to arrange things to eliminate the possibility of errors, where possible.

This can be done with some SCHEDULING ALGORITHMS (and suitable criteria placed on the tasks). But, again, that requires you to be able to predict *reliably* the characteristics of your tasks, etc. Wonderful if *you* are driving those tasks (e.g., everything is related to some periodic interrupt that *you* have configured). But, if you are responding to the outside world, it's a lot harder to get *it* (the outside world) to adhere to your wishes!

"Thou shalt not release buggy code" :>

Why assume the "bug" lies in the application? If you are going to *tolerate* bugs, what if the bug lies in the kernel itself??

Granted, expecting the task to be well behaved is an assumption. Just like expecting a task in a cooperative multitasking environment to relinquish the processor frequently is an assumption.

These constraints/expectations don't make system designs impossible -- they just increase the amount of "due diligence" that the developer must exercise before pronouncing the system "fit". If you (i.e., *I*) can come up with a MECHANISM (again, avoiding POLICY) that is tolerant of programming errors, uncooperative or malicious tasks, etc. then you have a more robust environment to work in. "Violators will be shot" :>

There are costs associated with this, of course. Both in terms of resources and performance. I *like* pushing functionality into the kernel when it can buy me some peace of mind (e.g., not having to worry about another task stomping on *my* memory; or hogging the processor; or ...)

Reply to
D Yuniskis

Can I rephrase that for you?

"A preemptive environment needs a lot of discipline to IDENTIFY all the critical sections" (?)

E.g., usually, you have a mechanism to help you "solve" the problem. The trick is *finding* them all since you have to methodically think: "what happens if an interrupt happens INSIDE this statement and my task is swapped out. Can some other task come along and, coincidentally, alter something that I was in the process of using?"
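For instance, a minimal sketch of the sort of statement that hides a critical section (the bracketing primitives are hypothetical -- whatever your kernel provides):

/* 'count' is shared with another task.  On most targets this
   read-modify-write compiles to several instructions, so a task
   switch can land in the middle of it. */
extern void disable_interrupts(void);   /* hypothetical primitives */
extern void enable_interrupts(void);

volatile unsigned count;

void bump(void)
{
    disable_interrupts();   /* bracket the critical section */
    count++;
    enable_interrupts();
}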

There is some value to an explicit "yield()" -- it tells the reader "OK, this is a convenient place for some other task to run". E.g., if I see a large block of code *without* a yield() someplace in it, I am alerted to the fact that there is probably some relationship (that might not immediately be obvious) between the instructions represented in that code block that doesn't take kindly to being interrupted.

Of course, a lot depends on how expensive your task switch is. If it is expensive, then you want to minimize superfluous yield()'s. But, this comes at the cost of increased latency (for other tasks).

The point is, you can readily use a cooperative environment in a lot of applications. I surely wouldn't use a preemptive scheduler when designing a microwave oven controller (unless the microwave oven also acted as a television...). Why bear that cost when a little discipline can do the trick?

Reply to
D Yuniskis

Sure, the discipline is mostly about identification. Solving the critical sections has a cost too, and may introduce other problems like priority inversion.

True, every rule has its exception. It's probably because I never write stuff that has big blocks of code. Most of the stuff I write only runs for short bits and then needs to wait for something, so you get automatic calls to the scheduler. I'd still frown on large blocks of code that are peppered with yield()s. It's too easy to add some extra code, and forget to update them. It's different if you have a small piece of code that takes a long time, like writing flash sectors in a loop. Having a yield() just before you write a sector wouldn't be so bad.

You can also run stuff in soft interrupts. Those are easier to track than tasks, and sometimes just as powerful. If your tasks end up like this:

while( 1 ) { wait_for_event(); do_things(); }

then you can replace them with a soft interrupt mechanism, get rid of a task stack, and possibly simplify the critical sections.
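A minimal sketch of such a soft-interrupt mechanism (the flag layout and helper names are my own assumptions):

/* Event sources set a pending bit; the scheduler drains them when
   convenient.  No per-task stack is needed. */
extern void (*handler[])(void);            /* one handler per event */
extern unsigned first_set_bit(unsigned);   /* hypothetical helper */

static volatile unsigned pending;

void raise_soft_irq(unsigned n) { pending |= 1u << n; }

void run_soft_irqs(void)
{
    while (pending) {
        unsigned n = first_set_bit(pending);
        pending &= ~(1u << n);
        handler[n]();                      /* the old do_things() */
    }
}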

Reply to
Arlet Ottens

Yes. If the "wait" is a "system call" (however you want to define that), then it can embed a reschedule() (contrast that with spinning in a tight loop)
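Something like this sketch, assuming hypothetical kernel primitives:

typedef unsigned event_t;
extern int  event_posted(event_t e);   /* hypothetical kernel query */
extern void reschedule(void);          /* hand the CPU to someone else */

/* "wait" as a system call: the reschedule is embedded, so the task
   gives up the processor instead of spinning. */
void wait_for_event(event_t e)
{
    while (!event_posted(e))
        reschedule();
}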

Or, *start* the write and then yield().

I have an executive that uses a structure like:

{
    ...
    do_stuff();
    mark();
    do_more_stuff();
    if (something) yield();
    keep_going();
    mark();
    ...
}

I.e., "yield()" relinquishes the processor but the task is resumed from the most recent "mark()". So, the task explicitly declares the points in its code that it "starts from" (mark doesn't implicitly yield)

At first blush, it looks very clumsy. But, it can be very effective (for small projects).
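The thread doesn't show the executive's internals, but one guess at how mark()/yield() could be built (on setjmp()/longjmp() with a single shared stack; everything here is assumption):

#include <setjmp.h>

/* One jmp_buf per task.  yield() never returns to its caller: the
   task is later re-entered at its most recent mark(), so automatic
   variables do NOT survive a yield() -- which is exactly why the
   style looks clumsy at first blush. */
#define NUM_TASKS 4                   /* hypothetical */

static jmp_buf restart[NUM_TASKS];    /* per-task resume points */
static int     current;               /* index of the running task */

extern int pick_next_task(void);      /* hypothetical scheduler */

#define mark()  ((void)setjmp(restart[current]))

void yield(void)
{
    current = pick_next_task();
    longjmp(restart[current], 1);     /* resume at its last mark() */
}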

Reply to
D Yuniskis

Fair enough. He is certainly good at showing the thought processes, and is not afraid to change his mind as he works through the ideas - that is something at least some of us can learn from.

Reply to
David Brown

The art is in making your mechanisms in a way that they are actually practically usable, not just in an Ivory Tower. This was the point of L4: to prove that microkernels can actually be used for efficient systems.

Same thing here: your idea sounded really cool to me, I just had doubts that the callback method can be implemented for a safe system.

A task must be tracking its memory usage anyway. "This page contains only free()d memory". "This page contains already-played audio". Now it would need an experiment to figure out whether that knowledge can be "exported" to an operating system memory manager somehow in a performant way (i.e. without needing an 'mprotect' syscall for every audio sample played).
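On a POSIX-ish system, the nearest existing hook I can think of is madvise(); a sketch of "exporting" that knowledge batched per page rather than per sample (the function here is a hypothetical example, not anything from the thread):

#include <stddef.h>
#include <sys/mman.h>

/* Called once the decoder's read pointer has moved past a whole
   page of already-played audio.  MADV_DONTNEED tells the memory
   manager the contents are now worthless, making the page a cheap
   victim -- one syscall per page, not per sample. */
void audio_page_consumed(void *page, size_t page_size)
{
    madvise(page, page_size, MADV_DONTNEED);
}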

Stefan

Reply to
Stefan Reuther

[...]

Of course this is the case in the real world, too.

User inputs have debouncing, so you can be sure the user will not hit that switch more than three times in a second. Networks have bitrates, so you can be sure that you don't get more than X frames per second. Audio has sample rates, so you can be sure to receive exactly 44100 / 48000 samples per second (and have to produce the same amount). Mass storage has seek and read times. Video has frame rates.

At least in the systems I work on. So I know precisely how many CPU cycles I may use to decode an MP3 frame.

Honestly? No. When I buy a hard-disk recorder which claims to be able to record two channels at once and let me watch a third, I expect it to work. That's what I pay them for. Plugging a TV receiver into my computer's USB port, running three instances of an MPEG codec, and hoping for the best -- that's what I can do myself.

I would accept it if the recorder says, "hey, these channels have such a high bitrate that I cannot record two of them at once". But I would not accept it if it "silently" damages the recording. At least not if it does that in a very noticeable way. If it drops a single frame every three hours, I'll never notice.

That's just my point: design the system so that this never happens. Sure, this is harder than doing a desktop best-effort system.

Those are probably similar things. For example, every UTF-8 related document says you should treat non-minimally encoded UTF-8 runes as an error. Now what should I do? Show a pop-up error message to the user? "Hey, your playlist file contains bad UTF-8!" 95% of them do not even know what UTF-8 is. So I ignore that problem. Which also simplifies a lot of other code, because it can assume that I'll decode every 'char*' into a 'wchar_t*'.
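(For reference, the non-minimal encodings in question are cheap to detect, should one choose to; a minimal sketch checking the lead byte plus first continuation byte:)

/* Returns nonzero if the bytes at p begin an "overlong" (non-minimal)
   UTF-8 sequence -- the case the specs say to treat as an error. */
static int utf8_is_overlong(const unsigned char *p)
{
    if (p[0] == 0xC0 || p[0] == 0xC1)           /* 2-byte form of U+00..7F  */
        return 1;
    if (p[0] == 0xE0 && (p[1] & 0xE0) == 0x80)  /* 3-byte form of U+0..7FF  */
        return 1;
    if (p[0] == 0xF0 && (p[1] & 0xF0) == 0x80)  /* 4-byte form of U+0..FFFF */
        return 1;
    return 0;
}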

[kernel asks task to free resources]

That's why kernels are usually written by much smaller (and better) teams than user-land code. Thus the kernel can isolate the buggy tasks from the proven error-free[tm] supervisor tasks, for example. Okay, it's annoying if the MPEG decoder crashes on that particular file, but the kernel should isolate that crash from the power management task, so the device can at least be turned off without needing a powercycle. In particular if powercycle means disassembling your car.

At least, that approach works quite well for "our" devices. Unfortunately, we cannot prove (in a mathematical sense) that our userland code is completely bug-free. I can construct a (far-fetched, unlikely) case that crashes my code, just because I simply have no idea how to reliably detect that. At least, my code crashes a magnitude less often than that of our favourite competitor :-)

Stefan

Reply to
Stefan Reuther

Yes. My first exposure to microkernels was through Mach. Many of the *ideas* made great sense. But, their implementation was too "kitchen sink"-ish. And, I think their attempt to chase a UN*X implementation as a "justification" for that architectural approach was a huge mistake. Had they, instead, said, "We're different" in much the same way UN*X "disowned" its MULTICS, er, "roots" (bad choice of words), I think they would have been more successful in "proving something".

I'm sure it can be if a "select team" implements the system. The problem is trying to open that system up for every TD&H (Tom, Dick & Harry). :<

Yes. And, when you *expect* to have to forfeit those resources, you refocus *how* you keep track of what you are doing.

For example, keeping the control structures associated with particular data *with* that data (since holding onto the control structures after discarding the data doesn't buy you anything).

I think the notification aspects and "value ordering" of held resources can be accomplished -- the kernel could always peek into the task to grab data concerning these resources IF it knows where to find that data.

The bigger problem is giving the task a say in holding onto those resources in a flexible enough way that allows the task to determine its own "resource pricing policy". If the task were to *know* that it has no further chance of reclaiming these resources AT THIS TIME, then the scheme by which it values them could be refined more.

If, however, it knows/thinks it may lose some/all of them, then it wants to be able to place conditional bids on keeping various subsets of them -- subsets that *it* defines. (e.g., I'm willing to pay 100 for these three pages; if that bid fails, I'll pay 100 for these *two* pages, forfeiting the third; if *that* fails, I'll pay 200 for this *one* page!)

I am hoping for an epiphany when my cumulative sleep deficit is in a bit better shape... :<

Reply to
D Yuniskis

Sure, but you don't know that he isn't going to hit "Full Speed Forward" and, a tenth of a second later, hit "Full Speed Reverse", etc. I.e., you can't (reliably) predict the future -- yet have to cope with it.

Correct. But, if that "appliance" can also make phone calls, control the spark plug firing sequence in your automobile *and* receive/decode satellite radio broadcasts, would you be upset if that third video stream had visual artifacts resulting from "missed deadlines", etc.? *That's* the sort of devices I'm involved with. The user knows the device can't do *everything* (just like a user knows his PC can't run *every* application CONCURRENTLY that it has loaded onto it). So, if given a means of expressing "preferences" ("values") for those activities/applications, the device itself could take measures to satisfy those preferences (instead of forcing the user to respond to an "insufficient resources" message and decide which things to *kill* -- since he can't tell them to "shed resources" :> ).

See above. (In such an environment) you *eventually* come to a situation where a user is asking more of you (device) than you can do with the fixed resources in your "box". If you *must* always be able to do everything, you end up with more in the box than you need -- or, lots of dedicated "little boxes".

If, instead, you allow the user to trade performance and preferences, you can do more with less (money, space, power, MIPS, etc.)

Yes. In my case, often even heavier handed (e.g., my calculator discussion restricting the character set to USASCII).

Or, little things like using unsigned data types for "counts" (so the problem of dealing with negative values simply doesn't exist)

Yes, but that is no guarantee that there are no bugs. It just shifts the probabilities around.

The problem I am usually faced with is very long up-times, limited/constrained user interfaces (a user might not even be "present") and, often, significant "costs" associated with failures (financial or safety).

I enjoy spending resources (MHz, memory, complexity, etc.) to improve these aspects of a product's design instead of "cosmetic crap".

Time to think about supper. Another bowl of pasta *really* would go down quite nicely! Though I suspect I should probably have something a bit more "substantial"... :<

Reply to
D Yuniskis

But, for that given example, it's easy, because I'm allowed certain reaction times :-)

The "keyboard driver" must react upon user input immediately. It must recognize the "Forward" request and the "Reverse" request to make sure nothing gets lost. I can just periodically check the user's last will, at places I'm ready to process it.

If it's noticeable, yes! Of course I get annoyed if audio gets distorted when I'm driving at 4000 rpm (when spark plug control has much work to do).

People who follow my company's press releases know that we make car stereos / satnav. And the people who drive these cars do not know what computationally-intensive processes happen in there.

Okay, people who are into computer graphics may understand that the digital map frame rate drops in the center of Paris, with its thousands of little streets, compared to the Australian outback with the next village 500 miles away. But even they - let alone Joe Sixpack - will not understand that the frame rate depends on the radio channel they're listening to. Digital radio is much more computationally intensive than analog FM, plus it depends heavily upon the codec and configuration in use by the transmitter, which the user doesn't even see.

Well, it was hard to make this work, but we did it.

You still have the option to know this beforehand and reject it. I prefer this a lot over "trying, hoping for the best, and cleaning up the mess if it didn't work" aka handling missed deadlines.

If I know I cannot do X and Y simultaneously, I decide which of them is more important, and then *deterministically* suspend one of them. However, this happens rarely enough that I can't come up with a real-time example (we've a few instances of this happening in batch tasks which would miss their "soft" deadlines otherwise).

Stefan

Reply to
Stefan Reuther

Then you are essentially removing features/capabilities from your product just to avoid the POSSIBILITY of having to deal with this at run time. Even if the circumstances never actually materialize!

Visit a medical office and see what the lack of integration results in. Do you think a company that designs EKG's can't *also* design a pulse oximeter, infrared thermometer, digital sphygmomanometer, heparin pump, etc.? So, why have so many dedicated boxes -- each with their own screen and "user interface conventions"? (this is slowly changing as that industry realizes they can't afford the duplication of hardware, maintenance costs, etc.)

E.g., one would think someone shelling out $1,000,000 for a tablet press could *surely* afford an extra $10,000 for an ejection force monitor -- yet, you find that they *don't*! OTOH, if you offer that feature as one of a suite of features (NOT ALL OF WHICH CAN WORK AT ALL TIMES IN ALL CONDITIONS) and charge ~$1,000 for it, suddenly you have a competitive advantage: "Sure, we'll take it!"

You are assuming you can predict everything that can happen and address all of those things. Sure, you can say, "well, if this happens we need X time to recover..." but that's just a CYA way of saying "we won't deal with conditions where we have to react quicker" (it's the customer's problem).

Handling missed deadlines doesn't have to be expensive. E.g., the tablet press example I mentioned (another reply) can handle the worst case missed deadline (e.g., a "bad" tablet being erroneously accepted) by shutting down the tablet press and lighting a big red light. If it misses a less important event (e.g., an ejection force profile), it simply "returns no data" for that event.

You're avoiding the issue (i.e., not even *knowing* if you have missed a deadline) by claiming that you handle "all cases, 100% of the time". I.e., why *detect* something if you can't handle it?

Which is exactly what *I* do. But, only after I *know* I can't handle both of them (because the LEAST IMPORTANT ONE ends up missing its deadline). You can watch how "it" is working and tailor your approach/algorithm to what your current operating conditions are.

For example, I have a tiny audio client (NIC, CPU, stereo amp) with fixed (minimal) resources. It has some signal processing abilities that consume resources. If the current network (server) conditions deteriorate to a point where the client can't reliably produce audio with the existing buffer sizes, it has three options:

- get the server to transcode the audio to a lower bit rate (but, I am at its mercy so I can't count on this being a viable option in any particular situation)

- get the server to switch to a different codec (this is expensive as it can require replacing the code in the client "on-the-fly"; and, the server may not want to comply)

- shed capabilities (e.g., some of the signal processing though this affects the ongoing quality of the audio experience -- different aspects have different costs)

- drop frames (least desirable)

Sure, I can avoid all of this "work" -- I can either increase the resources in the client *or* change the specification of the device (i.e., make the problem go away by just claiming it is beyond the scope of the device).
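As a sketch, those options amount to a degradation ladder, tried in order of decreasing desirability (the names and try_fallback() are invented for illustration):

enum fallback {
    ASK_LOWER_BITRATE,    /* server may refuse                   */
    ASK_OTHER_CODEC,      /* expensive: may mean new client code */
    SHED_DSP_FEATURES,    /* quality drops, audio keeps flowing  */
    DROP_FRAMES           /* last resort                         */
};

extern int try_fallback(enum fallback f);   /* hypothetical */

void shed_load(void)
{
    for (int f = ASK_LOWER_BITRATE; f <= DROP_FRAMES; f++)
        if (try_fallback((enum fallback)f))
            return;       /* stop at the first step that sticks */
}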

When I do a new design, the first thing I do is research the application. Often, that means talking to users. *Usually*, it means disregarding what they *say* (in favor of determining what they actually *mean*). It is helpful to pose value questions to the user: "What if..." and "What if it *can't*...". My favorite scenario (which *ALWAYS* comes up) is the "I don't care" response. My stock reply is to focus on the notes I am taking and just audibly say something like "... shut down and catch fire". :> It's amazing how quickly they can rephrase that "I don't care"!

Then, the application is factored into a reasonably fine set of "chores" (avoiding the use of the word "tasks"). The temporal requirements of each are identified -- do they have hard deadlines or soft ones. Most "chores" have *some* temporal aspects even if they aren't what you would traditionally think of as RT (but they tend to have very *soft* ones).

Then, the *consequences* of missed deadlines are considered -- what is it worth to the application/user to meet this deadline (and what do you lose if you meet it "late"). It's only at this point that you can begin to address real hardware/software/requirements tradeoffs.

The most important chores get addressed first -- regardless of their temporal requirements. Then, lesser important chores get added into the mix until you have fleshed out all of the wish list.

You can map "importances" (avoiding the term "priorities") to each of these chores. This lets you apportion resources and gives you an idea of what the maximal capabilities of your system will be. E.g., "I can keep the ignition timing dead to nuts, ensure the ABS is always available, run the emissions controls and any three of the following..."

If your marketing folks tell you "that's not acceptable, it must do...", you can now counter with "*that* will cost you..."

I had one group *insist* that they needed a certain feature in a product design. That feature would complicate the design and add considerably to the product's cost. I was able to tell them (from *their* sales records) that only *one* of their customers had ever asked for that particular optional configuration (at which point, top management reminisced that the particular option had probably never been *used* by that customer!).

[i.e., you have to know what to *ignore* from your users. Salesmen always *want* everything imaginable -- and don't want it to COST anything! Forcing them to put sales projections on particular configurations, so that pricing can be tied to development costs, is the easiest way to get them to rethink their "demands"]

Have your satellite radio *also* control the ignition timing of the vehicle. Then, post your results :>

Reply to
D Yuniskis

Exactly. And if you express it this way, why not. I call it "better safe than sorry".

I know that I have to produce audio samples at 44.1 kHz rate. I have designed my system this way. The hardware can still cope if I don't produce them fast enough, because I configured my hardware transmitter to send silence in this case. This catches the case that I happened to make a mistake in the design (which I do not make alone, and do not implement alone, and cannot formally prove in any case).

But then you have a non-realtime component in the data path, namely the network, and reacting to that is of course necessary.

Or do you measure "oops, this DecodeMPEGFrame took too long, this seems to be a complicated MPEG file, let's ask if they have this in cheap ADPCM, too?".

Of course my audio also starts stuttering if the CD drive doesn't give me enough audio data in time. But the system is designed to have enough CPU power under any circumstances, and have enough memory to compensate "typical" CD problems, so I don't have to ask the GUI people "hey, drop your frame rate a bit, I need more power to decode this file".

Stefan

Reply to
Stefan Reuther

Yes -- maybe a little too much anthropomorphism.

My understanding is that you want an intelligent tradeoff. Relating them to a common single parameter is the technique. This is done within the context of satisfying fixed constraints. It is similar to an economic system where money is the common metric within physical, legal, and chosen ethical constraints.

That's the easy case! Release all of it ;-) or all of it within the fixed system constraints.

The primary task is to choose the best metric. I choose elapsed time, but that might not be the primary one for any particular system. For each task you need to establish an approximate correlation to the metric. It doesn't have to be perfect.

That's a separate problem which faces any task needing to shed resources given an overall optimization technique.

And that could be the best system response. The goal is fairness among tasks but overall system performance, which may be best served in demanding phases by shutting down certain functions.

What tasks "want" is to satisfy their constraints and optimize certain parameters, such as update rate. Again it comes down to mapping separate metrics (display refresh rate, for example) onto an overall system quality. Some analysis is required and perhaps some configuration for particular applications.

Look at an economic analogy: if release is delayed two months it will cost us $500 k in sales. If we release on time, we estimate an additional $200 k in support and $200 k in extra engineering time. This is contrived, but a similar type of problem trading off disparate choices with a common metric.

--
Thad
Reply to
Thad Smith

Hi Thad,


Understood. What I am trying to do is figure out how this "currency" would work.

E.g., the only way I can visualize a scheme where "time" can be the currency is if a task makes bids like "T time for M memory" (again, I am only dealing with memory in these examples, so far).

So, an MP3 player might bid "20 ms for 1 unit" while another application that needs 10 units at a time (to actually *do* anything useful) could bid "35 ms for 10 units". In this scenario, the MP3 player wins as the other application is effectively bidding 3.5ms/unit.

However, this other task would never bid on just one unit as it can't *do* anything with just one unit.

Similarly, the MP3 player might never bid on 10 units. Or, if "forced" to do so, it would bid something disproportionate since it isn't "worth much" to it to have all that extra memory.

(that's the only way I can see time being a "negotiable" quantity)
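A sketch of that per-unit comparison (all names here are mine):

struct mem_bid {
    unsigned time;     /* ms of the time-currency offered */
    unsigned units;    /* pages wanted, all-or-nothing    */
};

/* Returns the bid paying more per unit: 20 ms for 1 unit beats
   35 ms for 10 units (3.5 ms/unit), as in the MP3 example above. */
const struct mem_bid *winner(const struct mem_bid *a,
                             const struct mem_bid *b)
{
    /* cross-multiply to avoid floating point */
    return (a->time * b->units >= b->time * a->units) ? a : b;
}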

The point I was making was how to express the "bidding".

I guess the first step is to decide how the "pricing" process works. I.e., does the kernel set a price and have tasks say how much they would be willing to buy *at* that price? Or, do the tasks make bids for what they would like and the kernel arbitrates between them... (the reference framework has a big impact on the semantics)

Time makes sense from an engineering perspective. But, I am not sure it makes sense from the user's point of view. E.g., it requires the user to understand more about the nature of the various tasks.

Yes. I'm just thinking aloud...

I assume you mean "is NOT fairness" (?)

What I would like is a "currency" that implicitly constrains tasks based on the current value of that currency (wrt the resources it is buying). So, if a task can't find a way to optimize itself in a given "commodity market", it can just drop out of the market (i.e. exit()). So, this activity can be handled automatically without having to prompt the user.

E.g., if gasoline is $4/gallon, people "self select" whether they will be traveling over a given holiday -- or, alter their destinations to fit within their "resource budget".

Understood. I'm just trying to see how to express it in a way that makes sense to a user. E.g., what if another release (task) has corresponding figures of $100K sales/40K support/40K engineering and yet another has 1M sales/400K support/400K engineering...? And, change the "two months" to "3 weeks" in one case vs. 1 year in another (i.e., it is hard to look at the numbers *intuitively* and figure out where the dollars are best spent)
Reply to
D Yuniskis

The problem is you can only solve problems that can be 100% specified at design time. I.e., you'll never come up with an iPhone (e.g.) or other "expandable" device.

What do you do if you miss an audio packet for your cell phone? Do you even *know* that you missed it?

Why is the network *not* a real-time component? In my case, I control the entire "system" so the traffic on the network is of my design, the protocol stacks have been designed with deterministic behavior, etc.

But, like the other components, it is explicitly designed to deal with "overload" because it knows that the other components using it have mechanisms to cope with this.

If, OTOH, the "server" happened to "notice" that packets were not getting out onto the wire "before the deadline" and simply *stopped* working, then I would have designed a brittle system.

I look at the actual timeliness of each "result" in the system and adjust the system's resources, dynamically, to maximize the "value" of the functionality that it provides. E.g., if that means shutting down or altering a "desirable" feature in order to continue providing a "necessary" feature, so be it.

Reply to
D Yuniskis
