Applications "buying" resources

My approach to these problems has evolved as my solutions have become more sophisticated. With the sort of processing power and resources that you can cram into a few cubic inches *today*, there are far more possibilities that can be explored.

I.e., if people think it a worthwhile use of resources to make menus that fade in/out, then *I* can consider it a worthwhile use of resources to dynamically control the resources that applications use!

The parallel to windowed display environments is a good one. I.e., you can choose to naively handle your "expose event" and repaint the *entire* window. Or, you can try to use some smarts to only repaint the newly-exposed portion. Or, you can let the window manager handle some or all of this with backing store.

Each approach has different levels of sophistication and cost. But, the cost of the *framework* is distributed across all subsequent application instances. I.e., once you *can* do these sorts of things, it becomes *easier* to do them (imagine running a windowed environment on a KSR-33!) Just because you don't have such an environment *now* doesn't mean you can't *develop* one (e.g., the "power" resource hasn't even been discussed, here, yet will be *increasingly* important in future product designs as users expect more functionality in smaller sizes and longer battery lives)
Reply to
D Yuniskis

Only one comment. I don't remember ever seeing a KSR-33. I have seen, used, and even repaired (I say very modestly, because whoever designed the darned thing most assuredly went right into a mental hospital afterwards) a KSR-35. Note the -35. The only -33's I ever saw were of the ASR (with tape punch and reader) variety. Not that KSR-33's didn't exist. I wouldn't know. I just never saw one. :)

Still trying to imagine the windowed -33, though. I can still readily read ASCII punches off of punch tape, so I'm trying to think how it might be done using the tape punch, now. ;)

Jon

Reply to
Jon Kirwan

You're making my point! Voluntary multitasking *works* when a *fixed* developer (or group of developers) undertakes the entire system implementation. They are aware of the risks and *own* the design -- if they opt to bend the rules for a particular portion of the design, they are aware of the costs/benefits.

But, once you open the doors to potentially noncooperative developers, then the potential for co-executing applications goes out the window (Windows 3 was really just a glorified "swap one executing task for another"). If, on the other hand, Windows 3 had mechanisms whereby it could (and WOULD) terminate offensive tasks/applications, then the experience would have been very different (i.e., people would be complaining to the offending application vendor that *their* product "didn't work" instead of complaining that "Windows locked up, was unresponsive, sluggish, etc.")

I am doing that by informing tasks when the system is in need of resources and *hoping* they will relinquish the resources that they don't *need* (what you need and what you *use* are often very different). The problem with my current implementation is that *other* developers can grin slyly and choose to ignore these "requests". The only recourse I currently have is the threat of unceremoniously killing the process. (this is something that Windows 3 lacked -- it was up to the *user* to take this action and few users would/could do so in practice)

If, on the other hand, I can redefine the interface as one of "Your current resources 'cost' X units. You only have Y units available. You *will* shed X-Y units worth of resources or *I* will take them from you in an unpredictable way (i.e., you will crash; your customers will think you are a crappy product and will think the other *cooperative* applications are *so* much BETTER than you!)" I.e., there is no longer a voluntary aspect *except* the choice the kernel gives you to decide *which* resources you want to "sell off" (in avoidance of *taking* them from you indiscriminately)
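
A minimal sketch of what such a kernel-to-task contract might look like (all names here are hypothetical, not an actual API):

/* Minimal sketch of the "shed X-Y units or I take them" contract
 * described above.  All names are hypothetical, not an actual API. */
typedef unsigned resource_units_t;

/* Upcall made into a task when it is over budget.  'deficit' is X-Y,
 * the amount the task must shed voluntarily; the return value is how
 * much it actually released.  Anything short of the deficit invites
 * the kernel to reclaim resources indiscriminately. */
typedef resource_units_t (*shed_handler_t)(resource_units_t deficit);

struct task_budget {
    resource_units_t cost;    /* X: cost of resources currently held */
    resource_units_t budget;  /* Y: units this task is allowed       */
    shed_handler_t   on_shed; /* invoked by the kernel when cost > Y */
};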

I see *lots* of potential gain! This flexibility lets applications in a lightly loaded system achieve greater levels of performance than they would if they had been artificially constrained to the level of resource utilization required to coexist in a "fully loaded" system (i.e., every app running concurrently). And, still gives the user the ability to dynamically add new tasks without explicitly having to "restart" or "reconfigure" the previously running applications. The hardware tries to be maximally utilized at all times instead of innocently idled in preparation for an application that *might* be started a few milliseconds from now... or now... or now... (or NEVER!)

Reply to
D Yuniskis

KSR just didn't have the tape.

For the sheer perversity of my comment...

Imagine a curses application. Now, imagine a windowed environment layered on curses (I have deployed products like this -- very effective and economical -- though not GRAPHICAL!). Now, imagine that same environment running on a fast, though *dumb* TTY (e.g., a PTY). Finally, imagine it running on a HARD COPY device :-/

(I once wrote a file system driver layered atop a *block* interface to a 9 track tape drive. It was hilarious watching the transport "seek" each sector fetched from the "filesystem"! Perverse sense of humor...)

Reply to
D Yuniskis

Hi Robert,

[snip]

Well, it isn't really "economics" but, rather, a simple model to try to apply -- one that might be easier to pitch to users...

Hmmm, this (and below) is a different approach than I had considered. Rather, I was driving it from the other end (i.e., the kernel telling you to shed resources you can't "afford" -- since *it* knows what your budget is and what you have currently "spent").

Yes -- but actually only when there is some task that needs something that it doesn't already have. E.g., if the free memory pool isn't empty and a task asks for a chunk of memory that the kernel can satisfy from this pool, then there is no need to involve the other tasks -- they can keep the resources they have.
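
To illustrate, a rough sketch (with made-up helper names) of an allocation path that only bothers the other tasks when the free pool can't cover the request:

/* Hypothetical kernel-side allocation path: other tasks are only asked
 * to shed memory when the free pool cannot satisfy the request. */
#include <stddef.h>

extern void  *pool_try_alloc(size_t bytes);          /* assumed helper */
extern size_t broadcast_shed_request(size_t bytes);  /* assumed helper */

void *kmalloc_negotiated(size_t bytes)
{
    void *p = pool_try_alloc(bytes);
    if (p != NULL)
        return p;            /* pool sufficed; nobody else is involved */

    /* Pool exhausted: ask the other tasks to give something back,
     * then retry.  If they don't comply, the kernel's enforcement
     * policy escalates from here. */
    if (broadcast_shed_request(bytes) >= bytes)
        p = pool_try_alloc(bytes);
    return p;                /* may still be NULL: caller must cope    */
}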

Understood -- having played that game on eBay in years past.

Understood.

Yes. The problem is consequential to having three different resources to "manage" (bid on). And, can be complicated by the introduction of "memory currency" vs. "power currency" vs. "timeslice currency" (though a single currency means you have to arbitrarily decide the relative worth of power/time/space)

Yes. Since the tasks (bidders) bear those costs, it is in their best interests to minimize them unless it *isn't* in their best interest (as complexity might have some value for particular tasks)

No disk (in this implementation). So, memory just gets discarded -- the application mainly has to keep track of how it *created* the stuff in that discarded chunk of memory so that it can recreate it when/*if* needed.

Sorry, clueless as to "Economic Theory" and terminology thereof. :> But, I think I understand your point.

Understood. One "solution" would be to let them submit multiple bids -- each as a tuple (I hadn't previously considered dealing with *all* resources concurrently; rather, thought each would be managed independently and the kernel -- or its agent -- would keep (re)setting the price for each resource). I wonder if N bids would suffice for N resources? I.e., "what's your highest set of bids for resource A? B? C?" and then having the "winner" picked (Q: do you compare all A bids with all other A bids? Or, any one bid from task A with all *other* bids from other tasks, etc.)
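
For what it's worth, a bid-as-tuple might look something like this (purely illustrative; the comparison question above is deliberately left open):

/* Sketch of a per-task bid covering all three resources at once
 * (tuple form).  Purely illustrative. */
enum { RES_MEMORY, RES_CPU, RES_POWER, RES_COUNT };

struct bid {
    unsigned task_id;
    unsigned offer[RES_COUNT];   /* "currency" offered per resource */
};

/* One simple comparison policy: consider one resource at a time and
 * let the most aggressive bidder for that resource win it.  Whether
 * per-resource or whole-tuple comparison is "right" is exactly the
 * open question above. */
static int bid_wins(const struct bid *a, const struct bid *b, int res)
{
    return a->offer[res] > b->offer[res];
}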

I don't think there is a "simple scheme" when dealing with these sorts of resources. E.g., how do you manage a power budget on a portable device "simply" -- yet provide maximal *service* to the user?

Correct.

Understood. This is how I handle service providers -- deciding when to fork new threads in the server, etc. Startup costs for me to spawn "yet another thread" are pretty small -- allocate another stack, etc. (since it will be using the same code image that the other sibling threads are using). But, the algorithms become more "art" than "science" and tuning becomes a big part of the design and deployment. I'm looking for a solution, here, that "finds its own sweet spot" instead of forcing me (or "something") to *impose* one.

Yes -- and the sizes of those intervals become "magic numbers" that are hard to "justify" (on paper) though easily *explained*. I'm going to stew on the dutch auction concept and see how easily that would work. Maybe run some simulations of competing tasks...
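
A toy simulation of the descending-price (Dutch) auction idea, with made-up valuations, might be as simple as:

/* Toy simulation of a descending-price (Dutch) auction among competing
 * tasks.  Valuations and the price step are made up. */
#include <stdio.h>

#define NTASKS 4

int main(void)
{
    /* Each task's private valuation of the contested resource. */
    const int value[NTASKS] = { 30, 55, 42, 18 };
    int price = 100;                 /* kernel starts the price high   */

    while (price > 0) {
        for (int t = 0; t < NTASKS; t++) {
            if (value[t] >= price) { /* first taker at this price wins */
                printf("task %d buys at %d\n", t, price);
                return 0;
            }
        }
        price -= 5;                  /* kernel lowers the asking price */
    }
    printf("no takers\n");
    return 0;
}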

Thanks!

Reply to
D Yuniskis

Oh, yes. I know what the designations meant. I just never saw one. By the time the -33 came out (after the -35), it seems everyone bought the tape unit to go with it. Never did see one without it.

People used to put a piece of paper tape into the dashpot and run the head all the way over to the opposite side and hit RETURN to slam and jam it. That was a pain, at times, to clear out.

Okay. I'm imagining this on a Diablo daisy wheel system with a split ink/erase tape dispenser, now. Sheesh.

I used to write stuff for 800, 1600 and 6250 tape drives. Let's say, 'tried' anyway. You just made me remember the gaps and all the difficulty of keeping rewritten blocks from "walking" into the gap and making further reading... a problem.

Jon

Reply to
Jon Kirwan
[attributions elided]

I've seen a few of them at surplus equipment auctions, etc. (I still have an ASR-33). I suspect some may have been "de-A"-ed to become K's. If the tape punch or reader got munged, it was usually not something easily fixed (mechanical kludge).

I can recall writing programs to print "ticker tape ASCII". Mindless effort but always amusing to see some "message" come streaming off the PTP.

I.e., you *can* do it but wonder why anyone *would*! Sort of like a rotary dial telephone that generates touchtones.

My approach only (pseudo-)reliably worked for R/O filesystems. I would build an image of a filesystem and then write it to the tape. Then, mount it R/O and watch the reels grind. Gotta wonder what sorts of currents were tossed around starting and stopping and reversing them as quickly as it did!

The "character" mode driver was the first one I had written (under NetBSD) so I did the block device as an "exercise". Watching the drive "thrash" is the same sort of "therapeutic relaxation" that one derives from watching a pen plotter (I guess "normal people" would watch tanks of fish???)

Reply to
D Yuniskis

If you're talking about memory, the user could select the number of bytes. Some applications already do that, e.g. my browser has a preferences menu where I can set the number of MB for a cache. I think this works better than a dimensionless unit that I have no idea what it means.

None of my desktop applications have ever shown me a message like that. I'm using virtual memory, and applications usually become too slow before the VM runs out, so I just kill them and restart them. Typically, such high memory consumption is caused by a memory leak in the application, in which case none of your fancy mechanisms would help.

Like others have said, try to come up with a number of realistic cases that cannot be solved in an easier way.

I don't see how such a "bursty" MP3 decoder would be any better than an MP3 decoder that keeps a constant 1 second buffer.

Just run it at a low priority, and it'll grab whatever time is left over.

No, I don't want to deal with that many details. A simple preference menu with a MB slider is good enough. If I get an application that doesn't behave, I'll uninstall it, and find a replacement. That strategy has worked fine so far.

Certainly. A crappy developer will simply ignore all requests from the kernel to reduce resource usage. A good developer won't need such requests, because the application will behave nicely.

Reply to
Arlet Ottens

Not ACFAIK... ;-)

Actually it *is* economics, and that was, I think, my major point. Money and markets in the real world are fundamentally resource allocation mechanisms. And all the complexity and variations that go with that. Central planning, free markets, socialism, laissez-faire, capitalism, government intervention, all end up with analogs in a scheme like this. And many of the standard models clearly could be applied with little or no adaptation.

Of course I still think it's far too complex (and unpredictable) to use in most computer systems. But it would be an interesting thing to study and develop.

An externality is a cost (or benefit) that's not captured in the transaction, and is often borne by others, thus distorting the true price of the transaction, leading to incorrect economic decisions. For example, a factory discharging pollutants into a stream is not seeing the full costs of that pollution, but the people downstream from there will pay the price. Let's say that discharge ends up causing health and other issues that cost* directly and indirectly $10 million. Since the factory is not paying that $10 million, its cost to dispose of that waste is $10 million too low, and thus it does not make the "correct" economic decision to not use that form of discharge unless the alternatives cost more than $10 million.

*Ignoring the morality of assigning dollar values to human suffering (and remembering that we do it all the time anyway)
Reply to
robertwessel2

And where do you specify how much CPU you get to use? And how much physical memory? And how much of the device's *power* budget is consumed by this application?

The "dimensionless unit" allows the user to rate the relative importance of individual applications. Obviously, application developers would have to provide a scale for them to use to put this in perspective. Note that this scale also gives them an idea of how "efficient" the application is.

Actually, my fancy mechanisms would help in exactly this case! When asked (told) to relinquish resources, your leaky application would throw up its hands and say "I've released *everything*!". The kernel can then say, "you're a resource hog. I'm terminating you!" (instead of waiting for the user to *somehow* discover how many resources that task has "tied up").

[note in my system, all your resources are tracked by the OS -- as it must be able to kill you and reclaim those resources. So, the OS knows more about "you" than *you* do]
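
A sketch of the kind of per-task ledger implied here -- the kernel already has to track everything it would reclaim on a kill -- with hypothetical field names:

/* Hypothetical per-task ledger: everything the kernel would have to
 * reclaim if it killed the task, hence "it knows more about you than
 * you do". */
#include <stddef.h>

struct task_ledger {
    unsigned  task_id;
    size_t    bytes_held;       /* heap + stacks charged to this task   */
    unsigned  handles_held;     /* files, sockets, IPC endpoints, ...   */
    unsigned  cpu_ticks_used;   /* metered execution time               */
    unsigned  power_units_used; /* metered energy, if the HW reports it */
};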

How do you define "easier" -- "add more hardware until you have safeguarded against every potentiality... regardless of how unlikely they might be in practice"? If you can get away with that with your marketing folks (and with your *market*!), GREAT! You can write your applications in Java, run them on a 3GHz machine with a 500AHr lead acid car battery mounted on a motorized cart (to transport it).

:>

What if I want to skip forward 3 minutes? *You* have to read the next 2:59 of data from the MP3 file (which may be on the other end of a network connection) to *find* the 3 minute mark. If I've already decoded that 3 minutes of audio, I can produce it "instantly".

[imagine the same example but with a "VGA" VIDEO codec -- 10-20X the data requirements]

And why would you waste a sizable fraction of a megabyte on an MP3 buffer? Why not ~12K (IIRC, that's the smallest amount that you need -- apologies if I have misremembered this... my brain is frozen from babysitting all of the citrus trees -- to be *guaranteed* to produce audio)? Or, why not 100MB so you can buffer the entire side of an album/live concert? What's so magical about 1 second? Are you *sure* you'll be able to get the next second's worth of audio when you need it? What if some other higher priority task comes along and/or the network is tied up moving someone else's data with higher QoS guarantees?

In my case, I can take advantage of whatever resources are "lying idle". With your approach, you have to define "at compile time" (?) what resources you will use and *stick* with them (since you won't let the OS tell you when to relinquish them)

What if it isn't inherently a low priority task? I.e., if it has to take *the* top level lock on the filesystem in order to guarantee that the filesystem remains quiescent during its scan, then you surely don't want it happening "whenever there is some free time".

Exactly! If an application DOESN'T BEHAVE (according to the rules laid out and ENFORCED by your OS), you stop using it. This is exactly the "stick" that I am trying to implement in my resource sharing scheme. The thing that is missing from my "civilized" approach (i.e., expecting applications to cooperate).

The "currency" idea gives you that "slider" -- move it up and this task gets a greater "resource preference"; down and it gets less consideration. No need for you to even be concerned with the numerical details behind the slider's position!

No. You are still thinking of static implementations. You would constrain each application to use some small portion of the total resources available in the device JUST to make sure the other applications could *also* use their small allocations *when* they happen to be co-executing with the first application (or, you will give each application full rein of the hardware and only allow *one* application to run at a time).

Turn off your paging file. Wire down all the physical memory in your machine. *Now* show me what you can run conveniently in that environment. Then you'll see the assumptions your developers made with that software ("Gee, I expected to have a near infinite disk drive to use as secondary storage. What do you mean there's no swap??")

Reply to
D Yuniskis

Well, I never know if I've got "unknown siblings" out there...

Ah! Sorry, I don't have a "liberal arts" education so never had to take anything other than hard sciences/engineering courses. :< As a result, there are many fields that are "rocket science" in my book.

I think you need to give active agents some sort of "currency" to use in expressing themselves. In people, "money" is the thing that comes to mind. It's an easy (though overly simplistic) thing to measure -- 2X is worth twice as much as X.

We use the analogy all the time in casual conversation about NON-monetary issues. E.g., would you do _______ for $_____? What about twice that amount? Four times? I.e., we never expect to receive that amount yet use it in trying to express how firmly we believe something, how "corruptible" we are, etc. Or, my favorite, how much of a premium would you pay for a device that *won't* break? (this is different than a lifetime warranty -- which will *repair* that device when/if it breaks)

Ah, OK. Then there are hundreds of such examples that come to mind! :-/

But, in the scheme outlined, those costs *would* be accounted. I.e., execution time is "metered" as is memory usage, etc. So, while you might not know them a priori, you could put a number on them after the fact. And, conceivably, could use this figure in subsequent "resource negotiations".

I'll have to look at your auction idea in some detail. Clearly the scheme I have in place will only work *against* my applications -- allowing others to "bloat" at their expense.

I don't see any legitimate countermeasure to take. If I give *preference* to "signed" applications (i.e., those that are "certified" as "well-behaving") then foreign applications will complain that they are unfairly targeted for termination (e.g., if I adopt the policy of giving preference to "known cooperative" applications when deciding who to terminate in a resource shortage). If I set aside some portion of the system's resources for "signed" applications and leave the unsigned ones to compete in a different arena, then I risk wasting resources in one or both of those arenas.

I am pretty sure I need a mechanism that forces everyone to play by the same rules and lets users compare apples to apples.

Reply to
D Yuniskis

The application will use the minimum amount of CPU and power that it needs, given the amount of memory it has available. If I notice an application is sluggish, I may try some different memory settings and see if that helps.

The kernel can also detect an out-of-memory condition, and kill the biggest resource hogs, without any warning/request mechanisms. You may want to look at the Linux OOM killer for ideas.
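
For reference, the Linux OOM killer's per-process bias is exposed via /proc/<pid>/oom_score_adj (range -1000..1000; higher values make the process a more likely victim). A minimal sketch of poking it from C:

/* Minimal sketch: bias the Linux OOM killer against a given process by
 * writing its /proc/<pid>/oom_score_adj file. */
#include <stdio.h>

static int set_oom_score_adj(int pid, int adj)
{
    char path[64];
    FILE *f;

    snprintf(path, sizeof path, "/proc/%d/oom_score_adj", pid);
    f = fopen(path, "w");
    if (f == NULL)
        return -1;
    fprintf(f, "%d\n", adj);
    return fclose(f);
}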

No, "easier" in the sense that you come up with simpler software designs, like I suggested, such as a combination of a user preference tab, and kernel-only mechanisms, like virtual paged memory, and some level of trust that your apps are well behaved.

Well, you suggested a 10 second buffer. How is that going to help with skipping forward 3 minutes ? And pre-decoding a 3 minute MP3 file will cost a lot of time, memory and power. Maybe after listening to the first 5 seconds, I want to skip ahead another 3 minutes, and all that effort will be wasted. Usually when people are skipping, it's okay to provide a rough estimate of how many bytes you want to skip. Alternatively, use an audio container format with proper time synchronization.

My 1 second was just an example, after you suggested 10x the amount. If a smaller amount works, use a smaller amount.

There's no advantage in using a larger buffer for MP3 decoding than necessary to guarantee no missing samples.

Then wait until there is some free time, and run it at higher priority. Or better, design it such that it doesn't need the lock.

Why not simply let the user decide not to use the application anymore ? Maybe the application isn't well behaved, but the user may still want to use it, because it's the only one that solves a certain problem in certain cases. I wouldn't want the OS to kick it out, just because it slows down the system.

No, I won't turn off my paging file, or remove physical memory, and everything will work just fine. How's that ?

"premature generalization is the root of all evil"

Reply to
Arlet Ottens

You're just explaining what I shortened to "it will work": whereas a batch system will take as long as it takes, a real-time system must produce a result within certain time constraints. Those differ between a VoIP system and a deep space probe, but the principle is the same.

The problem is that you cannot control the "rarely". Of course it happens only rarely that your reactor overheats at the same time the mains voltage jitters, the room heater runs at maximum, and an intruder kicks against the surveillance camera, stressing the image stabilizer, but you'd still better design your reactor control software to handle that worst case.

Of course not. But if it's a good telephone company, their switches will either accept a connection (that is, they'll have enough computing power to run the codec, and enough wire to transport it), or refuse it (giving you a busy signal). At least the good ones don't start stuttering when the network is overloaded.

As a customer, I wouldn't accept my 20-page fax to be interrupted in the middle because the switch decided, "hey, I need more power, I downgrade this connection from 64 kbps to 12 kbps".

My software is partitioned into "real-time" and "batch" tasks. The real-time components are designed to fit on the CPU in worst-case conditions. For the MP3 player, decoding is a real-time task, but building up the media index is a batch task, which is done when it's done. Which means: it's done faster if the CPU has nothing else to do.

But *how* does it ask?

Scenario: task A uses "extra memory". Now, task B comes around, asking the kernel "hey, I'm important, I need memory". The kernel asks task A "hey, give me back some memory". How does it continue?

- how can task A clean up and call back the kernel, "here, I'm done"?

- does the kernel suspend task B until task A answers? Sounds like a priority inversion to me.

- how does the kernel decide that A is non-cooperative?

For Windows 3.0, it's simple: you have only one execution stream, and if the system sends you a WM_COMPACTING, you can clean up what you want, and take the time you want. In addition, there are several pieces of memory the kernel knows how to restore (by reloading them from their .exe file).
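
A sketch of that Windows-era contract -- a window procedure that discards recreatable data when WM_COMPACTING arrives (WM_COMPACTING is the real message; FreeMyCaches() is a hypothetical stand-in):

/* Sketch only: free whatever is recreatable when the system reports
 * memory pressure via WM_COMPACTING. */
#include <windows.h>

extern void FreeMyCaches(void);   /* application-specific, hypothetical */

LRESULT CALLBACK MyWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_COMPACTING:
        /* System memory is low: discard anything we can recreate. */
        FreeMyCaches();
        return 0;
    default:
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }
}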

Stefan

Reply to
Stefan Reuther

Seriously: early DEC PDP computers used block-addressed tapes like that, called DECtape. More at

formatting link

--
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
       .      @       .
Reply to
Niklas Holsti

I know enough tasks which have periods where they don't handle expose events. Just use the standard Windows Explorer on a flaky network drive. I would assume during the same time they won't handle memory-release requests.

Let me check, we're in comp.arch.embedded. What kind of user is this, and what kind of applications does he run? :-) I'm just asking because most of the time, "our" products have a system designer who hopefully knows what he's doing.

The operating system research group at TU Dresden (where I studied until seven years ago) is building a real-time operating system based on an L4 micro-kernel. I don't know the current status, but their approach was to model the real-time properties of their hardware (harddisk, graphics card, etc.), and their applications. Thus, the application says "I need 1/4 screen space at 25 fps, 5 mbps +/- 1 mbps harddisk transfer, and 200 MIPS", and some admission control component decides whether that'll work or not. They didn't use worst-case numbers, but some mathematical trickery allowing them to model jitter.

Incidentally, L4's approach for virtual memory is similar to yours: tasks ask their pager for memory, which the pager gives them, but it has the right to take it away at any time *without asking*. This means, the task has to inform the pager *beforehand* what to do with that memory. The usual contract is, of course, "you can take it if you store it in a page file and give it back when I need it again". But you can implement other contracts. I don't know if anyone did that.
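
Roughly, that contract might be expressed like this (hypothetical names; not the actual L4 interface):

/* Hypothetical expression of the pager contract: memory handed out by
 * the pager can be revoked at any time, so the task registers, up
 * front, what should happen when that occurs. */
enum revoke_policy {
    REVOKE_PAGE_OUT,   /* "store it in a page file, give it back later" */
    REVOKE_DISCARD     /* "just drop it; I can recreate the contents"   */
};

struct pager_grant {
    void               *base;
    unsigned long       length;
    enum revoke_policy  policy;   /* agreed before the memory is used   */
};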

Stefan

Reply to
Stefan Reuther

Yes, though DECtape was a very different medium -- small reels, wider medium, etc. DEC also made byte (word) addressable disk drives. :-/

Half inch 9T tape *tended* to be used as character devices. I.e., the block device "feature" that I wrote wasn't useful for anything (besides the learning experience).

Actually, the guts of the character device driver (strategy routine) were the most interesting -- deciding whether to "read reverse" or "read forward", using the read-after-write facility present on the transport, etc. The toughest part of the project was finding documentation for the Pertec interface used on many of these older "drives" (formatter + transport) and determining what commands you could execute "in parallel" (e.g., "rewind" transport 1 while "skip space forward" transport 2, etc.)

Reply to
D Yuniskis
[attributions elided]

No. A real-time system *hopes* to produce a result -- of some time-valued function -- in some timeframe relative to a deadline. *But*, failing to meet that deadline *and* the subsequent "non-zero valuation period" just means the RT *task* missed its deadline. That doesn't mean the "system" is broken. E.g., if you miss an unrecoverable network packet, then whatever it was used for can't (typically) be accomplished. But, that doesn't mean that your application can't continue. If you drop a packet in a video stream, the image shows some sort of visual artifact... but, can resume and continue afterwards. The application doesn't have to terminate or crash.

The idea that it absolutely *must* work is an over simplification.

Correct. That's why you have a real-time system with deadlines and task/resource priorities. All you have to do is guarantee that the *most* important tasks are "guaranteed" to be able to execute. E.g., you can probably shutdown the MP3 player -- or, let it miss LOTS of consecutive deadlines -- if your reactor goes critical. You won't get any of those guarantees from a desktop (non-RTOS) OS.

Our telcos are sized for 10-20% usage (I have no idea of cell capacity). And, have rather loose tolerances on how quickly they *do* react (e.g., 2 *seconds* for a dialtone).

Since everything runs on the same scheduler (though potentially with different scheduling algorithms), all of my tasks are processed as if real-time. Deadlines and priorities differentiate between "batch" and "real-time" tasks.

But the decoding can be transformed into a softer RT task if you can exploit "extra resources" that might be available. I.e., the "value" of missing the deadline remains nonzero for a lot longer -- because you have a deeper cache of decoded audio to draw upon.

The same way it handles other asynchronous requests from the kernel: an exception handler. The kernel can make upcalls to any task at any time. "How do you SIGHUP a task?" "How do you tell a task that it has a 'divide by zero' error?" "How do you tell a task that you are unmapping a portion of its memory?" etc.

The exception handler processes the upcall from the kernel and does whatever is requested and "returns" the resources that it can dispose of. I.e., the "signaled" task, more often than not, is NOT "executing" when the request comes in -- the executing task is the one that is requesting/requiring the resource. So, the kernel needs to be able to "interrupt" the task and cause it to deal with the exception instead of whatever it was doing at the time.
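
The kernel here isn't POSIX, but the shape of the upcall is the same as an asynchronous signal handler. An illustrative sketch, using SIGHUP per the analogy above:

/* Illustrative only: register a handler that notes the kernel's
 * "shed resources" request; the real work is deferred to a safe point
 * in the task's main loop. */
#include <signal.h>

static volatile sig_atomic_t shed_requested;

static void on_shed_request(int sig)
{
    (void)sig;
    shed_requested = 1;
}

int install_shed_handler(void)
{
    struct sigaction sa;

    sa.sa_handler = on_shed_request;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    return sigaction(SIGHUP, &sa, NULL);
}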

Any time you request a service from the kernel, you have to be prepared for it to NOT succeed. Or, to take a *bounded* amount of time *to* succeed. So, the service call that task B has previously made just pends while this happens.

Remember, there isn't anyplace to *preserve* resources (no backing store). So, discarding resources -- especially if you design with this PROBABILITY in mind -- is usually very efficient.

Note that most of these resource *requests* happen when a new task is *spawned*. So, gathering up the required resources just looks like a slower startup. If a task needs to be able to start *quickly*, it arranges to start a proxy *before* it needs to be started.

In the current (original) scheme, if A hasn't given up the requested resources, the kernel considers it a candidate for "prejudicial action". Recall the kernel knows the environment in which the task was created and knows what *additional* resources it has requested since then. The task could have been (poorly) coded so that it now *can't* release the memory that it has acquired (while I am talking about memory, note that the same arguments apply to other resources). In any case, if the kernel has not been able to get the resources that it *needs* (i.e. task B has been determined to be "more important" than this task and possibly others and, as such, *needs* the memory more than these other tasks do -- "because the reactor is scrambling"), then it doesn't care what the offending/noncompliant task's reasons are... it has to get the memory from *somewhere* and this task just isn't important enough to be allowed to hold onto its resources!

I can do *some* of this -- and, in some cases, more. But, the details reside in the individual tasks -- the OS isn't omniscient. This is where you hope a task will cooperate -- because it is in the task's best interest to do so.

For example, if a task is currently "holding" results of a database query (*in* the database server's memory allocation), it can choose to "DROP" the result table and reissue the query later to recreate the "same" results.

Likewise, it can terminate itself (which releases everything) and restart later. E.g., if an application has "finished" but is "hanging around" in case the user wants to use it for something else (e.g., imagine MP3 player finishing playing song #1... if there are resources to spare, why not let it "hang around" in the expectation that the user will want it to play song #2? If the resources are, instead, needed by some other task, the MP3 player sitting idle can always decide to exit() belatedly)

Reply to
D Yuniskis

If you want to skip 3 minutes of MP3 you don't need to fully decode the MP3 data. It suffices to decode just the frame header (4 bytes) to be able to determine the start of the next frame. Based on the frame header you can also determine the amount of time it covers. Skipping 3 minutes of MP3 data isn't processor intensive at all; even a low end processor can do it very quickly.
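
A simplified sketch of that frame walk (MPEG-1 Layer III only; no CRC handling or free-format bitrates):

/* Read the 4-byte header, compute the frame's byte length and
 * duration, and hop to the next header.  Simplified: MPEG-1 Layer III
 * only, no error recovery. */
#include <stdint.h>

static const int kbps[16] = { 0, 32, 40, 48, 56, 64, 80, 96,
                              112, 128, 160, 192, 224, 256, 320, 0 };
static const int srate[4] = { 44100, 48000, 32000, 0 };

/* Returns frame length in bytes (0 if the header is invalid) and,
 * via *ms, the frame's duration in milliseconds. */
int mp3_frame_info(const uint8_t h[4], int *ms)
{
    int br, sr, pad;

    if (h[0] != 0xFF || (h[1] & 0xFE) != 0xFA)   /* sync + MPEG-1 L3 */
        return 0;
    br  = kbps[h[2] >> 4];
    sr  = srate[(h[2] >> 2) & 3];
    pad = (h[2] >> 1) & 1;
    if (br == 0 || sr == 0)
        return 0;
    *ms = 1152 * 1000 / sr;                  /* 1152 samples per frame */
    return 144 * br * 1000 / sr + pad;       /* standard L3 frame size */
}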

Reply to
Dombo

So your application will always run at that level of performance. Even if the processor is spending 98% of its time in an idle loop, etc. And, whatever memory is unused will just be "wasted money". From the user's standpoint at that instant, he *overbought* his hardware.

Ah, so you think it is *OK* for the kernel to make these decisions...

I am not concerned with making the software easier. I am concerned with making the user experience *better* (adding value there).

Otherwise, *every* product on the market would run Linux or some other "free" software base and require a QWERTY keyboard to boot, configure, run, etc. What could be *simpler* (from the software developer's point of view??)

I believe in spending resources (processor power, memory capacity, programmer know-how) to improve the user experience, *not* to make it easier/cheaper for the developer/manufacturer/stockholder.

But you *can't* blindly trust third party apps! That was the point (well, at least a major point) of my problem with my *current* implementation. There is a strong incentive for apps to *misbehave* as it makes their development efforts easier and makes a given quality of code "look better" (roughly speaking).

And, this behavior is *enabled* by the "well behaved" apps!

Sorry, that was a different example in an earlier post.

The time and memory are already sitting there OTHERWISE WASTED. That is the point of my scheme -- to let tasks acquire "unused" resources. This is only possible if you have a mechanism for *relinquishing* those resources ON DEMAND.

So what? Would you rather have had the processor sit in an idle loop? That way you are guaranteed not to throw anything away (because you have nothing OF VALUE to throw away!)

If you are a chess playing automaton, WHILE WAITING FOR THE OPPONENT TO MOVE, do you sit twiddling your thumbs? Or, do you use those "wasted resources" (CPU time) to "think ahead" and examine contingencies?

"OK, let me assume the player makes *this* move... then what would be my best *countermove*? What if he makes this *other* move, instead? Then what should I do? ..."

*When* the user finally makes his move, you *discard* all that work that you did *anticipating* his actual move [I've greatly simplified the way game playing algorithms work. The point is, they don't "do nothing" while waiting for their opponent's move! You *get* nothing with that strategy whereas if you do *something*, you stand a chance of *sometimes* improving your performance -- especially if you first examine the moves your opponent is *likely* to make (based on the analysis you conducted for the *last* move)]
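
In code, that "pondering" loop is essentially the following (all functions here are hypothetical placeholders):

/* Minimal sketch of pondering: while waiting for the opponent, keep
 * searching the replies he is most likely to make, and throw the work
 * away if the guess was wrong -- the CPU time was idle anyway. */
struct move { int from, to; };          /* placeholder representation */
struct position;                        /* opaque to this sketch      */

extern int         opponent_has_moved(void);
extern struct move predict_opponent_move(const struct position *);
extern void        search_deeper(struct position *, struct move guess);

void ponder(struct position *pos)
{
    while (!opponent_has_moved())
        search_deeper(pos, predict_opponent_move(pos));
}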

"Usually" is not "always". You can't know where the ~150th frame "from here" begins just by "doing some math". You have to examine all of the headers between here and there. I.e., you need to have stored 3 minutes worth of MP3 "input" data as well (which is a consequence of having decoded

3 minutes of output). [BTW, just how big *is* your input buffer? Since you won't let it *grow* in the presence of spare resources... Do you have to *wait* until the user tells you "skip forward 3 minutes" before you *fetch* the MP3 data (so now you have a network delay as well... all because you didn't want to use the memory that you had sitting there, IDLE)]

Ah, kick the problem upstream to whomever is providing that audio stream! Yes, that will surely make the *software* easier! :>

And what are you *gaining* by doing that? "My product is SO much better than the competition because it has 93% of its memory idle whereas the competitor only has 92% idle"??

How do you "guarantee" no missing samples? If I am streaming data off a wireless connection, can you be *sure* (remember, we are talking about GUARANTEES) the connection will be "up" 3 minutes from now? What if there is some interference (someone turned on the microwave oven or walked out of range)? What if some HIGHER PRIORITY task is using that resource *or* the processor when you *hoped* to use it (e.g., "reactor scramble" example)?

I.e., *if* you have SPARE RESOURCES, why not get the data *now* and, since you have the spare *time* and *memory*, why not get started on decoding it, as well? You KNOW you have spare resources, *now*. You don't know what you will have 5 seconds from now. By doing the work when you have "surplus capacity", you shift some of that surplus capacity into the future (e.g., 5 seconds from now, if some other task NEEDS the CPU, *your* task can safely be blocked/deferred without impacting overall performance)

You're changing the inherent aspects of a task ("chore") simply because you don't have the facility to let it be designed as it should. That's like disabling interrupts because you didn't want to design a system that supported mutual exclusion semaphores.

You can't guarantee it is quiescent if you don't take exclusive use of it. You can implement staggered locks so that you can check "successive parts" but the part you are checking needs to be under your exclusive control.

Great! Give it LOTS of "currency". It will *always* be able to buy the resources it needs. Wow, that was a simple fix! And, one the user can easily relate to!

"I'm sorry, this task was terminated because the system ran out of resources and BASED ON YOUR PREFERENCES IN EFFECT AT THE TIME, it was decided that it was the least important task to preserve. If you wish to change this behavior, go to and increase the 'preference' (relative to the other tasks that you typically use). For more information, see..."

Then you're not comparing apples to apples. I *don't* have a disk to swap to. I don't have unlimited mains power to draw on. And, I have a very tightly constrained consumer price target to meet. Tricks like this one let me offer more capability than the other constraints will otherwise tolerate. If your market will tolerate higher prices, larger physical sizes and increased power budgets, *great*! I wish everyone had such a resource rich environment to design in! ;-)

Exactly! ASSUMING that YOUR design environment dictates *MY* design constraints sure sounds like "generalization" to me! (but, that doesn't mean I think you're EVIL! :> )
Reply to
D Yuniskis

Yes. But, you need *all* of the frames between "here" and "there" to get, reliably, to a specific spot in the audio stream. (you can make an educated guess and hunt around for sync's and "looks like a header" but you can't be sure of exactly where you are in the time stream).

If you *have* decoded the next three minutes of audio, then you:

- *have* all of the frames between here and there

- *have* examined the string of headers along the way

- *and* have the actual "output" data, as well! (which, afterall, is what you are ultimately after!)

One can design an MP3 decoder that only needs ~12K (I think, worst case, you need less than 10 frames to be guaranteed to "have audio") for an input buffer.

Reply to
D Yuniskis
