Progress indicators

Hi,

I've asked this question many times over the years (in many places) and still haven't come up with a satisfactory answer.

What's the "best" way to convey to a user the "progress" made on completing a task?

It seems like most indicators convey "work done" -- where work is usually defined as bytes moved, etc.

Or, they are calibrated in completely bogus, nonlinear units (e.g., progress indicators that chug along merrily at a seemingly constant rate, then *jump* ahead... then *stall* -- even though the "process" is obviously continuing at the same "speed" as before, etc.).

Does it make sense to use different means for different tasks?

Or, do users tend to think of tasks in terms of the amount of *time* required ("Who cares if X bytes have been transferred and that represents 12.673% of the total... how much of this is *finished* vs. how much REMAINS??")

This isn't as trivial as it may seem on the surface.

The first real issue is "what do users EXPECT from progress indicators"?

- to see that the system is "still working" (on their task)

- to gauge how much has been accomplished?

- to see how much remains?

- to see how much *time* has expired?

- to see how much time (likely) remains? etc.

Once that is determined, I think it is a lot easier to sort out *how* to convey this to the user -- especially in those cases where the quality of the estimate varies over time.

I suspect we (?) would all agree that existing implementations usually leave a lot to be desired...

Thx,

--don

Reply to
D Yuniskis

There has been a bit of academic research on that topic. This paper is a good start:

formatting link

Cheers, Nils

Reply to
Nils

No progress indicator ever devised has been as aesthetically pleasing, functionally perfect and culturally significant as the original Microsoft-app Macintosh installer from the late 1980s featuring the dinosaur eating a man. "When the dinosaur finishes eating the man, the program installation will be complete!"

Reply to
larwe

That may very well be so because there _is_ no such thing as a satisfactory answer to that question.

You'll know the answer to that once you've defined "progress" in a usable manner, i.e. one that is quantitative, objective and measurable.

That's actually far from "obvious". Their makers may just have a concept of "progress" that you didn't grasp yet. Or there may be variation in how long individual work packets actually take, depending on:

  • parts that were configured to be skipped will jump the progress meter
  • parts needing unexpectedly slow resources will make it crawl
  • waiting for answers from far away can stall progress virtually forever

That would be the job of an activity indicator (rotating hourglass or whatever), not a progress indicator.

Usually only the ratio of accomplished/total is of interest. The exceptions from that rule are:

  • show accomplished steps if each step involves interaction with the user (think: old-time program installation off a 20-high stack of floppy disks)
  • show total or remaining steps if the total can move around (new work being discovered, or steps found that can be skipped, as the process proceeds).
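For concreteness, here is a minimal sketch of the second case (my own illustration, not anything from this thread; the struct and function names are invented): a step counter whose total is allowed to move as new work is discovered, so the display shows "done of total" rather than a percentage that would have to jump backwards.

/* Sketch only: step-based progress where the total can grow. */
#include <stdio.h>

struct step_progress {
    unsigned done;      /* steps completed so far            */
    unsigned total;     /* current best guess at total steps */
};

static void steps_show(const struct step_progress *p)
{
    printf("\rstep %u of %u", p->done, p->total);
    fflush(stdout);
}

static void steps_done_one(struct step_progress *p)
{
    p->done++;
    steps_show(p);
}

static void steps_discover(struct step_progress *p, unsigned extra)
{
    p->total += extra;          /* new work found mid-process */
    steps_show(p);
}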

No.

Whenever that time is above the get-a-cup-of-coffee barrier, absolutely.

Reply to
Hans-Bernhard Bröker

Thanks, Nils, this was an interesting read -- worth adding to my literature collection.

But, it seemed like it (the "experiment"/study) was trying to come up with a way to "least annoy" the user. I'm trying to figure out how to *best INFORM* the user.

I've prototyped different strategies and can't say that I find any of them appealing. The most interesting is the traditional progress bar but continuously adjusted to truly reflect the percentage of (estimated) work that is complete. The visual characteristics, however, are highly disturbing (especially to Joe Average User) despite being the most informative.

This leads me to believe you have to abandon a graphic/relative presentation if you don't want to annoy/alarm the user. (else you end up with the same crappy behavior that current progress indicators exhibit).

Reply to
D Yuniskis

I disagree. At least in some instances, there are *perfectly* satisfactory approaches. E.g., copying a file between two local media can be "monitored" in a way that most users would tolerate, even in those cases where the processor load varies substantially during the activity. I think such tasks are characterized by an easy measure of the effort required to complete the task (e.g., bytes moved), an easily updated estimate of the rate at which the activity is currently progressing (e.g., as a function of CPU load average and/or disk activity) *AND* the implicit knowledge that the expended effort is a monotonically increasing fraction of the total effort required. (Contrast this with copying the contents of a ...)
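As a rough illustration of that combination (effort measured in bytes, an easily updated rate, a monotonically increasing fraction), here is a hedged sketch -- all names invented, not anyone's actual implementation -- that smooths the observed transfer rate and recomputes the time remaining from it:

/* Sketch: bytes moved as the effort measure, rate smoothed with an
 * exponentially weighted moving average, ETA recomputed from it. */
#include <stdint.h>

struct copy_progress {
    uint64_t bytes_total;
    uint64_t bytes_done;
    double   rate_bps;     /* smoothed bytes/second */
};

/* Call once per chunk copied; dt_s is the wall time the chunk took. */
static void copy_update(struct copy_progress *p, uint64_t chunk, double dt_s)
{
    const double alpha = 0.2;                 /* smoothing factor */
    double sample = (dt_s > 0.0) ? (double)chunk / dt_s : p->rate_bps;

    p->bytes_done += chunk;
    p->rate_bps = (p->rate_bps == 0.0)
                ? sample
                : alpha * sample + (1.0 - alpha) * p->rate_bps;
}

/* Seconds remaining under the current (smoothed) rate; -1 if unknown. */
static double copy_eta_s(const struct copy_progress *p)
{
    uint64_t left = p->bytes_total - p->bytes_done;
    return (p->rate_bps > 0.0) ? (double)left / p->rate_bps : -1.0;
}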

*I* consider progress to be a measure of the time spent vs. the total time required. When it comes to software, I think that is the only thing users perceive. I.e., it's not like shoveling snow off your driveway, where you can see how much snow has been removed and decide to "call it quits" prematurely (i.e., "That's a large enough portion for me to get my car out; I'll do the rest later").

It is obvious if you can perceive that the "computer" (using the term loosely as this is c.a.e) is still "working"... yet "progress" doesn't SEEM to be happening. E.g., a user watching a progress indicator that reflects the progress in sorting a VERY LARGE list... the work required may vary dynamically depending on where the algorithm is in the list (and the nature of the algorithm as well as how it is instrumented). So, it might generate the first two entries quickly -- yet have to do more work to generate the *next* two entries, etc.

If you are *skipping* something, then you should have known that and reflected that cost saving in your initial task estimation. Just because that might be "hard" to do, doesn't mean you shouldn't do it. (I tend to err in favor of the user in my designs)

But, you can update your estimate at that time to reflect this. My question then addresses: how do you convey to the user this new estimate and your progress thus far *relative* to this new estimate?

E.g., if you are copying bytes over an FTP connection, you *know* (in most cases) how many bytes need to be copied. If you elect to express progress as a graph of "bytes copied" vs. "total bytes", then your progress indicator is just "programmer friendly" and not *user* friendly. I.e., the user doesn't care if you have transferred 894323 out of 896075 bytes. He wants to know when you will be *finished* (since, in most cases, having 99.97345% of the file is effectively the same as having 0% of the file).

This is the nature of my question.

Yes, so how do you convey that to the user? Just stall your progress bar at X%? That doesn't tell the user anything other than (at most) "progress has stalled".

The progress indicator can serve the same purpose. Too often "activity indicators" are just there to entertain the user and don't actually reflect what is happening in the code. E.g., set cursor to "hourglass", do the task, set cursor back to "pointer".

If the animation of the cursor (in your example) *is* tied to the code's progress, you have to consider how that animation can be affected by other things happening in the machine such that it is not updated properly (i.e., if it isn't being updated properly, then it isn't reflecting the actual progress).

I claim that all the user really cares about is an estimate (which need not be in absolute terms) of the time remaining. If you restrict the display to something as one-dimensional as a typical "progress bar", then the "completed" portion of the bar should reflect the time the user has *invested* and the "remaining" portion should reflect the estimate of the time left.

(I see no other way to convey all of this information in a single metric -- we can discuss multiple indicators, too...)

This leads to very counterintuitive displays. :<
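For what it's worth, here is a minimal sketch of that time-based bar (my own illustration, names invented): the filled fraction is elapsed / (elapsed + estimated remaining), so whenever the estimate of the remaining time grows, the bar visibly retreats -- which is exactly the counterintuitive display being complained about.

/* Sketch: bar filled in proportion to time invested vs. current estimate. */
#include <stdio.h>

static void draw_time_bar(double elapsed_s, double est_remaining_s, int width)
{
    double total = elapsed_s + est_remaining_s;
    double frac  = (total > 0.0) ? elapsed_s / total : 0.0;
    int fill = (int)(frac * width + 0.5);

    putchar('[');
    for (int i = 0; i < width; i++)
        putchar(i < fill ? '#' : ' ');
    printf("] %3.0f%% of estimated time\r", frac * 100.0);
    fflush(stdout);
}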

Reply to
D Yuniskis

Too much here, too much to share/exchange to find common ground, and too much I'm not entirely sure of anyway. Instead, your comments 'rang a small bell' in my mind and I'll try and provide a small part of that reverberation to see if it stimulates anything.

You talked about 'indicators' and that reminded me of "tasks" on some gantt chart and my own learning process (always ongoing, of course) with respect to communicating with clients about projects. But the real issue is more than mere communication and I wanted to highlight that fact. It's also about controlling yourself.

Unless a project is something I've done completely before, or sufficiently similar to something I've done completely before, there will be potentially significant elements of uncertainty in assigning time. It is, in fact, these new facets of any good project that make it interesting to do. If I didn't learn anything doing a project, then it's probably less enjoyable than projects where I'm faced with a new element or two to puzzle out. That's part of what makes me like doing a project. On the other hand, if there are too many of these, or if only a few but ones requiring a very significant commitment to new learning, then the client should probably select someone else. And if I'm not entirely sure myself, I should inform the client as soon as I know that it is too much, too fast and that they should find someone else, pronto.

It's pretty much given that if I have done a project twice before, the third time and beyond I pretty much know what it will take and about how long. In those cases, the only interesting thing occurs when a client wants it in half the time in which I think I can do it. When I'm pushing hard on a difficult schedule to keep, it does get somewhat "interesting." Other than that, projects with opportunities for solving new problems or learning some new areas are also interesting and, similarly, also difficult to nail down on effort/time because of those very same uncertainties -- whether they be because I'm compressing schedules and just am not sure by how much, or because these novelties are difficult to nail down until __after__ I've done them once or twice.

I try to follow a few guides -- though it's always a battle to do as well as I'd like to.

One is to haul forward to the earliest possible moment anything that is truly a deal-breaker issue. If there is something central to the project for which I don't know the answers and where that uncertainty is relatively large, it's important to pull those to the beginning of the project and find out what the lay of the land is. It may very well be the case that the problem dissolves quickly and becomes known enough that I can reset it back into the project plan where I'd rather have it. So I nail that down and put it to bed. Or else, at least, the client finds out early on that either I am not the right person or else that the issue is large enough that we need to make a decision about it. Pulling these things to the front gives the client the better chance to make important decisions as early as possible and before committing undue significant resources into it. They can decide to get out early, if the ante gets too rich for their blood.

Another is that I kind of invert the meaning of milestones. There was a time when, as an employee, I provided the best I was able for an 8-hour shift and then went home. If a schedule was slipping, well... it was slipping. We simply had to re-adjust the charts to reflect the developing reality. I put in my time and that was that. (If they wanted to pay for more, that was their business, I suppose.) As a consultant, though, it's not like that at all. The milestones are rock hard. They don't move. I do the best I can in setting them and allowing 2 standard deviations of risked time as a margin of error in setting how long they will take. But once I set them down, they don't go anywhere unless I have absolutely no choice in the matter. What I do instead is use the progress towards the next milestone to control my time. If I'm running early, I might supply fewer hours per day or week, shorten the project schedule, or roll some hours forward as padding for some of those more uncertain areas. If I'm running late, I work harder, later, longer. Instead of 30-40 hours/week, I might work 80. And I push the hardest as soon as I'm seeing the problem, supplying as many hours as I can manage right away. And if the whole thing still doesn't crack by the time the milestone presents itself, then and only then do I slip it. But by that time, I'm sinking in everything I can get away with, already.

In short, I use milestones to control my applied time. I don't assume they are perfectly (or even well) set, nor do I allow myself to imagine that setting schedules is simply something I'm not good at and need to get better at. Yes, I'm always trying to get better at that. But when a project is in hand, that isn't the time. I then use the milestones to control my personal schedule and will move everything else out of the way, if needed.

The milestone controls me.

The rest seems to be vapor to me. A project is, ultimately, either satisfactorily done or not. All I can do is focus on putting one foot in front of another towards that end. Others who need to worry about larger team project issues need only know one thing from me -- that I will stick by my own established objectives, come hell or high water, and let them slip only when there is no other option. And even then I will continue to struggle against shortening up later items to make the time back and get back onto the milestones I've got ahead of me. So I give them the best they can hope for in their planning -- milestones they can bank on unless no human level of effort on my part could have achieved it.

I don't think there is a holy grail here. You can only control yourself. You cannot control physics, you cannot control others, you cannot control accidents, you cannot say what the right solution to new problems may finally resolve into, etc. But you can control yourself. It's the one thing you have that you do control better than anything else. So I let milestones impact me at that point. In the end, that is the best way to also communicate with clients. They get something they can basically bank on. And that allows them to do better in other aspects they are concerned about and greatly reduces the need for complex, hard-to-manage-and-communicate, real-time charting of progress. Keeps things real simple and simple is good.

Jon

Reply to
Jon Kirwan

Is the user annoyed by the progress indicator, or by the slowness of the process itself?

Reducing bloat and improving the design will decrease the slowness, which will also reduce the annoyance (with the indicator).

Some absolute time indicator would be most useful, since the user can then decide if he/she will remain seated and wait for the result or fetch a cup of coffee or go outside to smoke a cigarette.

Paul

Reply to
Paul Keinanen

I am not speaking in "specifics" but, rather, "generalities". My question is intended to address *how* information is conveyed to the user and *what* the user expects from that information.

E.g., I suspect he cares very little about how many "bytes are transferred". Or, how many database records have been processed. I'm *guessing* what he really wants to know is:

- is this thing still working on my task?

- when will it be done?

The former question gives him reassurance that the process hasn't crashed or become *stuck* for some reason (it seems many devices provide very little feedback as to what they are *actually* doing at any given time).

The latter allows him to decide when he can expect to get on with what he *wants* to do (since I doubt "waiting" is something that he *wants* to do! :> )

Remember, this is c.a.e. The "thing" that he might be waiting for could very well be something he is *carrying* with him. (I mention this to make sure we don't focus on "desktop PC"-type applications of progress indicators)

Reply to
D Yuniskis

formatting link

Reply to
Andy Sinclair

I'd be happy as a clam if I could just see an hourglass when the machine is compute-bound. Virtually no Windows apps give you an accurate indication and it's one of my major gripes.

I end up opening Task Manager > Performance while running most of my CAD programs just to see WTF is going on....

Reply to
Jim Stewart

I think that Windows offloads (?) things like animated hourglasses to the "OS" (?). E.g., I suspect the application can HALT and the hourglass will keep chugging along (Disclaimer: I don't write Windows apps so I have no idea about the API).

But, that just tells you "thinking". It doesn't tell you how *much* thinking it has to do.

(Again, this is c.a.e so try to think of these indicators as they apply to "devices" and not "desktop apps". What would you do if your cell phone put up a progress bar that *stalled* randomly, etc.?)

I use "Process Explorer" to try to get a finer-grained view of what's happenning. And top(1) on my Eunices.

Reply to
D Yuniskis

In several systems I have done I have used the usual "-", "\", "|" and "/" characters spinning on the spot for tasks that would take some time. The selection of the next character in the set was keyed to the end of a specific repetitive portion of the task. This is a real indication of activity and is suitable for a user interface on a small embedded system.
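A sketch of one way to code that (not Paul's actual implementation, just an illustration): advance one glyph each time a repetitive unit of the task completes.

/* Sketch: call spinner_tick() at the end of each repetitive portion. */
#include <stdio.h>

static void spinner_tick(void)
{
    static const char glyphs[] = "-\\|/";
    static unsigned i;

    fputc(glyphs[i], stdout);
    fputc('\b', stdout);       /* back up so the next glyph overwrites */
    fflush(stdout);
    i = (i + 1) % (sizeof glyphs - 1);
}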

--
********************************************************************
Paul E. Bennett...............
Forth based HIDECS Consultancy
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-510979
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
Reply to
Paul E Bennett

We used that technique a lot on terminal based systems and Sun systems still use it during initial boot. You can get a bargraph percentage complete effect the same way with a bit of ingenuity.
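One possible shape of that "bit of ingenuity" (a sketch only, invented names, assuming a console that can merely append characters, as during early boot): print a bracket once, then emit one '#' per fixed slice of the work completed.

/* Sketch: call bar_begin() once, then bar_update() as work completes. */
#include <stdio.h>

#define BAR_COLS 50            /* total '#' characters for 100% */

static unsigned bar_printed;   /* columns emitted so far */

static void bar_begin(void)
{
    fputs("[", stdout);
    fflush(stdout);
    bar_printed = 0;
}

static void bar_update(unsigned done, unsigned total)
{
    unsigned want = total
        ? (unsigned)((unsigned long long)done * BAR_COLS / total)
        : 0;

    while (bar_printed < want && bar_printed < BAR_COLS) {
        fputc('#', stdout);
        bar_printed++;
    }
    if (bar_printed == BAR_COLS) {
        fputs("]\n", stdout);
        bar_printed++;         /* don't close the bracket twice */
    }
    fflush(stdout);
}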

Seems to me that the most important thing is always to communicate. Never leave the screen static, so that the user thinks the system has crashed. On modern systems, PC software installs have this stuff down to a fine art...

Regards,

Chris

Reply to
ChrisQ

Mine does. When sending an SMS from iPhone, there's a progress bar that (in my crappy coverage area) always slows down as it reaches the end because it's calibrated for a "typical" time to deliver the message to the tower.

When sending an SMS from earlier (and maybe current) Nokia phones, you just get a barberpole - not quite a progress bar!

Reply to
larwe

Your question didn't really strike me as one searching for special-case answers. It was almost obsessively generic.

That's mostly because processor load has nothing to do with it, of course. Wait until some other activity begins hogging one of the media involved, and see if that indicator can still be considered perfect.

That's not a usable definition, because it's not measurable, at least not within generally affordable effort.

All kinds of factors influence the estimated completion time. Some of those factors will change as the process chugs along, others would take longer to measure than the process itself does.

That would only work if someone managed to create and fine-tune an accurate model of the whole process' time use that takes into account all but the most insignificant factors affecting it. That's generally impossible.

So what you get instead is the crude old "let's assume that the ratio of work completed per second achieved this far into the process will stay constant until completion" technique. And that fails spectacularly if the process consists of more than one type of work, so you can't make do with a single factor.
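For reference, that crude technique boils down to a one-liner (a sketch; "work" here is whatever single unit the estimator happens to count):

/* Sketch: assume the average rate observed so far holds until the end. */
static double naive_eta_s(double elapsed_s, double work_done, double work_total)
{
    return (work_done > 0.0)
         ? elapsed_s * (work_total - work_done) / work_done
         : -1.0;    /* unknown until some work has completed */
}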

No, you can't, because generally you don't _know_ why a given part of the work takes as long as it does. So you can't update the estimate meaningfully with any reliability.

Not usefully.

Reply to
Hans-Bernhard Bröker

Yes. But, hard to ensure each "step" is roughly the same size (temporally). Of course, for "activity" this isn't as important. But, for (cumulative) "progress" it becomes more so.

I think one of the problems with (existing) progress indicators is that the interface to the indicator has no real global knowledge of the entire process. Like Paul's implementation, it just shows "progress" in some general sense and something else (hopefully) "counts" these events and uses them to update the overall indicator.

IIRC, some of the Sun installers spawn a task that just "runs an (idiot) indicator" without any ties to the "worker task". I.e., you can "kill -9" the worker task and the indicator will sit there merrily "indicating" (activity). I'd have to dig through my journals to see which installer it was...

But PC installers subscribe to the same sort of philosophy that the paper Nils cited seems to address. I.e., entertain the user so he isn't (as) annoyed with the slow progress. And, they are notorious for their nonlinearity. I.e., take away all of the accompanying graphics and other distractions and you don't really have any *useful* information on actual progress until you are actually *done*.

Reply to
D Yuniskis

The "generic" aspect of the question has to do with identifying those things that the user considers "useful information" upon which to base a progress indicator's design.

E.g., I contend that the user is *only* interested in "time" (in the generic answer). He cares not how many bytes have been transferred, how many records have been sorted, how far an articulated mechanism has traveled, etc. Sure, it might be interesting to some users, but most users are only looking at that information as an analog to getting real information about the related *time*.

That depends on the characteristics of the two media involved! And, what demands the "other activity" places on the processor -- as well as how the OS schedules those respective activities.

E.g., try copying a file from A to B (on some medium being EXCLUSIVELY used by that copy task) while you are trying to encode an MPEG video. I.e., the MPEG task is typically CPU bound (not I/O bound), yet its abuse of the processor spills over into the run time required for the copy task.

However, the copy task can easily update its time estimate to reflect this "processor unavailability".

If you start thinking of the sorts of things that an *embedded* device is likely to be called upon to do, you'll find these "desktop" analogies poor examples.

Even in that case, the copy task could come up with a more realistic (accurate) indication. If it *doesn't*, it is because there has been very little effort placed on conveying *accurate* and *useful* information to the user (e.g., clinging to "bytes moved" as a metric instead of "time spent")

Why not? I can measure time to all sorts of precision for very little cost. :>

The problem is folks haven't put the effort into quantifying how their code works. It's an afterthought instead of something that is planned from the beginning.

Also, remember that this *is* c.a.e and you have a "device" that does very few *particular* things and could *choose* to track how it has done those things in the past.

Returning to the file copy example, that task should be able, given simply the *size* of the file to be copied, to come up with a bounded estimate of the time required *before* it even open()'s the file. In the simplest case, it could track "average transfer rate" every time it is invoked and note the minimum and maximum rates encountered (presumably, the maximum would occur when data was nicely laid out on the medium while the minimum rate would occur for heavily fragmented files in which lots of seeks were necessary). The *first* estimate that the task displays to the user should have pretty good accuracy even before the task begins. (I.e., the user should be able to decide *if* he wants to perform the copy just by examining the "time commitment" indicated.)
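A hedged sketch of that self-tuning idea (invented names, not an actual implementation): keep the slowest and fastest overall rates ever observed, and bound the estimate from the file size alone, before open():

/* Sketch: persistently tracked best/worst rates give an a-priori bound. */
#include <stdint.h>

struct rate_history {
    double min_bps;    /* worst rate ever seen (e.g., fragmented file) */
    double max_bps;    /* best rate ever seen (e.g., contiguous file)  */
};

/* Record the overall rate achieved by a completed copy. */
static void history_record(struct rate_history *h, uint64_t bytes, double secs)
{
    double bps;

    if (secs <= 0.0)
        return;
    bps = (double)bytes / secs;
    if (h->max_bps == 0.0 || bps > h->max_bps) h->max_bps = bps;
    if (h->min_bps == 0.0 || bps < h->min_bps) h->min_bps = bps;
}

/* Bounded a-priori estimate, usable before the file is even opened. */
static void history_estimate(const struct rate_history *h, uint64_t bytes,
                             double *best_s, double *worst_s)
{
    *best_s  = (h->max_bps > 0.0) ? (double)bytes / h->max_bps : -1.0;
    *worst_s = (h->min_bps > 0.0) ? (double)bytes / h->min_bps : -1.0;
}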

Again, there is no need for this. If you watch (yourself) every time you do "something", you should be able to quantify your performance. After all, you are going to be doing this sort of thing for the rest of your electronic life!

E.g., I *know* copying files off my thumb drive takes much longer on *this* machine than on my development machine -- simply because this machine only has USB1.1 ports. Surely "copy" could tune itself (with all the CPU resources available on these desktop boxes!) and give me better information than it does...

Why? "Impossible" is a cop-out. It might be something you aren't inspired to "solve" but, if that was your stated

*job*, I'm sure you would come up with a reliable way of doing it!

I've got a robotic actuator that has to move a high speed cutting tool around a large "cutting arena" following a path set forth by the operator (and unknown to the "machine" until that point of time).

The *application* is responsible for moving the actuator.

*It* decides how quickly the motors are operated. *It* decides how quickly the actuator can be accelerated to a given "linear" speed and how quickly it can decelerate or brake that actuator. It knows that a diagonal with a 3" displacement along one axis and 4" along the other represents *exactly* 5" of motion. Etc. It knows where along the path the actuator might have to return to its "staging area" to have the cutting head changed. And how long it takes *it* to change from cutting head #13 to #85. And then to return back to the place where the new cutting head must be employed.

And, it knows all of these things BEFORE IT EVEN STARTS WORKING ON THE PROBLEM!

Why can't it predict -- to a high degree of accuracy -- the total trip time for the actuator to execute this trajectory?
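As a simplified illustration of why it could (my own sketch, not the actual application): the time for one straight segment under a trapezoidal velocity profile with symmetric acceleration/deceleration is a closed-form expression; summing it over the programmed path, plus the known tool-change times, gives the total trip time before the first motor turns.

/* Sketch: segment time under a trapezoidal (or triangular) profile. */
#include <math.h>

static double segment_time_s(double dist, double v_max, double accel)
{
    double d_ramp = v_max * v_max / accel;   /* distance spent ramping up + down */

    if (dist >= d_ramp)                      /* reaches cruise speed */
        return dist / v_max + v_max / accel;
    else                                     /* never reaches v_max: triangular */
        return 2.0 * sqrt(dist / accel);
}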

But that's because you were lazy and attempted to model all work with a single factor. That's like deciding to use addition for all arithmetic operations -- even when it is NOT the operation required! Then, when someone complains that the result of "14 / 9" was incorrect (23?), you just shrugged your shoulders.

Because using addition to model division is a foolish design choice. If you were *tasked* with quantifying the time for a particular activity, you would undoubtedly NOT opt for a "one size (factor) fits all" model.

Why don't you? Because you *chose* not to make this information available to "clients" in your application. It wasn't important to the "problem" (subproblem) you were trying to address. And, when you tried to squeeze in a progress indicator AFTER THE FACT, it was way too much work to reinstrument the software to provide the information that you would *need*.

E.g., I implement all of my drivers as processes. As such, they continue to "exist" and "operate" even after the open/read/write/close has returned to its caller. As a result, those drivers can provide more information to clients and prospective clients.

So, for example, I can "ask" my audio (out) driver what its backlog is (i.e., how much data is queued waiting to be "played") or what its actual throughput is *and* what it sees *my* "writing rate" to be. Based on those metrics, I can elect to deepen buffers to *guarantee* that the buffers don't underrun, etc.

By the same token, I can ask a "disk" driver what sort of backlog of scheduled activities (deferred writes, etc.) it has queued and, from that, determine how long it is likely to take for this set of bytes to get moved onto the actual media (I wouldn't even have to make this calculation myself; I could just ask the driver to do that prediction *for* me)
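A sketch of the sort of query such a driver could answer (the API here is invented for illustration, not the poster's actual system):

/* Sketch: metrics a driver-as-process could expose to its clients. */
#include <stdint.h>

struct io_stats {
    uint64_t backlog_bytes;    /* queued but not yet on the medium       */
    double   drain_bps;        /* rate at which the queue is draining    */
    double   client_write_bps; /* rate the driver sees from this client  */
};

/* Imagined driver entry point; a real system might use a message,
 * an ioctl(), or a shared-memory snapshot instead. */
int driver_get_stats(int handle, struct io_stats *out);

/* How long until everything currently queued reaches the medium. */
static double est_flush_time_s(const struct io_stats *s)
{
    return (s->drain_bps > 0.0)
         ? (double)s->backlog_bytes / s->drain_bps
         : -1.0;
}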

Because having these metrics available helps me ensure I meet all of my deadlines. They also can help me provide better estimates to the user.

You don't have to go to this extreme to get the same sort of useful data. You just need to decide that it is important to you (and your users!)

You're assuming you know what the indicator is and the nature of the task itself. E.g., Apple (?) uses a barberpole progress bar on some products. The length of the bar reflects the progress made. The fact that it is still "rotating" can indicate that there is still activity occurring.

MS uses a "segmented" progress bar. This allows the user to see small changes in the length of that bar more easily. I.e., a new segment "suddenly" appears. Those segments could flash to indicate activity, etc.

I don't consider any of these *good* choices. But, they clearly show that *one* indicator can serve multiple purposes.

Note how many FTP clients will present progress to the user in multiple forms:

- bytes transferred (out of total)

- elapsed time / time remaining

- % complete

- instantaneous transfer rate

Here, the approach is to provide lots of information instead of *good* information. I.e., it relies on the user to notice the transfer is stalled, consult the "work (byte) remaining" to determine how close to finished the process is, and to project their own "likely" time of completion.
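All of those displays derive from the same three numbers (bytes done, bytes total, elapsed time); a small sketch, with invented names, of computing them in one place:

/* Sketch: the usual FTP-client metrics from three inputs. */
#include <stdio.h>
#include <stdint.h>

static void ftp_style_report(uint64_t done, uint64_t total, double elapsed_s)
{
    double rate    = (elapsed_s > 0.0) ? (double)done / elapsed_s : 0.0;
    double percent = (total > 0) ? 100.0 * (double)done / (double)total : 0.0;
    double eta_s   = (rate > 0.0) ? (double)(total - done) / rate : -1.0;  /* -1 = unknown */

    printf("%llu/%llu bytes  %5.1f%%  %.0f B/s  elapsed %.0fs  remaining %.0fs\n",
           (unsigned long long)done, (unsigned long long)total,
           percent, rate, elapsed_s, eta_s);
}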

Obviously, programs realize users are interested in their progress. But, they don't seem to have invested much serious effort in providing *good* answers. "3 + 5 = 6... more or less"

Reply to
D Yuniskis

Naw, that was the clothes dryer -- your laundry is done! ;-)

Hmmm... as applies to my question, who is "yourself"? The developer sorting out how to do things so that the user can best be informed of its progress? Or, the *user* disciplining himself as to what to expect from the device? [given this uncertainty, my comments that follow may appear to be coming from two different directions :< ]

Of course! The first time you do something, it is *fun* (i.e., an adventure, new experiences, etc.).

The second time it is *interesting* (you can call upon your initial experience and "fix" what you learned from that approach).

The third time approaches "perfection" (not really, but you are far enough up on the curve that it is rapidly becoming asymptotic) and it becomes more of "something to get done"...

Speaking in terms of my *question*, though, I have a couple of simple mantras for user interfaces:

- don't ask the user questions

- never forget what the user tells you

- think intelligently about your past experiences and how you can learn to make the *user's* experience better (here, "your experiences" are the experiences of the *software* at run time) (there are more but these are the "biggies")

Yup, as above.

Projecting this comment onto the progress indicator issue, I think that most "tasks" can easily "learn" their own time characteristics (again, I consider time to be the bottom line on user interaction). After all, that is *all* that task will do from now to the time it is "erased".

*Or*, when something that you rely upon proves to be unreliable! E.g., when your emulator's power supply goes south and you are faced with the task of debugging without the emulator (possible but it changes your productivity) or spending unplanned time troubleshooting the power supply. :>

While I say this tongue-in-cheek, it brings up another good point wrt progress indicators and tasks tracking their own "progress rates"...

I recently was wiping some hard drives (prior to disposal). On one machine, the "write rate" was abysmal -- only about 5MB/s. I'm sure that this figure was accurate. And, therefore, the estimated completion time was *also* accurate. But, the actual computed rate was just not what I would expect from the machine (2.6G Athlon w/ ATA100). Instead of simply accepting this *accurate* progress indicator, I realized there had to be something wrong with the machine to give such terrible performance!

I.e., a task that knows what its performance *should* be can provide the system and/or user with insights as to when things aren't working right -- though being able to diagnose what the actual problem is may be considerably more difficult.

Learn to live without sleep! :>

Yes. Note the article that Nils cited makes similar suggestions.

Exactly. I can recall the first time I "got clever" and tried to copy a disk's contents to another machine over the network. At the time, I was running a 10Base2 network and the drive was only 4G. Previously, I had been *tickled* with how easy and fast it was to move things between machines -- compared to SneakerNet. Suddenly, the effect of scale was staring me in the face -- a 4G "file" takes quite a while to transfer at ~1MB/sec! Seeing the initial time estimate (which *I* could have done on the back of a napkin had I thought of doing so!) allowed me to quickly decide that this was NOT the way to go! (instead, I simply unmounted the drive and hand carried it to the destination machine so I could copy it directly -- at bus speeds instead of network speeds)

Yes, but projecting this onto the "progress indicator" issue, there isn't any real way that a task can *insist* on meeting its deadlines (unless it *knows* those estimates are accurate and can influence the scheduling of resources in the system, etc.)

A task, of course, can do nothing other than "its best" (even if it is poorly implemented!). OTOH, the huge advantage it has is that it is 100% repeatable. It *will* work the same way each time. So, it should be much easier to measure and rely on.

Reply to
D Yuniskis

In my experience most Windows installers get this hopelessly wrong. There has to be some kind of estimate made for the time taken by different phases -- for example, one phase may be CPU bound, another disk or network bound. Since the balance between the performance of different components can't be anticipated, it is impossible to say how much time each phase takes. Also, some phases don't seem to be counted at all. How many installs have you seen where the progress bar moves quite quickly but takes an eternity to initially shift from 0%, or, at the other end, to move from 99% or even 100% to completion?

My preference would be for textual descriptions of activity: e.g. a simple message like "Parsing archive header" for a phase expected to be fairly brief, or, for a longer phase, something with a further indication of progress: "Processed 2 out of 2721524 thingies..." Of course, that can be misleading too: if thingy processing takes an hour, it may be tempting to think the job is almost done when that phase completes, when in reality you may only be 10% of the way through.
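A minimal sketch of that textual style (the helper name is invented): a short phase description, plus an item counter when the phase has one worth showing.

/* Sketch: phase description with an optional item counter. */
#include <stdio.h>

static void phase_report(const char *phase, unsigned long done, unsigned long total)
{
    if (total != 0)
        printf("\r%s: %lu of %lu", phase, done, total);
    else
        printf("\r%s...", phase);
    fflush(stdout);
}

/* e.g. phase_report("Parsing archive header", 0, 0);
 *      phase_report("Processing thingies", 2, 2721524);          */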
--
Andrew Smallshaw
andrews@sdf.lonestar.org
Reply to
Andrew Smallshaw
