On Tue, 02 Dec 2014 16:31:52 +0100, Oliver Betz wrote:
Don't blame the debuggers; by far, most developers don't even use high-end "sophisticated" debuggers which can make full use of the on-chip debug logic and provide reliable CPU trace. Today's hardware and software are way more complex, however. Below are some cynical remarks.
We use hardware with badly documented and/or broken peripherals, which requires debugging. We use libraries with badly documented and/or broken APIs, which requires debugging. We use developers who can't RTFM and/or perform proper problem analysis, because the good ones were taken by those with (government) funding.
(Remove the obvious prefix to reply privately.)
Made with Opera's e-mail client: http://www.opera.com/mail/
I think it depends a lot on the developer. Some like to do their homework "up front". Others start writing code before the marketing folks have even finished describing their fantasies...
My first commercial project had a build cycle of almost *4* hours! Three developers sharing a codebase on 8" floppies that had to fit in 12KB of EPROM (that's *K*B) each of which (QTY 6) took ~20 minutes to program.
Access to the hardware (development tools, prototype, etc.) forced you to spend a lot of time with "hard copy" -- *planning* how you would verify the code's execution *and* how you would get feedback from the system (no "debugger", no serial port, etc. just a collection of digital I/O's that you could try to repurpose for debugging use -- if that use wouldn't then render the *intended* use inoperable!)
Lots of discipline, cooperation and communication so your individual pieces of code weren't routinely stomping on each other's progress.
Tying up a few hundred dollars of EPROM for each turn of the crank meant you didn't keep old versions lying around to re-evaluate: plug them in, observe the results, then move them under the UV light so they'll be ready to burn when the *other* set are done!
Before that, punching cards and submitting jobs "once a day" -- only to find you had a JCL card missing/out of place and the whole job ABEND'ed.
Of necessity (i.e., if you wanted to get *any* work done in a given amount of time given the restrictions on the hardware available), you thought more to make every effort "yield results".
The last ICE I bought was ~$25,000. However, it was pretty capable given the *lack* of debug support in the silicon of that era.
Today, you can prototype code *without* hardware! I now write most of my code before I've even formalized schematics!
I budget 40% of a project for specification/design; 20% for coding; and 40% for testing/verification. To me, this is intuitive: Figure out what you want to do IN ALL CASES and be able to verify that it behaves as intended in each of those cases. The coding is just the boring "middle work".
I know friends who are always looking for faster machines to shrink their code-compile-debug cycle time -- but, they also tend to be the sorts who just stumble on something that *looks* like it may be a bug, change it, rebuild and use the "resulting performance" (which may coincidentally be misleading!) to "do their thinking"... i.e., if it now (appears to) work, then that *must* have been the problem, right??
Are you thumping your cane on the floor as you complain about kids these days, and how the world is degenerating into crap because of them?
I think if you look, you'll probably find similar complaints in the Bible, or perhaps written in hieroglyphics on stelae in Egypt someplace.
Times change. The details of the screwups change. The nature of the thinking leading to the screwups doesn't change, nor do the basic solutions.
On a related note, I've been taking more and more to test driven development over the last decade, because it seems to help my development a lot. In the last two years I've gotten much more strict about doing everything I possibly can under TDD, even if I have to add hardware abstraction layers to do it.
My code has gotten better as a consequence. Not because so much more of my code is -- perforce -- unit tested, but because the level of detail of testing that TDD calls upon you to do forces you to think about what you're doing in much greater detail, and to verify that your head is screwed on straight as you did the thinking.
Before modern tools, much more time was spent putting the bugs *into* the code.
I can only recall a couple of instances of embedded devices (out of the thousands I've ever encountered) that had a human interface element, which did not have flaws and quirks in the interface. HP and Sony seem to be able to mostly avoid this, but most embedded programmers are hopeless. How hard can it be to code a microwave oven timer to work in a sane and correct fashion? Yet I've never seen a single one.
Eventually the embedded world will catch up to the leading edge of the web world, which uses TDD/BDD to code end-user expectations as tests that run automatically and quickly, which (as well as actually causing thought about UI state transitions) avoids most debugging.
Most embedded devices (until recently with the abundance of RELATIVELY inexpensive graphical displays) have tightly constrained hardware. And, a design mentality ("from above") that discourages adding any recurring costs that don't have direct ($$) benefits.
E.g., our washer/dryer still relies on 7 segment displays -- and a few "indicators". As a result, you get silly "messages" like "nF", "dO", etc. instead of something more informative and HELPFUL like "No Fill. Are you sure the water valves have been opened?"
This is *so* 1970's....
C'mon... how many millions of these things (different models, etc.) are they selling WORLDWIDE? Yet, couldn't afford even a set of 10 segment displays? Or, just *two* "digits"-worth?
Or, indicators for each of these conditions?
"Ah, but what happens when we want to convey an error/condition that we haven't yet ANTICIPATED??"
I've encountered two "camps" regarding how users are considered in the design process.
One essentially ignores them and concentrates on trying to make the product work. I.e., as if just getting ANYTHING out the door will be a major accomplishment -- "worry about the details (users), later"
The other tries to understand the user's needs and thinking. Then, adopts features and mechanisms that "fit" with this understanding. While this *seems* better (at least they are considering the user as part of the "system"), it often results in what I call "The Accountant Mentality": where the user is expected to "perform" in a fixed, "anticipated" manner. There is (allegedly) some logic to The Interface and it's just a question of making it as easy as possible for the user to *accept*/adapt that logic.
"Power Level, 9; Time, 1 0 0; START"
For the past several projects, I've pursued a different approach: try to let the user do what he wants and *infer* what he *intends*. I.e., encode *minimal* prerequisites that allow the application to guide the user along. E.g., let earlier actions refine the constraints on later ones...
I have found there is a LARGE class of users that are VERY uncomfortable with this sort of approach! They want a scripted interface: do this, then this, then that. The freedom I present leaves them uncertain of every action they take -- despite demonstrating the fact that they won't be allowed to "screw up" (if you forgot something, you'll be reminded WHEN YOU TRY TO CONTINUE PAST THE POINT WHERE IT IS REFERENCED)
I've seen a lot of brain-dead web apps that force you to take steps in a very specific sequence -- even when there is no logical reason for doing so.
Or, walk you through a series of "screens" (pages) only to discover you want/need to go back and change something on screen #2... but, the only way to do that is to quit and start over!
"Why do I have to fill in my name, address, etc. before I can even get a GUESSTIMATE of total cost of this item? Beyond a ZIP code, what more do you need to determine what local tax rates apply and shipping costs?? Heck, you should be able to give me a TYPICAL RANGE of shipping costs (with a footnote that qualifies that estimate so I can see if it is likely to apply to me BEFORE I've even provided a ZIP code. I mean, how many gigabytes and MIPS do you have running this little app??"
Give me a hundred MIPS or so and a even a few MB, a graphic display, etc. and you'll be surprised how elegant that Microwave oven interface can be! :>
Conversely, let your web app have a few KB and a few KIPS and a few 7-segment displays and tell me how delightful *that* experience will be!
And designers of appliances should test actual users to see how they respond?
I was just noting in another newsgroup about a CSE seminar:
"The Programming Language Wars" on how little testing is done to see how users use programming language features.
One test he did was to compare an existing language, using actual people to do the test, with a similar language using random ASCII characters in place of keywords. See:
(It should work, even without an ACM subscription.)
If that doesn't work, or you want to see other papers:
Well, I might think that when no water comes out people figure to check the valves, but you never know.
But for cost-sensitive items, every cent counts.
I have thought before about how many products come out before the theory is well enough understood. So, yes, in the beginning it is often true that getting anything out is a major accomplishment.
I still have a microwave with knobs. I always forget which order to put the power and time when using digital versions. I can change the power level while it is running, just turn a knob.
The above link also has a paper comparing static typing vs. dynamic typing for programming languages. The latter might correspond to your *infer* case. It seems that users do better with static typing.
Like Jack, some of us have learnt the right way to approach any project and do, indeed, do a lot of up-front thinking. It is a trait I have tried to instil in all the apprentices and graduate student intake at the companies I have been involved in over the years.
Noting that Don budgets 40% of his time to up-front specification resolution and project approach planning, I consider this to be on the light side. My own figure is closer to 60% of the total project time spent getting the spec right (including testing and debugging the spec). During this period the spec can change quite dramatically as problems are highlighted, identifying requirements that could lead to potentially unsafe operations (I am, after all, in the High Integrity Systems market).
The benefit of all this up-front work is that the design task becomes much more straightforward and I can then produce decent certifiable electronics hardware and software. Within the resolution of the requirements specification I will have invested in some significant portion of "play-time". This "play-time" is the exploration of small aspects of the requirements with the aim of improving the requirements specification. Any prototype code or design produced at this stage is milked for information only for the purpose of requirement specification improvement. It is then scrapped.
Of the remaining 40% of the project timetable, we test as we build as much as is practicable. The test specifications will have come out of the 60% block, and satisfying those tests completely means your design is fulfilling the specifications. In this latter period we might see a few (usually minor) gotchas but then, no development process will be absolutely perfect.
Errors that creep into projects are quite language and technology agnostic.
44% of a project's errors will be inserted within the specification stage (see "Out of Control" by the UK Health and Safety Executive). This is why it makes sense to remove those errors before you start the design effort.
Of course, in order to remain in control of the project effort and ensure that the team are moving to the same overall plan, you need to have a decently robust Development Process in place. CMMI level 3 is the bare minimum your process should support. Higher, though, is better. Correct by Construction is the best (and is an improvement beyond CMMI level 5).
Paul E. Bennett IEng MIET.....
YES! We would smuggle a prototype game out to an arcade and discreetly sit back and watch how players (using THEIR REAL money) reacted to it. Is it too difficult? Too easy? How quickly do they pick up the proper strategies to win? How easily are they distracted? What "wows" them? etc.
Thanks, I'll have a look!
If you don't understand your users, you don't understand the *problem*. So, why are you trying to solve "it" *instead* of understanding it? Or, are you hoping your users' complaints will help you understand it??
If you sit there and *watch*/wait, you can figure this out. But, walking up to it after it has started beeping at you means you have to effectively unload it to see if anything is damp.
Why put a fuel gauge in a car? When you're out of fuel, you'll (eventually) figure it out! :>
Why display "dO" or "nF" at all? Just put a big red light that says "user screwed up"!
Yup. If the alternative is to replace existing product, then it's awfully easy to rationalize a "kludge" approach:
"We'll let the user enter a *time* (of day) of 99:99:99 to access this special feature..."
We had a $2K product many years ago and someone ventured the idea of adding a "button to do...". The laughter was intense: "Good luck with THAT! What's it gonna cost... 25c??"
It's often a reflection of the folks pushing for the product not understanding their market, either!
I saw a statistic (that I won't quote out of fear of misremembering the specifics) that some *huge* number (percentage) of products are returned because the user couldn't figure out how to use them (comfortably).
How large would that number have to be if it was "in-warranty repairs" before it would raise some alarms??
Exactly. When shopping for a microwave oven for my in-laws many years ago, I gravitated towards digital keypad; wife said, "No, this big knob is something they can understand!"
There is no way to "soak" in our current washing machine. Salesman argues that "soak" doesn't make sense for a front-loader. I counter, "So, you're letting the technology determine what features the user requires? Why can't 'soak' simply be: fill, agitate, pause, repeat?"
Once a cycle is started, the wash/rinse temperatures can't be changed. Nor the soil level, spin speed, etc.
"How about a CLEAR button if you can't be clever enough to let me change the rinse temperature ANY TIME PRIOR TO THE RINSE CYCLE? Oh! The POWER button acts as a clear button! Great! I should shut my TV off each time I want to change channels..."
Can't have a hot rinse. And, rinse temperature cannot exceed wash temperature. Can't open the door except when *it* decides you should.
*If* you can address all of the user's needs, great! But, when you can't, then you leave him/her trying to figure out how to do something that they *should* be able to do.
E.g., turn off the cold water supply to force a "Hot wash/Warm rinse" to be "Hot/Hot"... Or, spin the (mechanical) dial around to "Rinse" skipping over the wash portion of the cycle... Or, open the lid to pull out a few items *before* the spin cycle... or...
Note, none of the things that the washer is "doing for me" are really making my laundry experience any more pleasant or quicker. So anything it is doing poorly or NOT allowing me to do is just making that experience *worse*.
I don't mean to single out washer/dryers. As Clifford said, damn near every THING that interacts with people has some questionable concept of what that relationship should be -- The Accountant Mentality ("just do it THIS way...")
The gas pump at our local Costco failed to read the mag stripe on my credit card the other day. Message appears saying "try again" (or thereabouts). Put credit card in, again. Ah! No, it wants me to restart the entire transaction! Beginning with my *membership* card (it produces the same "try again" message if your membership card isn't read correctly).
Gee, a fully graphic COLOR display and you couldn't spare a couple of extra characters?? "Please start over" (acknowledging that there *may* be some merit in NOT leaving the transaction "pending")
Even that can be tricky these days. My dad and his wife had their washer replaced a few years ago, and they ended up with the "anti-flood" hoses (I don't think those are a good idea, and neither apparently do most washing machine manufacturers, but a lot of them get installed).
If you've never encountered one, these have a built in safety valve that detects a sudden pressure drop and closes. The idea is that if the hose bursts or becomes disconnected at the washing machine end, it'll shut off to prevent a flood. At least in theory a potentially useful innovation, but I've never seen that particular problem.
Anyway, one thing that you need to be very careful of on these is that when you're turning on the valve, you have to turn it on slowly, or the sudden pressure change will trigger the safety valve. You can even cause the problem if you shut off the whole-house supply and don't reopen the main valve with the taps in the utility basin open. And there's (of course) no indication that the safety valve has triggered. Resetting the valve is easy, although totally unintuitive if you don't know the secret (you have to unscrew the hose connection at the plumbing end to drop the pressure on that side of the valve, at which point it releases).
So someone managed to trigger the safety valve, and the washer just sat there blinking an obscure two character code. And my dad (who's far from mechanically inept), was ready to call a plumber or service for the washer or...
Fortunately he called me first.
Yeah, just a few more LEDs would allow a scrolling error message. Or heck, print the damn error codes someplace - underside of the lid, perhaps.
There are pressures on the manufacturer that are different from the consumer. My washer has a whole modem (OK, half a modem), so that it can send diagnostic data to the service center (you hit a magic sequence of buttons when the tech tells you to, then hold your phone over the indicated spot, and you listen to modem noises for 10-20 seconds). Obviously they felt that was a worthwhile feature to install.
Perhaps it's only with front loaders. Many HE top loaders do have a soak cycle.
I haven't seen a hot rinse cycle on a washer in decades. Not even warm. This may be an energy conservation thing.
As to the doors, that's tougher. Certainly HE washers (both front and top loaders) have dangerously fast spin cycles, I can certainly see not wanting to allow access while that's going on. Front loaders have the additional problem of needing to be drained in many cases before opening the door. The top-loading HEs seem to be a lot less picky about that. I know mine will unlock in a few seconds (~15 if it's in the middle of a spin, considerably less at other times) after hitting "pause".
And yet you then run the risk of seriously complicating the user interface. As you mentioned, tons of products get returned because they're too hard to use. Even ones that really aren't. It's little wonder the manufacturers *don't* want to give you extra flexibility.
And some of these things have safety implications (the door lock, for example). The manufacturers are going to be loathe to give you any way of fiddling with that. On the flip side some of these problems are because of the required (safety, or otherwise) features.
OTOH, if you now add an Internet interface to your washer, you could provide a way to program your custom cycle. And then the next stuxnet virus will come along and destroy your washer (just after separating the isotopes in your clothes). I'm pretty sure we can't win here.
I've seen that happen, causing a fair amount of water damage in an apartment. The safety valve sounds like a good idea and I doubt they'd have accepted the additional cost if flooding wasn't a problem.
The smarter way of doing this is to put (electrically operated) valves *at* the supply and remove the pressure from the flexible hoses when not needed. E.g., even if open-loop based on *time*.
How many folks forget to turn the valves off after each use? How *inconvenient* is that (i.e., why not *fix* it?!)
We had a fill line rupture to one of the toilets one evening. Thankfully, the sound of rushing water was enough to wake me and I was able to catch it before the bathroom floor was even "completely wet" -- let alone FLOODED!
The armored hoses for the washer failed within the first day of use. How much of this was due to crappy quality vs. the "too high" municipal water pressure (that this event led me to discover)...
Exactly. Not in the *manual* -- which gets FILED AWAY with the warranty stuff, etc. (how often do you need to consult a manual to use a washing machine -- discounting ERROR CODES??).
And, if you're going to print up a cheat sheet, then error *numbers* are probably more intuitive than crude "7 segment LETTERS"
(e.g., why "dO" and not "do"?)
They've got icons for "door locked", "add clothes", wash, rinse, spin, finish, childproof, etc.
One of the best advances in photocopiers was the use of graphic icons to indicate *where* the jam/problem was detected. Instead of a code that says "fuser", etc. (how many John Does know what a fuser is, where it's located, etc.)?
Exactly -- but if the user experience suffers, then it eventually comes back as a cost to the manufacturer.
E.g., the door latch in the washer broke after ~18 months (light use... just two of us, here). Flimsy piece of snap-together plastic that gets beaten on (even by the gentlest of users!) by this hefty door slapping into it.
Service call was comp'ed (out of warranty). How much "extra plastic" could they have put into EVERY door latch for the price of the comp'd service calls nationwide? Worse, yet... what of the cost in lost sales as folks see this as a sign of poor quality and opt to replace with something from another manufacturer??
Ours has a diagnostic mode that you can invoke and view results on the front panel. Most are meaningless to Joe Average User (issues relating to the VFD drive, calibration values for water level sense, etc.). Undoubtedly intended for a technician (you can also coax the machine through its cycle in an accelerated fashion by driving it with the front panel buttons).
It *was* helpful in being able to cite the exact number of wash cycles the machine had undergone prior to the door failure for my complaint letter! :>
Front vs. top shouldn't alter what a user may need to do with regard to laundry! :> Granted, you can't "fill a tub" with water and let clothes just *sit* there when you have a front loader. But, you can probably achieve something comparable with the fill/agitate/pause/repeat algorithm I cited above. Just do it for longer than you would for a regular wash cycle.
And, since the machine knows the current orientation of the drum, it can strive to keep "flipping" the contents to ensure they all are uniformly exposed to the water at the bottom!
You can't even *trick* it into giving you a hot rinse (by turning off the cold water). It complains with the same "nF" error that it would have given you *initially* had you failed to turn on the HOT water supply (because it now wants COLD -- to make WARM -- and can't get it!)
If you don't hear its little bleat, it will gladly let your wash sit there, mid-cycle, because it hasn't got what it thinks it *needs*. (And why didn't it test this BEFORE it started the wash cycle, so it could tell you when it KNEW you were present?)
Old top loaders had similar problems. They addressed them with clutches and brakes -- open lid and tub (fully loaded) would stop spinning almost before you could get it fully raised!
I can't recall ever seeing the water level "on" the window. (the drum isn't perfectly horizontal but, rather, slopes downward towards the back) I.e., as long as any clothes that happened to have climbed up the sides (while in motion) have had a chance to settle back down, I can't see any water getting out.
OTOH, cleaning the *filter* located *at* floor level ALWAYS results in water on the floor -- even if you slide a plate or cookie sheet under the washer, first!
Give me the same interface that was common previously. A *dial* to select cycles AND *where* to begin *in* the cycle! So, if I want to skip directly to the rinse portion of the gentle cycle, I *can*. I don't need to read a manual if the controls all look REALLY SIMILAR to what manufacturers have been cranking out for decades!
[This is the "big knob" vs. "keypad" argument on microwave ovens.]
Instead, they've all got a big knob that's little more than a 12-way selector switch to pick *a* cycle -- and *it* decides what to do from there. I guess they figured the OLD knob was way too complicated to use and wanted to make things easier on us all! :>
I don't think you have to "expose" automation in order to benefit from it. Eventually, I plan on cannibalizing the washer *and* dryer and moving the control algorithm "outside the box". I can't see any other way of getting information *out* of the appliance. And, no way am I capable of producing that mechanism in "quantity one" for anything approaching what I can buy it for! :>
Note that this is contrary to the current "iterative" approaches.
From your comments, below, you appear to include some amount of "research" in your up-front costs. I won't start on a spec/design until I already know what the technology will be, etc.
E.g., I've got roughed out, working prototypes of my gesture recognizer, speech synthesizers, distributed time infrastructure, network speakers, etc. Yet, haven't begun many of their formal specifications.
"Make one to throw away" Learn from it. Then, use that knowledge to come up with the "right" specification and design.
E.g., I can't formalize the timing infrastructure because there are no real "existing" standards and metrics that I can simply adopt. How do I specify how well the DPLL's will perform -- in absolute and relative terms? How do I specify how quickly a given client will nominate a new master clock -- and then come into synchronization with it? How do I specify the criteria by which a swarm of clients decide to mark a chosen master as "malfunctioning"?
I can tell you how *my* implementation behaves. But, deciding that *it* is the Gold Standard would be presumptuous. OTOH, if there are no other documented implementations to contrast with, then...?
Exactly. How can you write code (design hardware) if you don't know what the code is SUPPOSED to do? (you're, perhaps, hoping to figure that out as you write the PRODUCTION CODE?? :> ) How can you challenge the design (i.e., create a test suite) if you don't know the extent of the "legal limits" of your challenge? ("Ah-ha! When I poured fuming nitric acid on the device, the output was NOT correct!!")
In my case, this all happens before I start on the formal spec/design.
I simply can't estimate things with which I've no prior experience (and, the things that I've already *done* are nowhere near as interesting as things I've never *tried*!).
So, either *I* invest in the research (for my own curiosity or in the hope of folding those costs into a subsequent contract) *or* you (client) pay me to do that research (T&M) knowing that you can back out if it looks like it's getting too expensive.
Or, find someone who claims to already "have all the answers" (and deal with the estimate *he* gives you -- but certainly don't expect ME to use his numbers in MY estimate! :> )
With a spec "up front", someone else can start work on test scaffolding and building test cases. In doing so, they can also uncover issues that you may not have considered.
[When a client tells me they "don't care" about a behavior in a particular set of circumstances, I remind them that I am free to let the device catch fire and BURN if those circumstances exist... especially if that will save me effort! :> Folks tend to re-evaluate their opinions about those circumstances when faced with such obvious alternatives!]
This actually isn't *hard* to do. But, takes a different mindset. You have to continually ask: "What am I ASSUMING, here? Is that a valid assumption? What could *invalidate* that??"
["Can't Happen" *does*!]
Time for cookies. 35 dozen tonight if I plan to stay on schedule! :-/