Languages: is popularity dominating engineering?

But, how do they do that? You can't (indefinitely) improve a programmer's "inherent quality". And, any improvements there are slow to realize.

So, you have to rely on the tools getting better, more expressive, etc. Let a tool burn development cycles to make the developer's effort be more productive (e.g., lint *in* the IDE while you're writing the code instead of as a "post process").

[The trick, here, is not to turn the developer into a mindless idiot that expects the machine to do ALL his/her thinking!]

E.g., my bias in recent years is to make code more easily understood and more *robust* -- instead of burning clock cycles on (silly?) features, burn them ensuring the code ALWAYS works as advertised, etc. Yet, do so within the other constraints (cost, power, space, etc.) imposed on the design.

Reply to
Don Y

I said "at least target"; I don't even see too many that seem to care. Learn to speak the (spoken) language; learn how to negotiate what you'll deliver; learn how to do really thorough design/development/testing. Overdeliver/underpromise.

I dunno; learn your craft. I've been at it for close to 30 years. I was barely even competent at ten years from a standpoint of being able to deliver stuff that pretty much works the first time - maybe bounce a bug or two before final release. I'd released stuff that worked from the get-go; what got cleaner was my ability to promise things and hit targets.

Somewhere about 20 years ago "we" decided that this is a children's game and structured things accordingly. No; it's an *adult* game and you have to play it to win.

No. That doesn't work. The information asymmetries are huge. I suppose there's a way to school people to be better earlier but I doubt it. I think it'll just take the ten years to get to the journeyman phase.

Use of an IDE is a mark of someone who doesn't understand the real risks. That doesn't always hold.

Feel free to do that; it does not turn out well.

Features are manageable; just withhold them until they work in subsequent releases. You, of course, have to know what's really required and what isn't, but that's a big part of the game.

--
Les Cargill
Reply to
Les Cargill

But that is true of all professions! It's not just "ditch diggers" who treat "work" as "just a job". Ever have an argument with a nurse who *claims* she didn't dispense a particular medication -- when *you* watched her administer it? ("Well, it's not on the chart..."). Or a plumber that doesn't *appear* to know how to sweat a joint? Or a painter who didn't properly prepare the surface before slopping on paint?

We tend to forget that, to most people, "work" is "just a job". What incentive do they have to perform better? Will they be rewarded? Conversely, penalized if they perform *worse*??

"The Market" doesn't want to have to rely/depend on practitioners. I've been hired for the *stated* purpose of an employer/client wanting to "not be reliant" on a particular individual currently in their employ.

I, in turn, never wanted to be "strapped" with supporting/developing the same thing over and over and over (when you're seen as "good" at something, you tend to get STUCK doing it).

Look at how society has striven to "dumb down" most labor efforts. Not to reduce errors or free employees from "tedium" but, rather, to allow less skilled (expensive) employees to fill those roles.

[When did the ability to "make change" slip out of the basic skillset of ALL consumers??]

But it *has* worked! Look at how many folks now make a living "writing code". Years ago, they would have been hard-pressed to get their (Hollerith) cards in the right order to ensure the job didn't ABEND before it got started! Now, "secretaries" write macros in spreadsheets, countless script-kiddies build web pages, etc. All because the tools have taken on more of the "work".

*My* productivity is vastly improved when I can code in a multitasking environment -- esp if the tools let me "attach" to multiple threads and watch them interacting. This would have been unheard of with the targets and development systems available when I started my career!

You're advocating that we do away with IDE's? Simulators? Lint? etc. A tool isn't inherently "bad"; it all boils down to how well you *use* it and what you expect *from* it.

I *love* being able to run my code on a desktop simulator instead of being dependent on a piece of target hardware. There's so much more I can do in pulling data from the "virtual target" to verify proper operation, visualize the data or the performance metrics of the code, etc.

It depends on the individual. When I hear people complaining that their machines are too slow, my first thought is "What are they doing that is causing those to be the apparent bottleneck?" Often, they aren't THINKING but, instead, just "trying things" and hoping one of them works. Then, when it works, forgetting all about the problem (i.e., considering it "solved") and moving on -- to "throw darts" at the *next* problem they stumble upon.

[Look at places like McDonald's; their cash registers just have *pictures* on them (or, at least, they *did*, at one point). Push the "hamburger button" twice for two hamburgers, etc. How the hell can they *ever* get an order WRONG? Yet they *do*! :-/ ]

The developer doesn't always have control over what happens, when. Manglement can declare that a new feature is required -- even though the OLD features haven't been "perfected", yet.

And, IME, developers tend to want to play with implementing new features instead of testing/documenting/perfecting old ones. There's little "novelty" in testing or documentation! And, by the time something is (sort of) working, the developer is looking for any excuse to "move on" to something else...

Reply to
Don Y

At the end of the day, it's just a job to me, too. But it'd be no fun at all if I wasn't engaged with it at this level.

Incentives don't work, ultimately.

I understand completely; how'd that work out for 'em? We've already descended into the realm of "who has the power in this relationship?"

That's easy: the boss does.

That's fine for running a prison, but it's hell for a corporation. Had I been explicitly told that up front, I'd never trust that individual again.

Then again, being head pumper on a sinking ship is no fun. So that's his choice... I sympathize completely having to depend on that one guy, but.. maybe he's doing it wrong.

I find that if you build it right, the support is pretty minimal.

That doesn't work, either. It's been quite the opportunity for me as well.

Meh. We all use a little plastic card, anyway.

Boo, cards. Very slow and inefficient.

So what's wrong with that? That is not what I am talking about anyway. "Secretaries" have *real* jobs; we get to play all day. A large dollop of respect is in order.

I'm just a necessary evil, in the end.

I don't find any of that amounts to a hill of beans. It's decoration. I've done things with .. too many threads, one "big loop", oddball CASE tools... the basics under it all are the same.

Nope. But you'd better be able to dive in outside the thing. Or do you ship "DEBUG" projects and call 'em released?

Certainly not.

Of course.

This is fine so far as it goes.

Of course. I do that same thing while I muse about the root cause. I suspect we all do. About half the time, I stumble into it.

Heh.

McDonalds has a specific corporate directive to have people and not just machines in the stores. That's the only reason they're there. "Freshly scrubbed faces" as per an interview ( might have been an article ) with Ray Kroc.

They might as well. It ain't done 'til it's done. Here's the URL of the current defect list, and I'll send you an email every time a new one pops up...

I've never had a lick of trouble with negotiating what goes into a release. All "manglement" wants is documentary evidence of improvement. If you learn to estimate the cost of not-fixing something, you'll have better luck with this. And it helps to have a non-adversarial relationship.

So they should learn to be cost-driven. Every feature you *DON'T* add saves countless dollars in all directions.

And if documentation hurts, you're doing it wrong. Remember that simulator you wrote? There ya go...

"Novelty" is that which I should think we'd like to *avoid*. Nice, boring defect free stuff - that's the ticket.

Eventually that converges on not being a developer any more, in my experience. Narrow is the path...

--
Les Cargill
Reply to
Les Cargill

Steady on Don. You are starting to sound like you are advocating that all software should be "correct by construction". ;>

Actually, "correct by construction" is a very laudable aim for all software developers. However, you cannot truly state that what you have constructed is correct by construction if what you are building is overly complex. Hence, the need to get to the point of simplification of systems and the components that make-up those systems so that each and every one can be adequately described, documented and understood.

--
******************************************************************** 
Paul E. Bennett IEng MIET..... 
Forth based HIDECS Consultancy............. 
Mob: +44 (0)7811-639972 
Tel: +44 (0)1235-510979 
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk.. 
********************************************************************
Reply to
Paul E Bennett

In a typically designed database, that query wouldn't be made any more efficient by only targeting females. Such an optimization would require that male and female patients be separate to begin with and there's generally no good reason to do that.

A KB built from correlations found in the data might have some utility in optimizing ad hoc queries, but ad hoc queries are atypical in most settings.

George

Reply to
George Neuner

Specification drives design and testing. Of course, no guarantee that the spec is correct for the problem at hand (that's the first part of the puzzle -- get it wrong and it doesn't matter how "perfect" your solution happens to be -- you've solved the wrong problem!)

What I use "technological (runtime) advances" for is to choose cleaner algorithms, more fleshed out data constructs, etc. So "what I'm doing" is more apparent to the next guy to look at my code -- without my having to explain some "trick" in the algorithm. (I may still have to explain the *algorithm*, but not some twist that he/she is likely to misunderstand -- or, worse, *think* they understand and break in their efforts to make changes.)

Similarly, I'll add black boxes to the run-time to give me some instrumentation that remains *in* the application (and, thus, does not alter its operation) that can enhance debugging (often, disguising them as "old state" so the algorithm can exploit their content as well).
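
A minimal sketch of that idea (the names and sizes here are only illustrative, not lifted from any particular project): a tiny ring of recent events that ships in the RELEASE build, so leaving it in place doesn't change the code paths being exercised.

    #include <stdint.h>

    #define TRACE_DEPTH 32               /* power of two keeps indexing cheap */

    typedef struct {
        uint32_t when;                   /* tick count or timestamp            */
        uint16_t where;                  /* state/event identifier             */
        uint16_t detail;                 /* whatever the algorithm finds useful */
    } trace_entry;

    static trace_entry trace_ring[TRACE_DEPTH];
    static unsigned    trace_head;

    void trace_event(uint32_t when, uint16_t where, uint16_t detail)
    {
        trace_ring[trace_head] = (trace_entry){ when, where, detail };
        trace_head = (trace_head + 1u) & (TRACE_DEPTH - 1u);
    }

    /* Because the algorithm can also peek at this "old state", the
       instrumentation earns its keep instead of being dead weight.   */
    uint16_t trace_previous_event(void)
    {
        return trace_ring[(trace_head - 1u) & (TRACE_DEPTH - 1u)].where;
    }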

For *compile* time exploits, I litter my code with invariants (no runtime cost) so the next pair of eyes (which may be my own!) know what the safe assumptions are at each such place in the code (instead of just moving them up to check input parameters). Also, I build compile-time tools that help ensure code and documentation remain in lock step. (e.g., I extract details from publications that I've prepared to describe the algorithms and #include those directly into the source -- so you change the documentation to get the source updated!)
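
E.g., something along these lines -- a sketch assuming a C11-ish toolchain, with made-up names:

    #include <assert.h>
    #include <limits.h>

    /* Compile-time invariant: breaks the build, costs nothing at run time.
       (Pre-C11 compilers can fake it with a negative-array-size trick.)   */
    _Static_assert(CHAR_BIT == 8, "code below assumes 8-bit bytes");

    /* The invariant is stated where it is *relied upon*, not just at the
       public entry point.  With NDEBUG defined for release, assert()
       vanishes -- but the next reader still sees the safe assumption.     */
    static unsigned scale(unsigned raw)
    {
        assert(raw <= 1023u);            /* caller has already range-checked */
        return (raw * 100u) / 1023u;
    }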

Most of these things add some performance penalty (longer build times, slower run-times, etc.) but that gets hidden in the silicon improvements.

That's why it is important to make things in small pieces. "Complex == something that you can't fit in your head" So, "one page" functions; small modules with well defined functionality/interfaces; etc.

E.g., if you look at how an individual op-code executes (i.e., alters the current state of the processor), there is a lot of detail, there (instruction fetch, decode, actuation). But, it fits within the above definition of "not *too* complex". A HLL statement might resolve to many of those *different* op-codes being executed to move the machine state from "before the HLL statement" to "after the HLL statement". But, we can still manage this complexity because we can "skip over" the complexity that is embodied in *each* opcode and, instead, concentrate on the abstractions they each represent. So, we apply our wetware to manage multiple opcode instances instead of all the mechanism they employ. Etc. for a function, then module, then program, then application.

Reply to
Don Y

Ah, in my case, it's an "avocation" for which I get paid! :> Moving into a consultant's role gave me the freedom to explore different projects/application domains (instead of getting stuck doing the same thing over and over -- same market, same types of products, etc.). Retirement is a chance to address the projects that *I* originate (instead of worrying about making money for someone else!)

Yup. Especially the obvious one (money). You really want to find people who are self-motivated, enjoy what they do, etc. "What projects have you done OUTSIDE of work?" (playing video games isn't a "project"!)

Some level of paranoia/self-protection is healthy. But, when parties start flexing their muscle (i.e., employee putting a gun to employer's head to get more money), then the relationship is already soured.

That's not always the case. Key employee leaves and there can be serious financial repercussions ("Who's your backup? What do we do if you get hit by a truck??")

I can only surmise what transpired prior to my arrival -- based on observations of the personalities involved in the time that followed.

I worked with a guy many years ago who (apparently) went looking for a new job every year "on the sly". But, always went to the same firms that he *knew* would leak his search activities back to my employer. According to my boss, he once fielded a call from one of these firms to the effect of: "Is XXXX really unhappy, there? Or, is he just holding you up, again?"

I wonder if he'd be embarrassed if he knew folks were saying things like that about him?

If you have leverage over the folks who "want changes", that can be so. But, if (e.g.) Marketing comes in every other week with some new idea ("requirement"), all bets are off.

At one firm, I was charged with coming up with a design for a newer version of a product they'd been "nursing" for more than a decade. I had to pitch my proposal to damn near everyone: top management, ALL of engineering, marketing, etc. (to my knowledge, this had never been done there -- before or *since*!)

Almost immediately, the Marketing folks started in with their "Oh, you HAVE to have *this* feature!" -- citing something that their old device had but that I had elided from the new device's specification. They were NOT happy when I replied, "You sold exactly ONE system with that capability. I know because prior to preparing my proposal, I examined EVERY sales order for the past 10+ years!"

The room went quiet until the CEO looked at me and said, "You know, I bet I know *who* bought it -- and it's probably sitting on a SHELF (not in use)".

Had I *not* "done my homework", I'd have been bullied into adding a useless feature at some recurring and nonrecurring cost.

?? That's something that you do "in your head" -- like memorizing multiplication tables!

Tools (technology) have advanced so that more people can do the things that would previously have required "specialized skills". And, so those "things" can be applied more pervasively.

"Secretaries" aren't carrying decks of cards around to "balance the books" but, instead, are writing macros (or, using visual tools to do same) to do it "live".

The point is, these techniques were impractical years ago. Writing a *debugger* was a significant effort (e.g., being able to peek and poke memory, single-step a program, etc.) and had to be done for each processor. Now, you have bloated debuggers and simulators that can easily be retargeted to different processors/environments.

In my current environment, I have to attach to multiple threads/processes running on different, geographically distant, physical processors to watch a client's request pass through an agency and ultimately to a service. Doing *that* with even an ADVANCED debugger would have been tedious not long ago!

You can't ship DEBUG binaries. All dead code has to be removed prior to shipment. There are typically *many* aspects of a device that can't be examined or tested without the development scaffolding in place. The advantage of better tools (languages, debuggers, IDE's, etc) is that it allows far more thorough testing/stressing *before* you get to RELEASE.

The first product I worked on was debugged with "'scope loops" and paper printouts. No emulators/debuggers/simulators. No HLL's. It was *painful*. I suspect I could replace the three or four man-years we spent on just the *software* with a couple of weeks/months of effort, today (esp if I could take advantage of newer hardware so the newer *tools* were more effective).

It can go a *long* way! This is a direct carryover from the way I design hardware (logic): e.g., synchronous designs are much easier to "get right" than anything asynchronous. And, if you do a worst case analysis of all signal paths, all you have to do is verify operation at DC -- then crank the clock up to the target frequency.

The same sort of approach can be used in software. Isolate the hardware and time specific aspects of the code. Verify *they* work correctly (with fleshed out test suites -- something that is SO much easier to do with the tools available, now!). Then, KNOWING these work, you can add in the hardware and temporal aspects of the solution (which you have deliberately minimized -- to make this easier *and* more portable!)
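
In code, that separation can be as small as this (interface and names are hypothetical): the logic only sees a thin I/O surface, so the *same* module links against real drivers on the target or against a desktop test harness.

    /* io_if -- the only surface the algorithm may touch.  On the target these
       pointers go to real drivers; in the desktop build they go to stubs that
       feed canned data and record what the algorithm decided to do.          */
    typedef struct {
        int           (*read_sensor)(void);
        void          (*set_output)(int value);
        unsigned long (*now_ms)(void);
    } io_if;

    /* Pure logic: no registers, no timers, no #ifdef TARGET.  Easy to drive
       through an exhaustive test suite on the desktop.                       */
    void control_step(const io_if *io)
    {
        int sample = io->read_sensor();
        /* ...the interesting, testable decisions live here... */
        io->set_output(sample > 512 ? 1 : 0);
    }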

There is a difference between "stumbling on" the problem -- and then *exploring* it -- and "OK, that works... on to the next bug..."

Years ago, I was involved on a subcontract for a MIL project. Primary contractor had designed the kit. Our job was to build it and test it (one-of-a-kind sort of thing).

A minicomputer was used to drive the test suite -- pushing data into the DUT and exercising all data and control paths, indirectly. The comms link between the minicomputer (TTL/LSI) and the DUT (ECL) was a horrible kludge of one-shots, level translators, line drivers, etc.

It wasn't working. I suspected a one-shot was firing too quickly. Contractor's rep ruled that out -- by examining schematics. After patiently "deferring to my elders" and getting *nowhere* ("these are hours of my *life*!"), I grabbed a random cap off the nearest bench, tacked it onto the pins of the one-shot that I suspected and reinitiated communication.

"Huh?? What did you *do*??"

When he saw the size of the cap I used, his criticism turned away from "that's not the problem" to more of "that's *way* too big!"

"Sure! But now we KNOW where the problem lies and can figure out why your design is wrong!"

It's done when it gets *shipped*. Management always fall back on the "Shoot the Engineer" approach. "We don't have time to do it RIGHT; but, we'll have time to do it OVER!"

[One place I worked shipped a large system IN PIECES (as in, not yet completely manufactured!) just so they could get it "on the books" before year end. Of course, shortly after the New Year, it came back with some really angry words from the customer. Of course, the CEO had moved up the ladder -- based on his "record year"! -- so the mess fell on those left behind to pick up HIS pieces!]

This doesn't matter. See above. You are assuming people are rational. Put a megadollar on the books this year -- and let it come *off* the books NEXT year -- makes perfect sense to someone whose sole interest is his *promotion*!

Some projects *avoid* adding things to a release for fear of it NOT working. One client told me that his bean counters had concluded that it cost $600 to put a technician in a car and have him drive the 30 miles to "town" to make a repair. Product we were designing at the time had a DM+DL in the *$300* ballpark. You don't get to make many "mistakes" at those rates!

I.e., you don't skimp on component quality. You design so that you can drop-ship a replacement *product* instead of dispatching a technician. You test every feature to be sure it ALWAYS works. You don't indulge in feeping creaturism if there's no obvious value. OTOH, failing to add a feature that is necessary can cost a sale -- or a reputation!

They "should" learn lots of things: how to write specifications; how to design *to* specifications; how to test to specifications; how not to introduce bugs; etc.

*My* -- or *your* -- saying these things doesn't make them so!

IME, developers *don't* want to spend time writing specs (how often do you see someone sit down and start writing *code* as soon as they're given a new project? *Allegedly* just to "explore some algorithms"? how often do they *then* write the specifications having discarded the code they were "playing with"?). They don't like documenting their code. They don't like building test suites and applying them rigorously throughout the development effort (instead, they "poke at" their code just enough to convince themselves that it APPEARS to work).

It's *so* much more interesting to move on to some other aspect of the design than to keep hammering at pathological cases that *might* come up (or, might NOT!).

I have an uncanny ability to find flaws in production code. It's easy -- figure out what they ASSUME you will do, then do something unexpected. Disheartening when they are "relieved" that they are "finally done" -- only to see me poke a hole in their work almost *casually*!

I had a tool vendor who would grumble about how frequently I would find bugs in their products (through normal course of use). While they weren't keen on the bug reports, they *were* happy to have a more robust product as a result.

Engineers tend to be more interested in "solving problems" than "describing what they did". I'm almost obsessed with documentation -- yet, each time I bake an Rx, I don't formally *revise* it to reflect the improvements I've introduced with this latest incarnation. Instead, I leave a cryptic note to myself. "Next time", I'll carefully examine every square inch of the page to figure out which group of notes are most recent and "update" the Rx "in my head".

Should I, instead, keep a laptop in the kitchen just to "do it *right*"? (At least I *made* notations as to the impact of each change instead of relying on memory for that!)

Elsewhere, you called it "fun". I guess we have different ideas of "fun". I don't consider "boring" and "fun" to be synonymous. Sounds more like a *job*!

Look around at the (older) folks who started off their careers in engineering:

Some move into Management (money, unable/unwilling to keep up with technological advances, perceived prestige, etc.).

Some move into their own ventures (consultancies, businesses, etc.).

Some keep doing the same thing forever (every place I've worked has had at least one "old-timer" who is helplessly out of date with current technology -- hopefully, not in a position where he keeps the company's feet firmly planted in The Past).

Some keep performing at "subsistence level" and are retained solely out of inertia ("He's harmless").

Others become "idea people" -- keeping just enough abreast of technology to know what *should* be feasible, but not really competent to do the actual work.

etc.

There's a very different mindset involved in wanting to "get something (done) *right*" vs. "just move on".

Time to assemble my first set of cookie platters and get them out of here (so I can get on with the rest of my baking!)

Reply to
Don Y

Dunno. I suspect this was just an easy example for him to use to explain the concept -- one that virtually everyone could "understand".

I'm not sure what his goal/methodology was. If he was trying to "learn" on-the-fly or if this was part of some more fundamental aspect of the design. (This was almost 40 years ago and not something I was *interested* in, at the time.)

OTOH, it is this type of knowledge that a programmer can embed in his algorithm -- and that a compiler can't (necessarily) infer from an examination of sources (that *don't* contain these relationships).

Things like:

uint foo;

if (foo >= 0) ...

are relatively lame, by comparison. (Yet amusing to see how often developers write things like that!)

Reply to
Don Y

Never had any trouble with that. You have to frame issues in terms of risk, cost and capability.

Well, there ya go.

Nothing wrong with that.

The point being that you have a release process.

It can so long as you can get buy-in on the NRE for it.

:)

They are if you let 'em be rational. This is my point.

In that case, that *IS* rational. But in general, I've managed to work with people who had the same basic interest-alignment I had.

Yes.

Nope.

Yep.

The point of that is that it is manageable and the way to manage it is by balancing cost and risk. If a feature simply *HAS* to be there, then it's gonna need to be there.

Yep.

Of course it's a job. But that job is less fun when you're firefighting all the time.

Most companies have their feet firmly planted in the past - and for good reason. The "can't keep up" thing is always suspicious; I've never seen it in thirty years. Generally, new technology means a new project and those are, frankly, unusual.

If you wish to introduce new tech, you're better off bringing it in as a fait accompli.

It's harder to get it right if you're having to "perform" at the same time. "Performers" play to the audience.

--
Les Cargill
Reply to
Les Cargill

The effort it takes to create a new piece of software and the effort to read and understand what it does are very different. I am quite sure even someone totally unfamiliar with VPA would find it easier to read and understand what I have written than a poorly commented C source, even where C is his native language (and practically all of the C sources I have seen are poorly commented). English has evolved for centuries; it is a good language for expressing ideas. It has yet to be beaten, really.

Getting into the subtleties of how to use the tool chain is another task, and it takes more or less the same effort anyway, whichever tool. Getting familiar with the programming language itself (i.e. the non-comment part) will be necessary only if someone wants to write some new code in that language; making some changes etc. to something existing does not require getting really good at the language. Then the simpler it is - i.e. the lower the level - the easier it will be to grasp what and how to do.

Yes, which is why I opted for tools under my control. Nothing can match the efficiency you get by that. Nothing comes close really.

Understand - yes, rewrite it - no, why would this be needed. If the code they read is too old or has to be rewritten for some other reason there is no binding to any language, they can write it in whatever they opt for then. I have had to rewrite (or wanted to rewrite) very little of my few tens of megabytes of source I have written over the past 20 years.

But choosing a high level language only because one hopes it will survive the next 2-3 decades, so someone would find it easier to make some minor modifications, is just silly (and done all the time). Why would I restrain myself now and do 1/10th or less of what I can do in my lifetime, only to save a few days of work for someone a few decades later?

Oh I agree 100% that high level languages are more convenient than machine language for expressions. However expressions take < 1% of the code we write; and there is nothing stopping you from making a call to evaluate an expression from within practically any language.

But "most people" obviously would prefer something like Basic or sort of where one can put together some arithmetic and learn what he needs from the language within a day or so, of course.

The advantages of being good at a language begin to show up when you have to use it on a single task/project at least for a few months. This is when the (too) high level only gets in the way. With VPA you control the level at which you write yourself by defining the various levels, calls, objects etc. etc. Then in my case, having written the entire environment, you could argue I use a much higher level than a HLL of course :-). But the point is I do have any level I want at any line of code I write, which is only achievable if you have the lowest level available - and if you maintain your fitness at being good at using it.

Well yes, though I am not sure how much value this will add to the plain method of just putting some links/paths as text in the source. Will make things better readable at first glance of course but if someone will work on these sources the first glance is nothing I would be overly concerned with, what counts is that the information is there to be found.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI
------------------------------------------------------

Reply to
Dimiter_Popoff

I've worked for groups driven by Marketing, Engineering and Manufacturing. Each has its own "perversion" of "Rationality" wrt cost/benefit analysis.

If *you* receive the BENEFIT (e.g., increased sales to make your commissions/bonus better) and someone *else* bears the *cost* (or a disproportionate amount of that cost), then you've little to lose by asking for The Moon (unless your engineering staff are not up to the task and deliver a crappy "feature-rich" product).

Engineering-driven groups are probably the most "fun" (from a novelty perspective). But, can get caught up in their own cleverness -- doing something to prove it can be *done*, instead of because there is a market/need for it. They also tend to be the least aware of the end consumers' particular needs and capabilities (the pejorative: Designed by an Engineer).

Manufacturing-driven firms tend to be the most conservative. They tend to see everything in terms of all the "capital" invested in men and machines: "Why can't we keep doing what we've BEEN doing?"

Everyone sees the benefits and the costs from their own perspective. Worst scenario is incurring the costs with no benefit. Best is benefit with no *cost*. Reality lies somewhere between the extremes.

Marketing/Sales have a tangible reward in place (commissions, bonuses, etc.) so they want a product that they can sell to *ANY* set of requirements. What is the associated benefit to Engineering (and individual engineers) to undertake that (aside from success or failure of the business as a whole)? When was the last time you saw a bonus comparable to the one the Sales Guy picked up... for a product that *you* implemented??

[Of course, individual personalities can bias each of these environs...]

But this was the *exception*, not the rule! Typically, a design review just looks for flaws in a proposed *implementation*. Where is the OBJECTIVE analysis of *needs*? I.e., it falls on some individual's shoulders to take on that responsibility (as a consultant, that was *my* shoulders, invariably). [Not complaining. Just pointing out that clients typically hadn't done this sort of analysis *either*. So, it's not just "employers" who are the problem]

Anyone building any product has a release process. Even "software publishers". You can't get from Engineer's Desk to The Market without *some* mechanism in place. How disciplined and structured and accountable that process is can vary significantly.

As above, it boils down to who defines "rational". If *I* win and you lose, is that a rational or irrational choice?

Again, who defines "*has*"? Marketing tells you they *have to* have this particular feature (my previous story). Do you just take them at their word? Do you *challenge* them (as I did) and piss them off? Setting the stage for "Well, we lost that sale because Don convinced you NOT to include the feature that I *told* you we needed!" -- how do you disprove that allegation? Go *around* the Marketing folks and contact the customer directly? Gee, I wonder how *that* is going to be received when Marketing gets wind of it (from the customer!).

I've been lucky enough to have worked in several fields/markets doing leading (not bleeding) edge work. I can't speak to the *financial* successes of each of those firms but, from an Engineering perspective, the projects were exciting and "different" from other things happening in the market.

But, those tended to be Engineering-driven firms. Where the people making the final decisions knew what *was* feasible or *would be* feasible -- and just had to drag the other folks along for the ride.

Last platter of cookies to get out of here and I can take a breather for half a day... :-/

Reply to
Don Y
[%X]

One of the reasons why I suggest that Requirements Specifications should be the first item in the System Testing regime. Until you have Requirements Specifications that are fully testable and tested you have not got clear and unequivocal Requirements Specifications where each feature has been proven to be needed (and not just a whimsical fancy).

--
******************************************************************** 
Paul E. Bennett IEng MIET..... 
Forth based HIDECS Consultancy............. 
Mob: +44 (0)7811-639972 
Tel: +44 (0)1235-510979 
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk.. 
********************************************************************
Reply to
Paul E Bennett

But it's still a value judgement by "someone". What's "whimsy" and what's "necessary"?

E.g., one of my first products was a LORAN-C position plotter. It gave you a printed record of your "trip" (typically, on a boat).

One of the "requirements" from Marketing was a provision to "mark" the chart on demand. The rationale being it could be used to note the positions of lobster pots as you toss them overboard -- a long "pushbutton on a cord" so you can be "aft" and heave the pots overboard without having to keep yelling back to someone in the wheelhouse (where the plotter can be sheltered from the elements, fish glop, etc.)

In its simplest form, this was:

  lift pen
  move relative (-width/2, -height/2)
  drop pen
  move relative ( width,    height  )
  lift pen
  move relative (-width,    0       )
  drop pen
  move relative ( width,   -height  )
  lift pen
  move relative (-width/2,  height/2)
  drop pen

Owing to the simplistic nature of the implementation, you could actually do this as an ISR, of sorts (even though it took a sizeable fraction of a second -- you just pause the normal motor handling code)

Of course, if the plotter's scale is too high, these fixed size X's will effectively overlap each other. Should they be scaled to reflect the current plotting scale?

A marine research group might want to use the facility to track the progress of a pod of whales, school of dolphins, etc.

How do you (later, examining the hardcopy plot) figure out which order you dropped the pots in? (or, anything else you may have used this marking facility for) Should they be labeled with numbers? This suggests there may need to be a limit to the number of such labels (1 digit? 2 digits? 5 digits??) (you could similarly "draw" properly spaced digits by augmenting the "ISR" above, right?)

What happens if the button is pressed more than once in the time it takes to draw a single X?

What if the pen is up against one or more "limits" when this is activated?

[There are lots of other "features" that I could put on a similar list...]

Where do you draw the line? *Who* draws the line -- the Project Manager, Engineering, Marketing, The Engineer writing the code?

What's the cost of the feature? A flag to tell the motor handler to pause and, instead, invoke the "draw X" fixed routine (assuming it is not scaled or labeled in any way), a digital input and appropriate signal conditioning to prevent the button from being an excellent *antenna* effectively coupling the ship-to-shore radio into the CPU, and someplace to poll and debounce the button (often enough to ensure it isn't "missed") -- and, something to enable/disable this behavior (do you want to be able to make X's any time you press the button -- even if the plotter isn't actively plotting position??)

[That's the *minimum* cost for a hopefully "free feature"]
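
For the curious, the "someplace to poll and debounce the button" piece above is only a few lines -- something like this sketch (thresholds and names invented for illustration):

    #include <stdbool.h>

    #define STABLE_COUNT 5                 /* ~5 ticks of agreement = debounced */

    static volatile bool mark_requested;   /* motor handler pauses, draws the X  */
    static bool          marking_enabled;  /* only honor it while actively plotting */

    /* Called from the periodic tick, often enough that a press can't be missed. */
    void poll_mark_button(bool raw_pressed)
    {
        static bool     stable;            /* last debounced level               */
        static unsigned run;               /* consecutive samples != stable      */

        if (raw_pressed != stable) {
            if (++run >= STABLE_COUNT) {   /* level has genuinely changed        */
                stable = raw_pressed;
                run    = 0;
                if (stable && marking_enabled)
                    mark_requested = true; /* rising edge: request one 'X'       */
            }
        } else {
            run = 0;
        }
    }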

Keep in mind that we spent considerable time on this project "removing *bytes* (not KB) of ROM" to make things fit in the space available (because each new "free feature" kept nibbling at our meager hardware resources)

Reply to
Don Y

Was that a modification of an existing product?

Testing the requirements would have involved asking a great many questions that would have resolved all of those issues. It might even have highlighted other possible engineering directions. You covered some questions which I hope you fired back at Marketing and obtained sensible answers.

I know you don't deal with the high levels of Mission Criticality that I do, but engineers need to ask all sorts of questions about the requirements they are handed in order to ensure the best outcomes.

I sometimes find that a Task Analysis with role-play within a Requirements Review can help to flesh out the whimsical notions from the real requirements.

--
******************************************************************** 
Paul E. Bennett IEng MIET..... 
Forth based HIDECS Consultancy............. 
Mob: +44 (0)7811-639972 
Tel: +44 (0)1235-510979 
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk.. 
********************************************************************
Reply to
Paul E Bennett

Worse -- a product *going* into release!

As "engineers", we were isolated from Marketing. A potential "rationalization" for this sort of thinking was the period: the late 70's. Processors were *just* seeing use in commercial/consumer products. Part of the way we (engineers, in general -- not those of us at this firm) pitched the technology was that it was so much more flexible than "hardware" solutions (adding this sort of feature to a hardware-based product would have required adding something equivalent to a flying daughter-card with patches to foils in the "original" board -- or, a new layout/design).

I would estimate *weekly* we'd get some change request (never written!) followed by a comment to the effect of: "That should be easy, right? Just change a *bit* somewhere..."

The problem lies with where the power in the relationship lies. What recourse do you have if you're *told* to do something that you know is "wrong" (will add significantly to cost, schedule, unreliability, etc.)?

The power balance shifts when you move to a consultancy -- you can fall back on the language of your contract to decline a change ("We'll handle that later") *or* simply "no bid" the job.

I find it difficult to get "customers" (Marketing, clients, etc.) to focus on what they *want*, let alone *why* they want it. They tend to know what they *don't* want -- AFTER they see it! But, are largely incapable of abstract thought: "Imagine this device reified before you; how does it work?"

This seemed relatively consistent in my experiences in different application domains/market segments. E.g., medical devices, process control, navigation, consumer goods, etc. Anything that had to interact with people in some manner (people being "variables") caused this "fuzziness" in requirements.

I suspect this is one of the reasons why engineers are responsible for so many (clumsy) designs -- "You've got to be an Engineer to use this damn thing!"

Reply to
Don Y

On Saturday, December 13, 2014 4:43:51 PM UTC-5, snipped-for-privacy@downunder.com wrote: []

Some businesses I've seen do not consider that problem. Management can be VERY short sighted.

From the Engineering side, I think that where available special languages are appropriate. A simple example is SQL.

Well, it depends. Are the requirements and designs well documented?

FORTRAN is a functional language and should be fairly easy for a good programmer to learn. I'd rather hire a programmer that still wants to learn new things than one that picked up a set of languages in school and will not look outside that toolbox.

Lastly are the build tools still available (compiler/linker)? Is the OS still available?

This actually is a fairly simple Engineering-style decision: weigh the trade-offs given the facts for the specific case.

I think maintenance becomes harder over time for any system. Bug fixes and new features are added until the code collapses if some forward planning is not done.

Again it comes to the type of person you hire to maintain the code. If he/she is willing to learn, it may be easier. Consider a specific application with a version written in C and a version written in an exotic language that is tailored for the problem domain. You have Much more code to maintain in C since it must implement the features of that exotic language and then implement the application. Expressing the application in that exotic language can provide a clearer understanding of the problem being solved. So it only comes down to how hard is it to learn that exotic language. And I see your point, that it may be harder to hire that type of programmer willing to learn, but they are out there.

(actually this is one of the strengths that the FORTH guys chime about)

There is a hidden management issue here: management often prefers to pay low wages for a merely competent programmer rather than a better wage for a good programmer. There is also the issue of getting past HR. HR likes to filter resumes on simple checklists. Willing to learn is seldom one of those items.

So yes, there are a lot of hurdles to getting that maintenance programmer. But it is Management that puts those hurdles there, not the language.

Ed

Reply to
Ed Prochak

I believe you are right in part.

The first thing engineers get wrong is not to insist on a detailed set of product requirements.

The second is for engineers not to cost out how much an addition, or a change, will really take -- it is only when they do that marketing and others get to see the true cost of such unplanned and uncoordinated changes.

To be fair, if you're first to market it's not always known what the product features should be.

--
Mike Perkins 
Video Solutions Ltd 
www.videosolutions.ltd.uk
Reply to
Mike Perkins

Sure! Reading *existing* software you have the benefit of knowing "that" it works (or how it doesn't).

But how much of your claim is based on *your* level/style of commentary?

I've seen "heavily commented" pieces of code where the comments were incorrect (almost WORSE than having none at all) or inadequate (e.g., "add one", "divide by time", etc.). And, cases where there were exactly *zero* comments!

OTOH, I've seen pieces of code that are delightfully well documented.

When I wrote my 9-track tape driver, I prefaced the first line of code with several *pages* of commentary. Largely to explain the (archaic?) terminology applicable to such subsystems as well as the capabilities present in each of the components (controller, formatter, transport, etc.). Otherwise, someone reading the code might not understand why I was filling a buffer "backwards" (read reverse) or rewinding one transport *while* writing to another, etc.

The language doesn't dictate the quality of the commentary (unless it has no provisions for inserting comments amongst code!) but, rather, the individual creating it.

It also gives rise to lots of ambiguity! E.g., Les's comment, up-thread:

  - Only use "for" loops for integer-index loops.

I can read that as:

  - Don't use anything other than "for" loops for integer-index loops
  - Don't use "for" loops for anything other than integer-index loops

If a newscaster claims "the suspect was shot to death" (a common phrase on the News), how many shots were fired? Did someone stand over him and keep shooting UNTIL DEAD? Or, was a *single* shot fired that killed him ("shot dead")?

Perhaps the only ones who *try* to be precise with language are lawyers -- because meaning is the essence of their work (and we still see them argue about details of contracts in courts, etc.)

I don't agree. Often there are subtleties in a language that have a pronounced effect on how code works. E.g., "call by value" vs. "call by reference" semantics need not be explicitly differentiable in the language's syntax. You'd have to *know* how particular parameters are passed. E.g., in Limbo, strings are passed by value -- changing the string in a function has no effect on the original string! OTOH, *lists* and arrays (! i.e., a string is not a char array) are passed by reference.

Hidden constructors and anonymous, temporary objects.

Many of the legacy text-to-phoneme algorithms express patterns using a wildcard style syntax: "one or more vowels", "a front vowel", "a voiced consonant", etc. One of the earliest "well known" rulesets (NRL) was written in SNOBOL. So, "one or more vowels" is actually implemented using a *lazy* matching algorithm ("ARBNO"). I.e., if we use '@' to represent one or more vowels, then:

  @     matches  A, AA, AEO, OIUA, etc.
  @bC   matches  AbC, AAbC, AEObC, OIUAbC, etc.
  @Ab   matches  AAb

Most folks implementing this in *C* use GREEDY matches -- and never "back out" on failure. So, that last example fails -- the second 'A' (in AAb) gets sucked into the '@' causing the literal 'A' in the template (@Ab) to not agree with the input string.
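
A toy illustration of the difference (this is *not* the NRL code, just a sketch of the two strategies for the single pattern @Ab):

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    static bool is_vowel(char c)
    {
        return c != '\0' && strchr("AEIOU", c) != NULL;
    }

    /* Greedy: '@' swallows the longest vowel run and never gives any back. */
    static bool match_greedy_Ab(const char *s)       /* pattern: @Ab */
    {
        size_t i = 0;
        while (is_vowel(s[i])) i++;                  /* eats the 'A' it needs later */
        if (i == 0) return false;                    /* '@' wants at least one vowel */
        return s[i] == 'A' && s[i + 1] == 'b';       /* fails on "AAb" */
    }

    /* Backtracking (what ARBNO amounts to): try every split point. */
    static bool match_backtrack_Ab(const char *s)    /* pattern: @Ab */
    {
        for (size_t i = 1; is_vowel(s[i - 1]); i++)
            if (s[i] == 'A' && s[i + 1] == 'b')
                return true;                         /* succeeds on "AAb" */
        return false;
    }

match_greedy_Ab("AAb") returns false; match_backtrack_Ab("AAb") returns true -- exactly the mismatch described above.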

I.e., people conversant in one language READ behaviors INTO new languages that aren't necessarily there!

I suspect most people (employers) don't want to take on the burden and cost of having to develop and maintain their own toolchains. Firms that invest in their own tools have to implicitly assume the need to train all new hires in the use of those tools (in addition to their existing product software base -- libraries, etc.).

E.g., any time I create an ASL, I do so *only* as an expedient to "lots of (typo-prone) typing". It allows the essence of what I am saying to precipitate out of the text without all the syntactic cluttering that the native language imposes. (e.g., my state machine example, elsewhere)

But *you* are the sole developer and have control over the product in its entirety. What happens when you opt to bring someone on-board to assist? Or, when you are no longer capable (interested?) in maintaining existing/new products? Are your customers left with a dead-end product (because your codebase has limited value to someone wanting to expand upon it -- other than a competitor who simply wants to *kill* it off)?

Because other people (organizations) have other interests above and beyond those of an individual! E.g., if your work was *for* some other business (i.e., they *own* your output), they would want to bind your services to them "indefinitely" -- and, if wise, take steps to ensure they had a "hot backup" for you (and a way to bind that individual's services as well!).

Or, they can opt for something more universally used and avail themselves of more potential candidates -- losing *you* would be an inconvenience, but not a death knell for their product or their organization!

Most of what I'm currently doing will be released as open source. As such, *I* won't be the one looking at it, later. The less "main stream" the tools I choose, the higher the bar for others to embrace my efforts and build on them. Since I am not "expert" in many of the technologies that I am using, others need to be able to *easily* step in and replace entire subsystems as the appropriate technology advances, is refined, etc. They can benefit from the structure I have imposed on things and focus on a particular aspect instead of having to reinvent the wheel, cart and *horse*!

There's nothing to stop me from writing my code in HEX, either! We add abstraction to improve productivity, readability, reduce ambiguity, etc. There is *one* way to parse: a + b * c % d / e - f yet we parenthesize to enforce a *particular* way of parsing it. I could express all of these operations as function calls -- but that would be even less clear to a reader; undoubtedly, he would read through the code and "rewrite" the operations being performed in this sort of notation.

Because he is more familiar with it!

My gesture recognizer uses fixed binary point (Q) arithmetic. But, as it is written in C (and not C++), I can't overload arithmetic operators to make what I am doing more obvious. The code is littered with calls to "add()", "sub()", "mul()", etc. Even constants are "obfuscated" because they have to be converted into the corresponding values in that representation. As a result, much of my commentary is a rewrite of the code using more conventional notation! This is rife for error if small changes are made and not reflected in every applicable comment.
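
To give a flavor of what that looks like (a sketch assuming Q16.16; my actual helpers differ, and the names here are only illustrative):

    #include <stdint.h>

    typedef int32_t q16;                     /* Q16.16: 16 integer, 16 fraction bits */

    #define Q_ONE            (1 << 16)
    #define Q_FROM_INT(n)    ((q16)((n) * Q_ONE))
    #define Q_FROM_DBL(x)    ((q16)((x) * (double)Q_ONE))

    static inline q16 q_add(q16 a, q16 b) { return a + b; }
    static inline q16 q_sub(q16 a, q16 b) { return a - b; }

    /* Multiplication needs a wider intermediate; then the point is shifted back. */
    static inline q16 q_mul(q16 a, q16 b)
    {
        return (q16)(((int64_t)a * b) >> 16);
    }

    /* What the comment reads as  y = 0.5*x + 0.25  becomes: */
    static q16 blend(q16 x)
    {
        return q_add(q_mul(Q_FROM_DBL(0.5), x), Q_FROM_DBL(0.25));
    }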

How do you express those links? URN's? What if the resource isn't accessible at the time?

Would you be willing to move your commentary into another document?

The beauty of having everything integrated into a single document is that it's *with* the sources. Just as you scroll up to re-read a description of the block of code you are examining, you could look up and see an illustration of the data structure that the code is manipulating. Or, examine a graph of the role of a particular parameter in a particular algorithm.

People lose manuals for items they purchase, all the time. I've had clients approach me because they'd misplaced the *sources* to the products on which their livelihood depended! (I turned down another such request just a few months ago -- I have no desire to reverse engineer yet another project! There's very little to LEARN with that sort of task :< )

As I *can't* embed everything pertinent to the code *in* the code, I've had to take extra measures to ensure it is in a form that is reasonably portable and *appears* "worth preserving" (e.g., scribbles on a napkin don't qualify! :> )

Unfortunately, the wide range of media formats that could potentially contain information worth preserving makes damn near every "container" impractical. So far, the best compromise is PDF's as containers (hoping they continue to evolve to support even more objects!) with the code as "attachments".

As you're closer to the holiday (geographically), best wishes to you and L! Keep warm (together?? ;-)

Reply to
Don Y

"Insist?" How do you do that -- stomp your feet and threaten to hold your breath until you turn blue? :> Who defines "detailed"? I've seen LENGTHY requirements documents that didn't "define" anything! If you start questioning things, you push those folks WHO DON'T KNOW WHAT THEY WANT into admitting that. Or, worse, fearing that they LOOK INCOMPETENT!

IME, that doesn't work either. *You* don't have the decision making authority. And, those that do will make arbitrary decisions, *appearing* to acknowledge your data -- then complaining later when those costs actually *do* materialize!

I prepared a detailed estimate for an employer some years ago. From that, prepared a detailed timeline (week by week). I submitted it at the start of the project. My employer cut it IN HALF when pricing the project. Then, many months later, started hounding me due to my lack of progress.

I retrieved my initial schedule from my desk drawer and showed him how I was *exactly* on target: "This is week X, you can see I am working on FOO... as indicated in the schedule!" (If you don't trust my abilities, then why did you hire me?)

You might not be able to "know", with certainty, but you can "pretend" the device exists and *imagine* using it. Not just a "cursory" examination but a full fledged "let's make a PROJECT out of imagining this product exists".

E.g., there was an early "pocket organizer" that used a non-qwerty, non-Dvorak keyboard layout: instead, the keys were arranged in alphabetical order. Even a tiny amount of "play acting" would have shown that this was a bad choice. Anyone used to a "real" keyboard would be frustrated by it. And, folks who had NEVER experienced a real keyboard would be no better off searching for a particular letter (because it wasn't a single linear arrangement of keys -- 'J' might be right *below* 'A' instead of nine keys to the right).

The LORAN plotter I mentioned (here?) used a membrane keypad (new at that time). *But*, the keypad was INCREDIBLY stiff! I commented about this as it was noticeably difficult for me to hammer away at the buttons as I tested the device. My boss's reply: "*You* aren't our intended user. Rather, we're dealing with fishermen with fish guts on their hands and hammers for fists... we're more worried about the structural strength of the case holding up to this sort of pounding!"

I recall being shown the prototype of an early "electronic tape rule". A small LCD display in the top of the case to indicate the current measurement. And, a little button that *flipped* the digits upside down (think left-handed vs. right-handed use). I.e., someone decided that an electronic version of this tool that had been in common use for DECADES *needed* the ability to read the scale regardless of orientation despite the fact that most (*ALL* that I've seen or owned!) can only be read in *one* orientation!

Reply to
Don Y
