"Design investments" for the Future

Hi,

While this may sound like a "musing", it obviously affects each of us (and our successors) to varying degrees...

I'm interested in *opinions* regarding "what can be done" to improve the sorts of "systems" designed here (RT & embedded).

I'm deliberately vague as to what I mean by "improve"...

- better education

- faster processors

- better languages

- auto-verification methodologies

- magical new technology etc.

Granted, the sorts of products/apps that are created run the gamut from "three-day weekend" projects to massive distributed systems. And, have product lifespans from "many months" to "many decades".

The sorts of questions this should evoke are:

What sorts of things would allow products to be brought to market more quickly?

What sorts of things would allow a wider range of applications to be addressed?

What sorts of things would allow "more correct" implementations?

What sorts of things would facilitate change over time?

etc.

Much of this depends on the market you address and the sorts of products that fit therein. But, some also depends on where you see technology "going". And, the impediments that have hindered it getting there, so far.

What could you do *today* to make *tomorrow* more "productive"?

Thx,

--don

Reply to
Don Y

On Mon, 28 Oct 2013 01:47:11 +0100, Don Y wrote:

While writing my reply below, I discovered that your question above is a business-type question. For commercial "systems", the market needs ultimately decide where time&energy can be spent to "improve" things and get a better market position. For non-commercial systems, what needs to be done heavily depends on the project goals.

The above is a list of potential improvement strategies. But, if you don't know what aspect of your process you want to improve, or why, and cannot measure improvement, then no strategy is adequate. What do you mean by "improve", as in what should the result be?

How much quicker? Why? At what cost? What often helps: proper requirements engineering and system engineering so that subtasks can be defined and distributed over multiple engineers.

What large corporations do to address this: they buy other companies. Otherwise: proper requirements management so that common requirements can be managed across multiple projects. Proper system engineering that delivers useful and loosely-coupled subsystems.

How do you know whether correctness is lacking? How do you know how much time&energy to spend on this problem? How much correctness is desirable? What often helps: an appropriately tuned coding standard; a sane review process that leverages automatic code checkers; code coverage checking; tool-assisted module testing, unit testing and test generation.

Nothing. Things *will* change as time progresses. ;)

You want to keep up? Find a visionary figure.

All in due time. Anyway, can you name the exact impediments of every "technology" of the past 100 years?

Lower peoples' demands for cheap "stuff", continuous entertainment and quality of life. ;)

--
(Remove the obvious prefix to reply privately.) 
Made with Opera's e-mail program: http://www.opera.com/mail/
Reply to
Boudewijn Dijkstra

Hi Boudewijn,

I've taken the liberty to rearrange your comments so I can elide much of my original text.

I was deliberately trying to be vague -- to let folks *decide* if the "improvements" had to do with business issues or technological ones. I.e., so their answers would reflect their own personal situations, assessments, visions, etc. -- not something spoon fed to them by The Powers That Be.

Again, my point was to let respondents decide what's important in *their* universe -- not to dictate *my* priorities and argue with them over whether they are the *right* priorities *or* whether my proposed solutions are the *best* for those priorities.

You go to work every day. What makes your goal "harder to attain" than you think it needs to be?

E.g., when I got started in this business, I could "turn the crank" exactly twice in an 8 hour shift. So, the sort of incremental development that appears commonplace today would mean I'd *never* finish: make *a* change (or group of changes), wait 4 hours, then decide what *next* set of changes to make (including those to fix problems in the *previous* changes).

I had to track which *bits* in each byte were "in use" at each point in the code -- and, what their intended meanings were.

Now, I can realistically view my code without having to resort to hardcopy. I can 'make world' in far less time. And, even bool's are long-aligned, etc. I spend less of my time fighting with the tools and more time on the design/algorithm.

These all appear to be "people/process" issues. Getting folks who *can* write good specs to actually *write* those specs. Getting folks who *can* keep their fingers in different projects and having them extract the related core technologies. Getting folks to adopt good practices and enforcing them. Getting folks with "vision". etc.

I concede all of the above. But, don't see any way to make those sorts of things happen. I.e., creating the right environment for them (management style) and then *hoping* to find the right people to populate it with (at all levels).

My list mimics yours -- but from the technological side. I.e., creating *mechanisms* to allow a "decomposed" implementation (vs. a monolithic one). A structure that forces lots of "interfaces" which can be more formally defined -- and *verified*. An environment where faults can be localized to make identification easier and reduce the risk of propagation (which can cause you to go chasing the wrong rabbit down the wrong rabbit hole!)

This, IMO, makes it easier to define subtasks (the decomposed parts of the whole), formally specify their behavior, *verify* their behavior, allow for their "inheritance"/sharing among projects, etc.

"Libraries" try to do this in a naive way. But, they (typically) can't be constrained in their operation. E.g., *I* could have a library "routine" mangle *your* data. And, there's very little you could do to *prove* this -- short of actually catching me red-handed! The library codifies algorithms but doesn't do anything about their *application*! You can design a perfect screwdriver but can't prevent it from being used as a *hammer*... ON FINE CHINA!

In my field, the improvements (which reveal some of the impediments):

- cost of components (e.g., $50/2KB of EPROM is unimaginable today)

- packaging (imagine what the 68000 "aircraft carrier" would look like with the pin counts on some of today's devices)

- speed (how many solutions would be impractical if we were still dealing with 10us instruction times?)

- communications (allowing physical processors to easily and efficiently "share data" and workloads)

- desktop advances (when I started, I was able to "turn the crank" exactly twice in an 8 hour day; symbolic debugger -- what's that?)

OTOH, the "people driven" issues haven't changed. It still takes a long time to write a detailed specification. It still takes a long time to thoroughly test an implementation against a specification; it's still tedious to partition a project into "low connectivity" subtasks; etc.

And, worst of all, it seems *harder* to find folks with the required focus *and* skillset. (One of my favorite pastimes is to proofread specs from clients: "So, in item 25A1c, if the user decides to reply with 'maybe' instead of 'yes' or 'no', is it OK for the device to spontaneously explode?" "Huh? Of course not!" "Oh, sorry. I figured that anything that you had FAILED to specify could be interpreted in whatever way makes it easiest/CHEAPEST for me to design the device... So, when presented with a 'maybe' response, should I interpret it as 'yes', 'no' or prompt the user to try again?")

People seem to have an infinite capacity to be entertained. Get rid of all the toys and they'll find a way to entertain themselves watching blades of grass grow! :(

As to "quality of life", I think I've discovered that "less (stuph) is more". Seems like the more you have, the more time you spend:

- pissing and moaning about when it breaks (or "disappoints")

- fixing it

- hiring someone else to fix it

- shopping to find a suitable replacement

- second-guessing your replacement purchase decision ("I wonder when *this* one will break?")

Ah, but The Joneses need to be suitably impressed, right? :>

Reply to
Don Y

On Mon, 28 Oct 2013 18:44:16 +0100, Don Y wrote:

In any case your rearrangement reveals your interpretation of my intent. ;)

Perhaps this confirms my narrow vision, but: there are only business issues! You can't improve a design using technology that isn't ready yet. It is a business decision to choose between the pros and cons of different technologies. It is a business decision to use a new technology that's promising but risky. It doesn't make sense to make a technological choice without a business rationale.

Are you promoting workers to think for themselves and challenge The Powers That Be? Are you trying to incite chaos?

It sounds like you are trying to assess what things folks are struggling with nowadays.

That question is separate from your original question of how to improve the designed system.

Anyway for me, it is hidden assumptions and the difficulties involving transfer of knowledge and (human) information. Then again, if my goals were easy to attain, then my position would be redundant.

Since "you can't always get what you want", there's by definition no guaranteed way to make people/process things actually happen, short of breeding your own race of skilled & obedient minions.

That's generally up to higher management and HR, I have no expertise there.

Ah, you were talking about impediments to creating cheap, small, fast, connected systems, not the impediments to the technologies themselves breaking through. Note that individual engineers who are limited to using existing tools & technologies can't "do" anything to make the above improvements happen.

People driven issues have changed immensely! Hierarchy is much less rigid now. Writing specifications no longer takes rooms full of typists. Calculating stuff no longer takes rooms full of calculators. Designing stuff no longer takes huge drawing tables. All these people in their isolated departments are a huge burden to communication, with or without telephone or e-mail. Things may take a similar amount of time, but the same amount of work is performed by a lot less people and with process-aware information systems, tracking a mistake to its source is a lot easier.

Does that really surprise you?

--
(Remove the obvious prefix to reply privately.) 
Made with Opera's e-mail program: http://www.opera.com/mail/
Reply to
Boudewijn Dijkstra

I disagree.

First, you're assuming all efforts are based on some commercial outcome. This ignores hobbies, research, etc. One doesn't always do things with a "sale" in mind.

Secondly, "business" tends to have a very short horizon. Folks don't make decisions "for the long run" but, rather, "for the next

9-12 months". There's very little incentive to "invest" in a decision if you won't *see* "profitable" results in the very near term. Hard to imagine "business" deciding to build a bridge, dig a tunnel, etc. (there *are* businesses that would do this; but they aren't the norm -- esp for an industry like "ours")

Third, there are times when it may make sense to "invest" in a particular technology even if it isn't yet "ready for prime time". Then, hope you (or others) can improve that technology so that you can leverage your investment.

E.g., I am a huge fan of "table driven code", state machines, etc. Craft an algorithm that you can "drive" and then just plug in the right data to make it do what you want (isn't that what a CPU is?). For me, this extracts the essence of the "solution" embodied in an implementation from all the "language syntax" that otherwise clutters up a more traditional implementation -- all the "meaning" is embedded in the table's contents!
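To make that concrete, here's a minimal sketch of what I mean (the states, events and action names are made up purely for illustration): the "engine" below never changes; all of the behavior lives in the const table, which could just as easily be generated, placed in ROM, or swapped out.

#include <stdio.h>

/* A minimal sketch of a table-driven state machine: all of the
 * "meaning" lives in the table; the code that walks it never changes. */

enum state { IDLE, MOVING, FAULT, NUM_STATES };
enum event { EV_START, EV_STOP, EV_LIMIT, NUM_EVENTS };

typedef void (*action_fn)(void);

static void motor_on(void)  { puts("MotorOn");  }
static void motor_off(void) { puts("MotorOff"); }
static void alarm(void)     { puts("Alarm");    }
static void ignore(void)    { /* no-op */       }

struct transition {
    enum state next;
    action_fn  action;
};

/* The entire behavior of the machine is data, not code. */
static const struct transition table[NUM_STATES][NUM_EVENTS] = {
    /*            EV_START            EV_STOP              EV_LIMIT          */
    /* IDLE   */ {{MOVING, motor_on}, {IDLE,   ignore},    {IDLE,  ignore}},
    /* MOVING */ {{MOVING, ignore},   {IDLE,   motor_off}, {FAULT, alarm}},
    /* FAULT  */ {{FAULT,  ignore},   {IDLE,   motor_off}, {FAULT, ignore}},
};

static enum state current = IDLE;

static void dispatch(enum event ev)
{
    const struct transition *t = &table[current][ev];
    t->action();
    current = t->next;
}

int main(void)
{
    dispatch(EV_START);   /* MotorOn  */
    dispatch(EV_LIMIT);   /* Alarm    */
    dispatch(EV_STOP);    /* MotorOff */
    return 0;
}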

Likewise, much "data" is structured in nature. Files are rarely pseudo-random collections of bytes but, more often, sets of

*records* having individual members, etc.

With these observations in mind, I opted to include a full-fledged RDBMS in some of my current designs. A *very* "heavy" implementation considering I could "hand-craft" individual solutions to each of these "table" instances (in the code, in the parsers for each file type, etc.). But, it promotes the concept of "tables" and "relations" to first-class entities instead of just aspects of a particular implementation ("I'll make a const array, here, and use it in this particular way" "I'll create the file with this fprintf() and, later, read it with this complementary fscanf()...")
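For instance, a minimal libpq sketch of what I mean by promoting the table to a first-class entity instead of a const array plus matching fprintf()/fscanf() (the connection string and the motion_profiles table are hypothetical; you'd link with -lpq):

#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("host=localhost dbname=control");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return EXIT_FAILURE;
    }

    /* The schema, constraints and relations live in the database,
     * not in whichever piece of code happened to write the file. */
    PGresult *res = PQexec(conn,
        "SELECT name, max_speed FROM motion_profiles ORDER BY name");
    if (PQresultStatus(res) != PGRES_TUPLES_OK) {
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
    } else {
        for (int i = 0; i < PQntuples(res); i++)
            printf("%s: %s\n", PQgetvalue(res, i, 0), PQgetvalue(res, i, 1));
    }

    PQclear(res);
    PQfinish(conn);
    return 0;
}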

At some future date, I'll (hopefully) be able to upgrade the RDBMS itself for higher performance, smaller footprint, greater reliability, etc. But, I won't let its current "cost" deter me from applying that technology to my applications ("Well, maybe we can look into that for our *next* project....")

If two of us agree, one of us is "unnecessary" ;-) I want to know what people doing the real *work*, think. Not what their *bosses* think. Long ago, I learned that to know the real problems, ask the real *workers*!

Years ago, I worked at a place where there was a standing argument of "hex" vs. "octal". The octal crowd would cite how easy it was to synthesize an opcode "in your head" using the octal format (I guess they were unable to synthesize in octal and then convert octal to *hex*!). And, they took this even further -- advocating "split octal" instead of true octal (i.e., 0xFFFF = 377 377). The hex folks would complain about how "unique" this practice was, how new employees had to "unlearn" hex, more characters to type, etc.

This friendly "argument" would persist indefinitely. I just shook my head and said, "What's wrong with 'MotorOn' and 'MoveLeft'? Why should I remember some insane scheme for encoding motion commands in a byte??"

(Ah, but that would mean investing a kilobuck in a tool! Much better BUSINESS DECISION to save the money and let folks keep "hand assembling" in their heads! :< )

Yes. But only from the standpoint of "their job responsibilities". I don't care about personalities, business practices, dress codes, etc. Just things that you (they?) could do (or, wish having had done *for* them!) to "improve" their product/work-experience/etc.

E.g., as I commented about my early experiences, being able to crank out a new build *4* times a day (instead of twice) would have made a huge improvement in my productivity: "a faster development system", etc.

The goal (IMO) is that "designed system".

So, my comment about exposing more interfaces helps. If the interfaces are "simpler" (smaller APIs), then they are easier to specify completely. I.e., the Standard C Library is significantly harder to specify in complete detail (in an unambiguous fashion that doesn't leave "gotchas" for neophytes) than, for example, the string handling functions subset thereof. ("Hmmm... what happens if the arguments to strcat() overlap?")
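To illustrate that particular gotcha: the Standard leaves overlapping strcat() arguments undefined, which is exactly the sort of dark corner a small, tightly specified API can rule out. A minimal sketch (the buffer contents are arbitrary):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[32] = "abcdef";

    /* Undefined behavior: source and destination overlap.  On many
     * implementations the copy overwrites the source's terminator
     * before it is read, so strcat() runs off the end of buf. */
    /* strcat(buf, buf + 3); */

    /* A well-specified alternative: copy through a separate buffer. */
    char tmp[32];
    strcpy(tmp, buf + 3);      /* "def"       */
    strcat(buf, tmp);          /* "abcdefdef" */
    printf("%s\n", buf);
    return 0;
}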

These all affected how readily the technologies were embraced. It took some marketeer's observation that EPROMs were being used as ROMs to decide, "Hey, let's ditch the expensive, ceramic, windowed package and put the same die in a cheap plastic package and sell it for much less!" Or, replace the aircraft carrier with a PLCC... eventually BGA's, etc. to drive package costs down (for a given pin count).

When I started work, 6 of us shared a single development station. You spent a *lot* of time staring at printouts and marking them up so you could most effectively use your next "time slot". Now, I routinely have 4-6 machines doing different things at the same time -- let them wait for me instead of the other way around!

But you can advocate for different tools (isn't that part of what I'm asking?). And, *build* certain tools yourself. Before FLASH came along, we used to build static RAM modules to plug into EPROM sockets ("EPROM emulator") to save the hassle of burning and erasing EPROMs all day long. Before ICE's, we wrote debuggers that ran *in* our target code so we could examine working systems without "trace" or "breakpoint" hardware.

Yet you don't *see* this in anything other than big companies! When was the last time you saw a spec that was anything more than a "marketing feature wish-list"? I.e., take your spec, mail it to firm W on continent X to have it implemented. Then, take the resulting implementation and that same spec and mail it to firm Y on continent Z to have it tested/verified. No one gets to talk to anyone until the process is over. And, each of W and Y have high priced lawyers ready to parse every single word in your spec as they sue for payment.

("Well, you didn't *say* it had to produce a result in 5 seconds or less. We chose an implementation that was easier, for us, but comes up with the same answer -- in 4 weeks!")

No. But it goes to the issue of making "smaller mouthfuls" so it's easier for these people to successfully perform their responsibilities. I.e., a lot easier for me to get someone to fully document the interface to the string functions than it would be to have that same person document the entire Standard Library (it's not just a matter of scale but, also, of *focus*).

I.e., having separate memory domains for "tasks" keeps folks honest. *I* don't have to prove you are the reason for "my" misbehavior (because you wrote all over some private data of mine). Instead, you try to step out of bounds and the system bitch-slaps *you* ("Hey, that's not *my* fault!")
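A minimal Linux/BSD sketch of that mechanism at the page level (a real design would give each task its own address space; the data here is hypothetical):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long pagesz = sysconf(_SC_PAGESIZE);
    char *private_data = mmap(NULL, (size_t)pagesz, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (private_data == MAP_FAILED)
        return 1;
    strcpy(private_data, "my private state");

    /* Hand the page out read-only: any attempt to scribble on it now
     * dies loudly (SIGSEGV) at the offender, instead of silently
     * corrupting *my* data and sending me down the wrong rabbit hole. */
    mprotect(private_data, (size_t)pagesz, PROT_READ);

    printf("%s\n", private_data);   /* reading is still fine           */
    /* private_data[0] = 'X'; */    /* uncomment: trapped by the MMU   */
    return 0;
}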

Some folks shouldn't drive a stick -- they have a hard enough time dealing with steering, accelerator and brake (let alone radio, telephone, etc.!).

My thinking is to put high walls on each sandbox -- to keep the sand *in* the box and prevent it from spreading to places it shouldn't. At the same time, make it obvious what mess you might be creating when you drag a canteen of water in there with you! :>

Reply to
Don Y

Make better stuff cheaper.

The problem is the pervasiveness of the "throw away" mentality: when the cost of an item is below some [ad hoc] threshold, too many people will simply buy a new one rather than complain.

Warranty repairs eat profits - new sales make profits. The incentive is for manufacturers to cheapen products below the threshold where 80+% of purchasers may be expected to simply throw away a defective unit.

George

Reply to
George Neuner

I'd settle for better stuff at (realistically) *higher* prices!

The "bottom feeding" that seems to exist nowadays means it isn't practical to buy "quality" products -- their prices are deliberately inflated well beyond "reasonable". The "sensible middle" has disappeared (i.e., hold onto "old" items that are still in working order -- or, that can be maintained in that state!)

One of our DVD players shit the bed today. I won't waste time researching the "best" -- quality, features, value-for-money, etc. Instead, I'll wander down that aisle tomorrow when I do our weekly Costco shopping and toss one in the cart. If I pay *double*, will it last twice as long?? (doubtful)

What's particularly amusing is that *they* don't set that threshold but, rather, the *vendors* do! People tend to have no inherent idea what something *should* be "worth" (to them). Instead, they take their cues from the prices that vendors attach to products.

People's expectations have so thoroughly been manipulated that they *expect* to discard items. It's as if this gives them an "excuse" to get something new more often!

I recycled a *barrel* full of TESTED 160G and 250G disk drives today "for their metal content" (a few cents per pound!). "No one uses PATA anymore" (uh, *really*??) And SATA's less than 160G aren't even *tested* as they aren't worth the time... [Sheesh, and my first PC had a 60M drive -- which I thought was *huge* (compared to the 1.4MB floppies on the machine before that!)]
Reply to
Don Y

In my first reply I made a remark about non-commercial systems. Perhaps replace "business" with "project goals".

What's your point here? That you really don't want respondents to decide what's important in *their* universe, if their horizon isn't long enough or whatnot?

Also, I don't agree with your 9-12 months. What I see is lots of businesses that select people and tools for the coming 12-umpteen months, simply because their product cycle is so long. In fierce industries, the profitable result of "making it to the market in time" is "still being in business".

Indeed, but then the system improvement doesn't directly come from your initial design choice, but from later advances of the chosen technology.

What's wrong with the current offering of COTS Embedded RDBMS products?

Let it be noted that I am not a proponent of short-sighted decisions like that.

This was not clear to me when you started this thread. It is still not clear to me what benefit you would have from folks' answers or what benefit folks could have from providing answers.

For people like me who are not directly designing systems, but who do still benefit from systems being designed (well), that goal is separate.

Well-understood well-behaved interfaces are a key point to system design in general.

For library interfaces, it is important to distinguish between well-defined, implementation-defined and unspecified use. For code review, it helps to have all actual uses in the "well-defined" areas.

Indeed.

I've had various roles in various embedded systems project teams in variously sized companies in various industries in various countries, but I've never worked with a "spec" as you describe. Usually there is at least some talking and it is my opinion that a good amount of feedback keeps everybody happy except maybe the lawyers.

Have you ever heard of requirements impact analysis, use case analysis, system-level simulation? Especially the simulation parts marketroids can comprehend.

Also, when you have 10Mloc to write and 2 years to write it, then "smaller mouthfuls" is a dire necessity. As project size increases, you'll have to lower the bar for project members because there simply aren't enough highly skilled people.

The OpenBSD project is somewhat active in this philosophy, with e.g.

- randomization of process ID's, TCP port numbers, library load order, pointer addresses (malloc, mmap)

- stack smashing protection

- prohibit execution from memory pages that have write access (W^X)

- minimize time spent with elevated process privileges

These are obviously there for the sake of security, but most of the measures help to expose bugs as well. Also, they put high emphasis on proper interface documentation. ;)
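To illustrate the stack smashing protection item, a minimal sketch of the kind of bug it catches (the function and string are made up; compile with e.g. -fstack-protector-strong):

#include <string.h>
#include <stdio.h>

/* The deliberately-too-long copy below is detected at function return
 * ("stack smashing detected") instead of silently corrupting the
 * return address. */
static void copy_name(const char *src)
{
    char buf[8];
    strcpy(buf, src);          /* no bounds check: classic overrun */
    printf("%s\n", buf);
}

int main(void)
{
    copy_name("this string is far longer than eight bytes");
    return 0;
}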

--
(Remove the obvious prefix to reply privately.) 
Made with Opera's e-mail program: http://www.opera.com/mail/
Reply to
Boudewijn Dijkstra

I don't really know how much this has disappeared.

I live in the Linux world and over here, "repurposing old hardware" together with "compile once, use forever" is a common pastime. I also live in the Balkans and here too "repurposing old hardware" together with "use for a decade or more" is perfectly normal.

I suppose you are referring to the iGarbage mania when you talk about this. It is true that people, especially in the West (and the East), are behaving as zombies with regards to such devices. But it is really more complicated than that.

First, people these days don't actually *BUY* iGadgets. Instead, they get them from telecoms. They sign some contract with their own blood and in return they get obligatory monthly payments, a new phone and a few other bells and whistles. But what they really want, IMO, is the contract and the telecommunication the contract enables. The phone is just a gimmick. Of the many people I have seen with smartphones over the years, I don't remember that many of them using the phones for anything more than telecommunicating (and playing games). Either with other people, the bookies or Wikipedia. In fact, I haven't seen that many people with computers use them for anything more than telecommunicating, playing games, and (rarely) writing and printing a Word document.

So, I would say that that is the need for that particular market: the mass computer market. (Note that there is at least one other computer market: the geek/nerd market, much older than the mass market and, of course, saturated since the days of the Commodore 64 and ZX Spectrum.) And within that need of the mass computer market, the only thing that determines actual, FUNDAMENTAL, value is the ability of a device to telecommunicate. All devices capable of that are, I believe, interchangeable and people do not differentiate between them.

So it's easy for the end-user to toss one device into the trash and buy another because, as far as he/she is concerned, there is no marginal difference in the utility of the two devices. AND (this is very important) THERE IS ENOUGH CHEAP CREDIT TO PAY FOR THE NEW iCRAP!! Really - there is no way to overstate that sentence. If there were no "plans" (all of them credit-fueled, of course), there would be no $1 phones and the users would be *users*, not *consumers*.

Strongly disagree. See above.

IMO, all devices currently on the market have the same fundamental value: they can telecommunicate. Everything, literally everything else is there just for show and does not compute into the base equation of "worth of device".

Discarding devices is normal during a technological revolution. Since a revolution always lasts several decades, people's expectations get aligned with what they see and do - namely, they discard the lesser device and buy a greater device. This is especially true of people who were born during the revolution and therefore have no knowledge of any other ways.

But all revolutions eventually complete. And once a technological revolution completes, the main driver of device turnover - better devices becoming available - disappears and people revert to not discarding devices.

What we see today is not normal and exists only because our system is in a phase transition. This phase transition began in earnest during the mid-19th century and will probably last for a few more decades, maybe a century or two, but eventually it too will run out of steam, die down and end.

An illustrative question: what significant advancements have been made over the last 50 years in the field of (rotary) electromotor design? How about organic chemistry? What new physical processes have been perfected since 1960 that made huge strides in what is possible to do with a chemical reactor? Sure, there have been advancements in analyzing chemical reactions, new molecules were synthesized, but I can not think of anything substantial in the physical domain that entered the market since 1960.

Same goes for computers. What is your *actual* need for storage? I have a 500GB HDD and the only reason I lack storage space on it is because I saved hundreds of gigabytes of source code repositories on it. Outside of HD video, can you think of a legitimate reason to fill up 500GB? And with HD video, how much do you really need? 1TB? 2? 10? Why have a 10TB HDD when you can store the exact same thing on a stack of DVDs? Like you really *need* all those videos to be instantly accessible? Not to mention that DVDs are: (1) resistant to EMP, (2) significantly longer lasting, (3) redundant and (4) portable.

How about network bandwidth? How much do you really NEED and how much is just you being a d-ck? There are firms out there that offer 1Mbit/s (or more!) for a few dozen dollars (maybe not in USA or EU, but there are countries where these businesses operate). Do we really need that? I myself am more than well served by 150kbit/s yet the lowest offering on the market is 250kbit/s (I think, and am too lazy to look up unless it's important that I do).

My point with these examples is not to point out the futility of progress or whatever, it is to point out that technology *fundamentally* exists to satisfy our needs. And once it does that, the products that implement those technologies are not judged by the technology anymore. Technology is not a figure of merit anymore. Instead, it is something else - assuming the product still has figures of merit attached to it and is not just a fashion accessory.

Reply to
Aleksandar Kuktin

60? I was on my 3rd computer before I had anything so grandiose.

My first computer had dual 140KB floppies ... later, it gained an external 10MB SCSI hard drive. My second computer came with a 20MB drive that eventually was replaced by a 40.

George

Reply to
George Neuner

I said "PC", not "computer"! :> The "XT" was a joke (aside from "mass storage"). As was the "AT". I told myself I wouldn't buy a "personal computer" until it supported virtual memory (68010, etc.) So, the 386 was the first practical option for me.

My first (several) computers were all hand-built -- wire wrapped on perf-board (though I managed to snarf an Augat panel for one of them! Made it much easier to wire DRAM and know there is a "decent" power/ground available!)

First machine was 4K ROM and 256 bytes of RAM -- with a hex keypad for "input" and an LED display for "output". "Mass storage" was a sheet of paper and a pencil! :>

From there, I moved up in (what *I* thought were) big multiples: 16K ROM, 1KB RAM; then 4K (static) RAM; and eventually the transition to DRAM -- a whole 16KB! Followed by 64KB and eventually 512KB.

The 512KB machine was the first that "mass storage" was practical. I had a pair of 1.4MB 8" floppies (soft-sectored) that had a solenoid lock which allowed you to prevent the door from being opened. So, I went from using the RAM as "multiple execution spaces" to "memory file system" to "disk cache" -- with the feature that I could prevent the user from ejecting the media until the cache had been written back to it.

In PC-land, I went from 60MB to a pair of 300MB (2x$600) to *ten* 4GB (10x$1000) to... (I haven't sat down to figure out how many TB of magnetic media I have presently)
Reply to
Don Y

On this side of the pond, flagrant consumerism renders most things "ancient" in 1-3 years. And, "unsupported" in virtually the same amount of time.

Sure, I can run NetBSD on my Compaq Portable 386 -- 25 years old! But, call the manufacturer (no, wait... Compaq doesn't exist as a corporation any more) and ask for *support* or a spare part and you'll be lucky if they even have *records* that such a machine ever existed! Businesses and industries that are stuck with older kit are forced to purchase equipment in the "used" market just to stay in business (new spares are unavailable!).

You're only thinking of telecom products.

Look at TV's, computers, printers, monitors, *cars*, cameras, home appliances, etc. None of these are "subsidized" products. Yet, folks have no problem replacing them instead of living *with* them (assuming they continued to operate properly).

In the past month, I've watched hundreds (at least 400 that I know of) fully operational, Pentium 4+ class machines systematically torn down into their component (subassembly) parts and recycled as "raw materials". Motherboards (greatest value per pound), DIMMs, disk drives, power supplies, sheet metal parts -- and plastic (which has NO value). I.e., a computer is "worth" about $7-10 in recycle value (as the next guy down the line has to invest still more labor/energy to extract *his* profit -- before passing it down to the next guy).

In that same time, I imagine an equivalent number of machines were scrapped without even being *tested*! Try *selling* them for $20 and they'll be caked in dust -- before you eventually scrap them as unsaleable!

*All* machines, nowadays, have network interfaces. People differentiate based on disk size, memory, cosmetics, processor speed, weight (for laptops), etc.

Again, that ignores the non-telecom market. You buy a PC/laptop and there's no "ongoing financial relationship" between you and the vendor. He has to make *all* his profits from that transaction.

Of course, the *software* vendors can hope to keep their hooks in you. But, they can't *force* this on you in the same way that a telecom provider can shut off your service if you stop paying!

I suspect your refrigerator doesn't telecommunicate (yet). But, I imagine the worth of that device is probably far greater than that of your *phone*! (unless you like eating canned goods and drinking room temperature beverages -- assuming they don't spoil!).

What differentiates a $400 refrigerator from a $2000 one or a $5000 one? Energy efficiency? (you can leave the door open 24/7 on the $400 unit and still spend less overall than that $5000 unit!) "Ego"? Is a $5000 fridge going to last 10 times as long as a $400 one?

Cars have been around for 100+ years. Yet people still "turn over" their vehicles far more often than necessary. Both of our vehicles have ~65K miles on them. One is 10 years old, the other 25+. The average "life" of a car (here) is 13 years and ~140K miles. Given the absence of heavy precipitation (rain/snow) and road salt, we should expect (?) another 10 and 25 years out of each of these??

Electric motors are surprisingly efficient! Depending on technology and horsepower, 80 - 95% is easily achieved. What goal is there to invest in better technologies and materials from the standpoint of the "device manufacturer"?

Sure! Each time I do a project, I save *everything* related to that project on magnetic media. Datasheets, manuals, specifications, source code, executables for all the tools I used to create it, correspondence, contracts, etc. Easily many GB per project.

My "technical papers archive" has probably 200G of "paper" in it. My music archive is easily 200GB (and that doesn't include any of my vinyl, yet). Images of all the CD/DVDs for purchased software easily another 300GB. Open source archives another 300+.

These are all guesstimates; what I *do* know is the "devices" archive resides on a 1TB NAS. Manuals for various "things" I've owned over the years; drivers; applications; firmware updates; GPS map updates; etc. My "software" archive sits on a 1.5T drive. Music is either on a 500G or a 250G drive. My "OS" archive on a 250G. The archive of "ClipArt/Images/3D Models" another 250GB. Fonts, sound clips, audio libraries, CAD libraries, etc. I still have tens of thousands of pages of paper documents that have to get scanned, etc.

And, note that I haven't mentioned *any* video!

There is a *huge* advantage to moving from optical media to magnetic media. Try to make a "backup" copy of your DVD/CD archive (on optical media). Try *searching* it for something specific. This was the motivation behind my creation of the "devices" archive -- so I didn't have to manually look through HUNDREDS of individual CD's for each device, wonder if I happen to have found the "most recent" version of that medium, etc. (and, what about on-line information that I may have downloaded -- and burned onto blank CD's?)

I think you will find that optical media produced "at home" has a much lower lifespan than you think! Do you keep yours in an environmentally controlled chamber? Moisture, heat, light? What do you use to provide redundancy -- a second set of media??

12Mb is a common "affordable" speed, here. Do you really need it? I've got to build a bunch of computers (PC's) for a local charity. How much time should I spend waiting for all the MS updates to be downloaded? I think the updates for MSOffice are close to 1GB. And you want me to do that at 125KB/second? 2 hours *just* for that set of updates? Plus a similar amount for the OS itself? And, still more every few days for virus updates, etc.?

What happens when I want to download a new copy of Atmel Studio (1.5GB)? How long do I wait before I can *use* it?

[In my case, I can walk to a neighbor's or the local library if I *really* need a fatter pipe]

We downgraded to a slower 1.5Mb link. It's fine for "email, WWW, etc.". But, I am keenly aware of the bottleneck as I download all these updates! Doing it at 150Kb would be unbearable!

OTOH, there is practically *no* cost to upgrading the fabric *in* the house from 100Mb to 1Gb (I did this a few days ago for free -- the switch was a rescue so only cost me 2x10 minutes in a car).

But needs *change* as you learn what the technology can do for you.

Decades ago, I used (external) SCSI disk drives as a form of SuperSneakerNet. Copy 4G onto a disk, unplug the drive and attach it to the target machine and copy the data *off* the drive. A great way of moving large chunks of data when the only other alternative was a 10Mb network! With 100Mb fabric, you can effectively copy USB2-to-USB2 over the network. With 1000Mb fabric, USB3 peripherals can talk to each other over the wire. I.e., you can truly have a "remote disk" that behaves like a local one.

With 10Mb fabric, you don't consider streaming audio and video on the network -- along with unconstrained "data" traffic. At 100Mb, it's not even an issue.
Reply to
Don Y

Hi Boudewijn,

[Check your mail]

"Business" (and management) imposes *their* values on these decisions. I'm asking *designers* to reply from the perspective of their (designers') perspectives.

E.g., some folks *do* expend extra effort on Design A knowing that (sooner or later) a Design Q is going to come along and they'll be able to leverage their present efforts, there. (I don't think people who design these sorts of devices are fond of solving the same problems over and over again! Otherwise, they'd delight in mowing the lawn, weekly, etc. :> )

The system wouldn't have improved had the technology not been incorporated into it! Embracing a technology also makes it an active part of your enhancement process. Why write a floating point library if you don't expect to leverage that in the future and *improve* it in the future (or, do you think at that future date, your *initial* adoption of a floating point implementation will magically include the enhancements that you would have "discovered" from actual use in the earlier deployment?)

Which? What shortcuts do they choose to implement to "make it practical"? What does that cost me as a "user" (DB designer) that I might not yet have decided I want to sacrifice? (see email) Here, I have a dedicated machine that implements my RDBMS -- and nothing more! I'll figure out where the *real* shortcomings are, later -- once I know how I *want* to use its capabilities (again, see email).

For example, most COTS RDBMS's don't like the idea of "read only" data stores. I.e., store the database -- or portions thereof -- *in* ROM. They expect it to reside on writeable media EVEN IF THE CONTENTS ARE NEVER WRITTEN (after initial creation).

E.g., a "telephone book" -- produced annually (and only updated that infrequently!) would have to reside on writeable media even with the foreknowledge that it WON'T BE CHANGED for 12 months!

But people get caught up in these sorts of things! The company in question had invested a fair bit in some really clever tools when tools of their capabilities were not commercially available! And, was resistant to leave them behind when *better* tools were available OTS. "Let's keep arguing about some issue -- that is no longer pertinent!"

If this was the 70's, one might say: "Gee, I wish I didn't have to deal with 6 character identifiers, memory maps and all this ASM. It biases all our future designs to some particular manufacturer's family of processors -- despite the fact that their current offerings may not address the direction our products are headed today. Why can't I write in some higher level language that allows me to concentrate on what I want to do instead of coaxing the processor to doing things according to *its* rules?"

Today, you might bemoan the fact that network protocol stacks (and the design of the networks themselves) don't inherit the priorities of the tasks that drive each connection through those stacks. I.e., that a high priority task can be locked out of the network by a more responsive LOW priority task.

Or, that address spaces aren't appropriate for the HLLs that now use them.

etc.

Agreed. But, it's not just the definition but the awareness of those definitions. Do developers *know* what the dark corners actually are -- so they can avoid them? Or why they exist? Or whether they can be removed??

Of course! My point is a well-written spec should act *as* a contract. It should nail down all the pertinent details. And, by the same token, tell all parties that "anything left unspecified is DON'T CARE". In reality, much of the unspecified is "I REALLY *DO* CARE!!"

Exactly. Or, when you have folks you aren't even aware *exist* working on interfacing to your product/device/system. Esp if their "performance" can reflect badly on your "product".

This is some of the rationale behind certifying developers for certain platforms: you don't want your "brand" damaged by a bunch of substandard offerings trying to profit from your popularity/ubiquity.

Exactly! Again, see email.

Do you spend the extra MIPS on silly "fluff" (animated icons, fancy wallpaper, etc.) or on stuff that improves the product's "correctness", robustness, reliability, etc.?

I.e., the goal should be for "panic()" to be elided from the system image! *Everything* should be "anticipated".

Reply to
Don Y

Well said, sir.

--
Les Cargill
Reply to
Les Cargill

That was not clear from your OP. Not all designers use their designers' perspective by default and non-designers can also have a designers' perspective.

Right. I guess I was only trying to illustrate that it's not a guaranteed way of improving the system and that it still requires a decision that the risk is worth it.

For example, ENEA's Polyhedra.

Given an ACID-compliant RDBMS with SQL engine, which allows the data store to be on non-volatile media, which sacrifices are unwanted?

Not everybody has that luxury.

Sure, people get caught up in all kinds of psychological pitfalls, including the sunk cost fallacy. Workers and bosses alike. Sure, we should be appropriately critical of all spoons fed to us, but if you want folks to think outside the box or to disregard company policy, monetary costs etc, then it is best to mention that.

You illustrated your original question even more, but didn't provide a straight answer. Are you trying to say that you are wanting to become part of some grand future paradigm shift? Or just curious about trends?

Is it worth the effort to know all the relevant ones? How do you know which ones to know?

Is it worth the effort to ponder about this?

Often times the project team(s) need to start work before the spec is completely fleshed out. Often times people change their minds at various stages about details of varying pertinence. That is why we have tools and methodologies as directly below.

Dunno about that, hardware can fail in pretty mysterious ways. You can expect hard drives to crash, charge cells to flip, but when the system is introduced to a too-hostile environment or is kept beyond its intended operational life, a panic() is perfectly acceptable. Even the existence of a lot of calls to panic() in the code is an indication that lots of things are being checked against, but I agree that the panic() should usually be replaced by something more user-friendly.

--
(Remove the obvious prefix to reply privately.) 
Made with Opera's e-mail program: http://www.opera.com/mail/
Reply to
Boudewijn Dijkstra

Of course! I'm expecting people to "have vision" as to where things *could* go if not constrained by criteria imposed "by others" (The Marketplace, Management, Profit, etc.). E.g., my "gamble" on using an RDBMS in an embedded system -- despite the (high) costs that come with its use. The costs of an RDBMS will only begin to approach the "requirements" of an embedded environment when they are *applied* in that environment. And, developers will only understand the capabilities (and limitations) that their use entails when they get a chance to actually *use* them in "real" systems.

I'll look at it. If it's a "commercial" product, then it won't fit my current needs (I want everything to be open source; my goal isn't to act as someone else's shill :> )

I'm currently using PostgreSQL which has much of what I need/want. But, a high "price tag" (in terms of resource utilization). I recognize that a good deal of this is adjusting my mindset from the "penny-pinching" approach where every resource is questioned and optimized away. So, I am hoping (over time) that the advantages ("value") of this approach overwhelms its "cost" (e.g., like coming to realize the "cost" of an HLL is *usually* outweighed by the benefits its use brings to a project).

I don't want something that just *looks* like the above ("smells like a rose...") shoehorned into someone's idea of what an embedded environment *should* look like (e.g., physical memory should have no bearing on the limits imposed by the RDBMS). I want to be able to trade resources for performance, not "capability".

E.g., with PostgreSQL, I keep venturing further into the capabilities of the RDBMS (custom data types, stored procedures, etc.) instead of just using it to "build/link tables".

That's the point of the question! Shed the constraints you've been asked/forced/selfrestrained to design within. Then, "What could you do *today* to make *tomorrow* more 'productive'?"

"If I had gobs more RAM..." "If I could afford 'structured storage'..." "If I had a language that supported multiprocessing..."

In my case, I decided the RDBMS concept had merit. And, I wasn't going to "prematurely optimize" that component before I explored what it *might* be able to do for me ("my designs") as well as their costs.

E.g., a painful lesson I am currently learning is that it is hard to *explicitly* design DB "applications" and control what gets cached and how/where.

(Think of the variety of different storage media you can encounter in an embedded system -- "ROM", FLASH (NOR vs NAND), BBSRAM, SRAM, DRAM... and that's just semiconductor memory! Each has different cost/power/speed/durability/etc. characteristics. OTOH, the RDBMS world tends to think in terms of "main memory" and "secondary store")

Note my first sentence: "While this may sound like a 'musing', ..."

musing: a calm, lengthy, intent consideration

I.e., *think* about the "frustrations", "if only's", "it's too bad's", etc. that you encounter when designing and what could, possibly (i.e., in *your* opinion) take them out of the equation. Remove the constraints that are imposed (from above, outside and within) on your design and consider what it *could* be "if only..."

I don't want to bias a discussion! The world (environment, application domain, etc.) *I* work in is, I suspect, very different from the world other folks work in.

E.g., I don't have to sit at a desk and "be productive" on a timeclock. If "now" is a "bad time" for me (wrt some design activity), I don't *have* to "make the best of it". I can work on the car, take a nap, do some shopping, watch a movie, etc. And, when I can be "most productive", *then* I can get back to work.

In some indirect way, this "improves" the products that I create by making better use of my capabilities, "attitude", etc.

But, that's not what I'm interested in with this question -- cuz altering *your* "working conditions" isn't something that *I* can exploit in *my* designs! :>

I'm not interested in being a "visionary". Nor particularly interested in which direction the "flock" is headed (as they may be following the wrong shepherd!). Rather, trying to see those things that "complicate", "compromise", etc. projects THAT CAN HAVE A TECHNOLOGICAL SOLUTION.

(Ages ago, I worked with a buddy on a project. I would pump him for ideas as to what I could do with the hardware design to make his *software* life easier. I had many suggestions for specific, custom "features" that could make dramatic improvements in performance for very little money. E.g., something akin to implementing LISP's "cons" in hardware. His answer: "double the CPU clock frequency". In his mind, the project *would* work on the current hardware, budget, etc. But, doubling the clock frequency would mean he didn't have to count clock cycles anymore!)

[smaller APIs]

Obviously, you need to know all that apply! :>

First day of one of my earliest classes, the professor told us to *read* The TTL Databook. Class chuckled thinking it was a joke (sort of like asking someone to read a phone book or a Sears catalog). His point was that you needed to know what components were *available* before you could know *how* to use specific ones. ("Yeah, I know you'd like a fizzlebang but we don't have the capabilities to do full customs, here. And, there aren't enough weeks in the course for you to even *try*!")

People get into ruts and keep using the same "parts" (ideas, etc.) for each problem/solution. And, tend to forget that there are other parts available!

Why do buffer overruns continue to be a source of vulnerabilities in designs? Haven't we learned, by now, that this is a problem? Why are the same mistakes being repeated -- instead of new ways *learned* to avoid them?

Sure! They expose how and why those particular algorithms fail! And, if they are "dark corners", they are the sorts of things that probably will be REALLY HARD to figure out later when you encounter them.

E.g., the misconception that using floats solves all your numerical problems ("What do you mean, 'cancellation'?"). Or, that you can run a signal through a clocked register/FF to synchronize it ("What do you mean, 'metastability'?")
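A minimal sketch of the cancellation case, with arbitrary numbers chosen only to make the effect visible:

#include <stdio.h>

int main(void)
{
    float a = 1.0000001f;
    float b = 1.0000000f;

    /* The decimals written differ by 1e-7, but the nearest representable
     * floats already differ by about 1.19e-7, so the subtraction hands
     * back a result that is ~19% representation error. */
    printf("a - b = %.10g (expected 1e-07)\n", (double)(a - b));

    /* Same trap with large magnitudes: the 0.1 vanishes entirely,
     * because the spacing between floats near 1e8 is 8. */
    float big = 1.0e8f;
    printf("(big + 0.1) - big = %g (expected 0.1)\n",
           (double)((big + 0.1f) - big));
    return 0;
}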

You don't have to *dwell* on them in every design decision. But, you need to be aware of them so that, hopefully, some little daemon running in your subconscious recognizes the potential for one of these issues at some place in your design and alerts you to examine things in greater detail (instead of waiting for that latent bug).

[I am always amused by how readily people *dismiss* bugs that aren't repeatable! "You *saw* it! What do you mean, 'a fluke'? Are you claiming *gremlins* are responsible? And, that it CAN'T HAPPEN AGAIN? Despite the fact that it has already happened??!"]

But that's the fallacy! How can you think you can do *real* work without knowing what your actual *goal* is supposed to be? When you make your change(s), you're hoping that people can accurately predict what needs to be changed that has already been done! Yet, those people couldn't predict what needed to be done in the first place!

Without specs, you make assumptions. Chances are, you don't *codify* those assumptions in a formal document (a "specification after-the-fact"?). Instead, you *hope* -- when you are asked to make some change -- that you can remember all of the assumptions that you made *and* be able to analyze how each of them can affect this new set of design criteria. Chances are, you're going to miss something: "You *expect* bugs in software". Sure! I'd expect lots of medical malpractice if a surgeon didn't know what his goal was when he started cutting! :>

"panic()" is the "catchall for the stuff you didn't think about" (or, the things for which you couldn't "practically" come up with a good solution). Like not testing the return value from malloc(): "Shrug. If we run out of memory, there's nothing I can do so why bother testing for it?"

Do you know what the maximum stack penetration is for each thread in your system? Or, do you just live with a default? And increase it when you encounter a stack problem? How do you *know* that you won't have a problem the day after product release? Do you *know* that you have enough heap? Can "/* CAN'T HAPPEN */" actually *happen*? Or, is that just more of the "you expect software to have bugs" mindset? :>
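One way to actually *answer* the stack question on a small target is watermarking: fill the stack with a known pattern at startup and periodically scan for the deepest word that's been overwritten. A minimal sketch (names and sizes are hypothetical, and the array stands in for a real task stack):

#include <stdint.h>
#include <stdio.h>

#define STACK_WORDS  256
#define FILL_WORD    0xDEADBEEFu

static uint32_t task_stack[STACK_WORDS];   /* would be the task's stack */

static void stack_watermark_init(void)
{
    for (size_t i = 0; i < STACK_WORDS; i++)
        task_stack[i] = FILL_WORD;
}

/* Returns the number of words ever used, assuming the stack grows
 * downward from the top of the array. */
static size_t stack_high_water(void)
{
    size_t untouched = 0;
    while (untouched < STACK_WORDS && task_stack[untouched] == FILL_WORD)
        untouched++;
    return STACK_WORDS - untouched;
}

int main(void)
{
    stack_watermark_init();
    /* ... run the workload, then ask how deep the task ever got ... */
    printf("high water: %zu of %u words\n", stack_high_water(),
           (unsigned)STACK_WORDS);
    return 0;
}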

I guess that's the *beauty* of NOT having specs -- you can always claim that "no one TOLD ME what to do in that case"! :>

Reply to
Don Y

Correct. That is where the "consume mania" is most readily observable so I focused on that.

Credit cards? This has very little to do with embedded computing (or not - that is who the customers are, after all) but easy credit is a very important player in the market these days and - if you want to be holistic and look at the whole picture - you just need to take that into account.

Reading this makes me weep. But then I think to myself - if no one is buying... what else can you do?

AKA, the network interface is not a figure of merit anymore. It used to be.

If you buy a PC on credit, you absolutely have hooks. Not everyone does that, especially now that the Great Debt Repayment is on, but I'm sure at least one IT corp is profitable solely because of buying on credit.

I don't know... I suppose I have to acknowledge what you wrote previously - namely that vendors set the value of a product - *BUT* only for this class of cases. Cases where there is nothing significant differentiating between products and the sale is predominantly made based on showmanship.

In a similar vein, I have a huge glut of FOSS code. However, I would classify this as a "niche use". Actually, no. I would classify it as belonging to a different market than the mass consumer market.

I normally make a catalog of everything on my disks and keep that on the HDD. The DVDs I just put somewhere aside. Honestly, what I would most like is one of those robotic archive devices. :)

Oh right. Binary updates. Sorry, couldn't see that problem from up here. :)

With that I agree.

Honestly, it appears to me that we have two completely diverging policies of utilizing technology. In my case, most of the numbers you mentioned here would be unutilized. The highest network utilization I ever even thought about was in using the X system over the network. And even then I concluded that about 5Mb/s would probably be good (note that I never tried this due to lack of time). Literally the only thing I can think of that can possibly fill up these numbers is Plan 9 from Bell Labs, bashed on by the entire family, and *even then* I am sceptical.

Reply to
Aleksandar Kuktin

