Cool article, interesting quote.

Might you want to use one of these?

formatting link
formatting link
:-)

Cheers! Rich

Reply to
Rich Grise

Fully understood - my experience has been similar.

I've found that many people confuse C++ with OO. It's not the same thing. C++ is *one* implementation of OO in a C-based paradigm, with loads (IMO) of pathologies. I think it's an actively badly-conceived language.

Yes, I know what you mean. However, I honestly feel I can do a better job of OO in pure C than by using the provided (mis)features of C++.

To echo John Larkin: private definitions in a header file? No thanks. (No wonder C++ people moan about recompilation times.) Late binding? No thanks. (I'd rather let the compiler do the work once, not the runtime every time.) Implicit addiction to the heap via "new"? No thanks. (I rarely use a heap at all, except - if I have to - at start-up; and virtually never in embedded work.) Name mangling? Er, no thanks.

And there's a difference between abstraction and inheritance-taken-to-such-a-ludicrous-extent that one has to have every header file in the entire system open, and a good memory, to have any clue what the code is doing. I've seen projects where the injudicious use of a certain instantiation literally doubled the code space, and/or inserted HUGE amounts of processing between two lines of code - and the programmer was unaware. Not good. [1]

I strive towards *more* clarity, not less. I always figure that if I (or anyone later) can't understand the code, what chance does the poor dumb CPU have? ;)

[1] Yes, I know, the right tools in the right hands ;). But people struggle with C, fer crisakes!

In general, I agree. However I've worked with some sloppy EEs too ;). And some EEs are reluctant to really learn about the craft of s/w engineering, presumably because they feel they've already done all their learning...

Steve

formatting link

Reply to
Steve at fivetrees

It's obvious that you have a lot of experience in C/C++, so I will write my comments in a manner that others who spend their time in the bowels of 8-bit micros can understand.

As mentioned before, I deplore the oh-i-don't-know-let's-just-make-it-a-blob-for-now-and-worry-about-details-later style of programming. I also deplore using the "new" operator to allocate memory on the heap willy-nilly.

However, surely you will agree that the concept of automatic construction and destruction of objects is fundamental. Think how much malloc'ing and free'ing you would have to do without this feature. And containers are critical to creating "structure" - they are what helps you make your "sub-circuits".

The ability to do true assignment of objects is very nice. In C, this simply isn't possible: you cannot say a = b and have a assume the value of b if a and b are objects of the same type that manage their own memory within their structure. In C++ it is quite possible, and even encouraged. I cringe to think what I would do without this. I'd be curious to know what you do in this situation.

-Le Chaud Lapin-

Reply to
Le Chaud Lapin

A heck of a lot more than that at the higher 240VAC, but it isn't really the 'fuse' conducting the current - more like a direct arc between the end caps, with whatever was in between contributing to the ionization. Especially the smaller 20mm ones; the 1.25" ones are better.

Best regards, Spehro Pefhany

--
"it's the network..."                          "The Journey is the reward"
speff@interlog.com             Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog  Info for designers:  http://www.speff.com
Reply to
Spehro Pefhany

I'm rushing right now, so I daren't give this the full answer it deserves. But, briefly: this section of the thread had me checking back through my notes last night and rechecking the definition of "OO" - and for a while thinking that I had it wrong and my approach was more "object-based". But no, I really do mean OO.

However: earlier comments and your question re a = b, and all the C++ action that results, got me focussed on dynamism rather than simply OO. In an embedded project, dynamism is something I actively avoid. (In a desktop application, I don't and often can't.) This is - perhaps - one of the reasons I dislike C++ - it is a very sharp tool for managing (and hiding) dynamism. (In case I just invented that term, I mean the use of dynamic objects - created and destroyed at runtime.)

Checking back through various definitions of OO, there is - perhaps - an implied dynamism there... If so, my approach is mostly to do with a static OO paradigm. Hope that makes sense ;).

Steve

formatting link

Reply to
Steve at fivetrees

Assigning a processor to each device ("interrupt") would really be a nice thing, since you could then write simple busy-loop-style programs constantly polling the HW registers, instead of writing complex interrupt handlers that must save and restore the whole program state at every interrupt, making the code quite complex and thus error-prone.

In fact this is not anything new; look at the IBM mainframes, which contained a lot of I/O processors in the SNA architecture. For instance, the remote terminal concentrators contained much of the intelligence of the block-mode terminals, which only sent the modified fields to the mainframe when you hit the Send button - a predecessor to HTML forms :-).

Anyway, the raw computational power of many IBM mainframes was quite minimal for that era. However, attempts to port these mainframe applications to single-CPU platforms, such as VAXes, ran in many cases into an I/O bottleneck, since the main CPU had to deal with file-system management, indexed-file processing, and file (de)compression.

Since it is currently possible to integrate a huge number of transistors into a chip and the main problem seems to be how to use them effectively, I would welcome the idea of using dedicated trivial processors for specialised tasks, such as I/O.

Paul

Reply to
Paul Keinanen

So you do the macro substitution. ;-) I build libraries of all things that might be useful in the future. Design them well, document and test the hell out of them. Then when I need that function again I don't have to reinvent anything (it may have been fun the first time, not so again).

I did that with a macro on the eprom burner.

Works for me, though coders need only be average if the design was done right from the beginning.

IBM used to have programming techs (many were retrained secretaries). They're the ones who did the actual coding, after the professional programmers wrote the specs.

Have fun.

--
  Keith
Reply to
Keith

Suppress All Runtime Checks - reduces code size by suppressing all automatic run-time checking, including numeric checking. Suppress Numeric Runtime Checks - reduces code size by suppressing two kinds of numeric checks for the entire compilation: division_check and overflow_check.

These are NOT language syntax extensions; they are code-generation options (but they do affect the normal Ada semantics). I assume one would apply these optimizations after extensive testing with the checks turned on, and then do more extensive testing after they are turned off.

Reply to
Marco

We don't try to insulate ourselves from reality with "abstraction". We don't need to construct and destruct objects because we already know what we have. We know the address of each byte of instructions and data.

When your "object" is a 64" CNC mill, assigning "A = B" could be disastrous. We have to know where each component is at every instant of time, and how fast it's moving, and what happens when it runs into the stop, or worse yet, scraps the part.

Abstraction is nice for eye candy, but I have yet to see a way to make it get any real work done.

Hope This Helps! Rich

Reply to
Rich Grise

I had an interesting conversation after I gave my talk at the Embedded Systems Conference. Normally I like to have a numeric library that does fractional arithmetic, but which saturates answers on addition and multiplication -- because in a control system 0.5 + 0.6 = 0.999 may be bad, but 0.5 + 0.6 = -0.9 is much, much worse.

At any rate, this guy came up to me afterward very puzzled about why things would ever overflow if you were doing adequate testing. It turned out in the conversation that all of the software development he'd done had been for DO-178B level A stuff -- this is the level of scrutiny you give systems if a software error will lead to a smoking hole in the ground with 100 dead bodies. Development under this level of certification usually ends up costing 2500 times more than development using good commercial software practice, because of the amount of review and verification you need to do.

I had to explain to him that 'adequate' for a machine tool wasn't nearly as stringent as 'adequate' for a fly-by-wire system. I think we both learned something.

So yes, you could expect that if you were going to use these optimizations the whole verification process would be quite extensive.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Posting from Google?  See http://cfaj.freeshell.org/google/
Reply to
Tim Wescott

Virtual memory has justified itself. It's the thing that allowed unlimited code bloat - that allowed a word processor or an image viewer and their DLLs to hit 100 megabytes or more. Word is slow, buggy, and infuriating. EDIT.COM is a faster and far superior text editor, at 62 kilobytes. (But it ought to be good... Microsoft contracted it out.)

When vm was announced for the s/360, IBM was publicly predicting paging ratios of 200:1. A few years later, a survey showed that the average site was running 1.2:1. Meanwhile, the price of core had dropped dramatically, to under $50,000 per megabyte.

John

Reply to
John Larkin

I met a guy in this ng, a fairly recent ee grad, who complained that most of his college courses were too abstract, cs and digital theory and stuff, and that he took to hanging out with the tech who maintained the labs, so that he could learn some real electronics. I hired him, of course.

John

Reply to
John Larkin

This is done now to a small extent with the MPC555x processors, which have multiple simple RISC processors (eTPUs) running concurrently with the main PowerPC processor, all on one chip. There are only two or three eTPUs, so for now they are replacing specialized I/O hardware, and they work very well. But what if I had 100 or 1000 eTPUs (or ARMs)? I'm not sure how I would use them, because I don't know how to "think" that way, but the concept certainly is interesting. I suppose one option is to use each processor to run a software module; all code is then running almost instantaneously at the same time, and for all practical purposes I'm designing like a hardware engineer.

Reply to
steve

And you can now get an ee degree from the majority of US universities without having to study electromagnetics. This is very Zen: electrical engineering without electricity.

But I guess I shouldn't complain; competition isn't a serious issue in my business.

John

Reply to
John Larkin

I have a little math package, for the 68K, that I use when fractional becomes too much of a pita and I'd like to work in real engineering units everywhere. The format is 32.32: a 32-bit 2's-comp integer plus 32 bits of fraction. That turns out to work for any real-world (i.e., excluding astronomy) system. My 'floating accumulator' is register pair D5 (integer) and D6 (fraction), so fix/float conversions take zero time and limit checks are usually just a compare on D5. Add/sub operations are fast, since no normalizing is needed. Mul is fast by partial products (CPU32 has 32x32==>64 mul) and only divide is a nuisance. I should write a Newton's-method reciprocal one of these days and use that instead.

It all saturates, which avoids embarrassments. 0/0 = 0 and k/0 = sgn(k)*infinity, where infinity is 7fff:ffff.

John

Reply to
John Larkin

And getting equivalent reliability.

John

Reply to
John Larkin

oh-i-don't-know-let's-just-make-it-a-blob-for-now-and-worry-about-details-later

Hi, Rich,

I once wrote a compiler (it ran on a DEC timeshare system) for a 30-foot-long Whitney NC punch press. It would sling 20x6-foot sheets of steel around like looseleaf paper, whacking great holes anywhere it liked, shaking the building at every hit. The machinists watched me pretty closely as I debugged it.

The compiler let the shop guys read fab drawings and create and edit source files in a friendly language, and it spit out the g-codes and stuff on paper tape. The language had some basic variables and some macro-like "pattern" features, and we found that the machinists evolved some very sophisticated programming techniques using this very limited tool set.

After this, if I ever needed a hole drilled or a piece of dirt bike welded, it just got done.

Nobody, but nobody, should be allowed to graduate from high school without basic machining and welding skills.

John

Reply to
John Larkin

I use saturation not to compensate for firmware errors, of which there are none, but to handle normally expected (perhaps rare, but part of the design specification) conditions such as startup, broken sensors, and output saturation.

Best regards, Spehro Pefhany

--
"it's the network..."                          "The Journey is the reward"
speff@interlog.com             Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog  Info for designers:  http://www.speff.com
Reply to
Spehro Pefhany

That sounds workable. I cringe at the use of clock ticks, but otherwise it sounds cool.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Posting from Google?  See http://cfaj.freeshell.org/google/
Reply to
Tim Wescott

oops, infinity is 7fffffff:ffffffff of course.

John

Reply to
John Larkin
