cool article, interesting quote

From that article:

==============================

An interface class in C++ is a class that has only pure virtual member functions. An interface class defines the member function signatures of the interface methods. A class implements that interface by inheriting from the interface class and then implementing each of the pure virtual member functions. For example:

// CallSwitch.h
class CallSwitch {
public:
    virtual void Connect(User* a, User* b) = 0;
};

// CircuitSwitchImplementation.h
class CircuitSwitchImplementation : public CallSwitch {
public:
    virtual void Connect(User* a, User* b); // implemented in the .cpp file
};

==============================

No, thanks.

But the empirical evidence is overwhelming: Microsoft, Oracle, SAP, Adobe, Norton. Can *anyone* write big software systems that aren't a mess? If decoupling doesn't work on tiny systems, and it doesn't work on big systems, does it for some reason work in the middle?

Modularity is necessary to manage big systems. If it's used to hide from reality, it becomes "abstraction." Most of the articles praising OO concepts and abstraction point out its chief benefit to the programmer: you don't have to think about, know about, or bother your pretty head trying to understand all that nasty hardware.

John

Reply to
John Larkin

Yeah - me ;).

Decoupling *does* work. It's a *lack* of decoupling that causes trouble i.e. unexpected side-effects.
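To make that concrete, here is a minimal sketch (hypothetical names, not from any real project): the first routine is coupled to hidden global state, so its result depends on whoever touched that state last; the second declares everything it depends on in its signature.

#include <cstdio>

// Coupled: communicates through hidden shared state.
static int g_gain = 1;                  // tweaked from several places
void set_gain(int g)      { g_gain = g; }          // side-effect felt by later calls
int  scale_coupled(int x) { return x * g_gain; }   // result depends on history

// Decoupled: everything the function needs is in its signature.
int scale_decoupled(int x, int gain) { return x * gain; }

int main() {
    set_gain(3);
    std::printf("%d\n", scale_coupled(5));       // 15 - unless someone else
                                                 // changed g_gain meanwhile
    std::printf("%d\n", scale_decoupled(5, 3));  // 15, no hidden dependencies
}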

There's nothing too much wrong with OO or abstraction (other than the hype - it's just good modularity under a fancy/fashionable name). And yes, one always has to understand all that nasty hardware, and everything else - just not all at the same time. That's the point.

Steve

formatting link

Reply to
Steve at fivetrees

None so blind as those that will not see.

--

Reply to
nospam

One time many years ago, we were looking for vendors to build several of our instruments for internal use in a large semiconductor fab.

A colleague of mine produced a very amusing chart by plotting the bid prices vs the number of employees at the vendor. We had 5 bids. The price went as the square root of the people, over a factor of 3, with an R^2 of about 0.8.

Cheers,

Phil Hobbs

Reply to
Phil Hobbs

Gosh, you wouldn't like my code. It's maximally flat; everything is global; nothing's a subroutine if it would only be called once; inline routines run for tens of pages, ripping straight through; lots of those unmentionable g*t* things; more comments than code; paged to be printed on fanfold paper.

Ok, the context: I own a small business, with about 5 engineers and 20 employees. Survival depends directly on engineering productivity: we need products we can sell. Too often, I'll see an engineer doing something that's cool, cutting-edge, challenging and fun, that has *nothing* to do with getting new products into production.

"We should be using the latest Coldfire CPU and flash instead of 68332's and eproms" becomes a negative-revenue project with a life of its own. For no rational reason, but with lots of unintended consequences.

"We should start using xxxx operating system/language/methodology." Why? "well, because we might need it someday." Not today.

"We should roll the artwork on this board to use that new clock driver chip." Have you lost your mind? Spend two months and $20K to save twenty cents per board? For something that works fine already?

So, engineers and especially programmers are constantly distracted by pretty, shiny toys and less attracted by the relative scut work of designing working, reliable, manufacturable, and *finished sellable* products.

You seem to have your own business, so you have a strong incentive to watch the bottom line (ie, paying the bills) and drive towards the goal of getting stuff built and invoiced for. I have employees for whom that sort of motivation is more abstract, and who sometimes make bad judgements as to what's a good investment. So I tend to be skeptical of fancy, indirect, trendy ways to do what, with some reflection, often turns out to be a surprisingly simple problem.

Lots of things - logic, algorithms, circuits - turn out to be simple if you think about them a while. But too many engineers and programmers conceive byzantine 10x-overkill architectures to solve a given problem, and then want to apply massive, fashionable, resume-enhancing tool-sets to the solution. I guess I'm sensitized to that.

John

Reply to
John Larkin

This works well in instrument designs, where you have your arms around all the technology involved, and especially when your design is single-threaded (apart from interrupt handlers). I do things the same way there, though because I don't do enough embedded work to be really expert, I usually use C.

There are lots of cases where you just can't do that, and it's getting dramatically worse with the change in the way computers are designed. In 10 years or so, you're going to see single chips with > 100 processor cores, running at ~6 GHz each, with seriously nonuniform memory access. At that point all the inter-process timings are stochastic, and there just isn't a flat structure that can work. (Symmetric multiprocessors are duck soup to program, but for an N-way SMP the cache consistency bandwidth scales at least as N**2, and there's a limit to how long you can play that game.)

I have some of those problems already in programming clusters, but it's going to get a lot worse. Fixing that problem is one of the reasons for the on-chip optics work we're doing.

Cheers,

Phil Hobbs

Reply to
Phil Hobbs

Hey, all I do is design and build and sell very-high-margin aerospace electronics. Lots of it. What the hell does inheriting classes of pure virtual member functions have to do with that?

John

Reply to
John Larkin

14k lines in a single file is perhaps a bit too many, I tend to keep large source files between 1 and 3k lines. But then I do use a linker :-).

If "language" means "single source file" vs. the rest of the options, I guess you will have to. But it by no means has to be C. My tcp/ip subsystem (I have yet to understand why people refer to this as a "stack"...) is written in VPA; about 1.5 MB of source text (in about 150 files), DNS, FTP, SMTP included, and the PPC code size is somewhat below 200 kilobytes.... That's if it connects via ppp; the ethernet takes another 30 kilobytes of code (and more - configurable to much more - buffer space, obviously). I wonder how these figures compare to other, similar things written in C.

I also wonder what CPU you used in your older CAMAC boards - just curious about your background. I have not been doing CAMAC, so there is no direct competition here :-). Nor do I have TACs etc., and you seem not to be making MCAs ... :-).

Dimiter

------------------------------------------------------
Dimiter Popoff
Transgalactic Instruments

formatting link

------------------------------------------------------

John Larkin wrote:

Reply to
Didi

It's what you're going to be hearing from all those new college graduates / would-be employees of yours, John. :-) To many people all those buzzwords sound more impressive than, "I can build you a 1GHz clock source with 25ps RMS jitter..."

Reply to
Joel Kolstad

It's called a stack not because of any similarity to a CPU stack, but because it's layered. Think of the OSI 7-layer model:

formatting link
formatting link

TCP/IP is usually thought of as 4 layers:

formatting link
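As a toy illustration of the layering (purely hypothetical, not how any real stack is coded): each layer wraps whatever the layer above handed it, so the headers pile up in order on the way down.

#include <iostream>
#include <string>

// Each layer prepends its own framing to the payload from the layer above.
std::string tcp_layer(const std::string& data)    { return "[TCP]" + data; }
std::string ip_layer(const std::string& segment)  { return "[IP]"  + segment; }
std::string link_layer(const std::string& packet) { return "[ETH]" + packet; }

int main() {
    // Application data passes down the stack, gaining a header per layer.
    std::cout << link_layer(ip_layer(tcp_layer("GET /"))) << "\n";
    // prints: [ETH][IP][TCP]GET /
}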

HTH,

Steve

formatting link

Reply to
Steve at fivetrees

Hi, Dimiter,

When I moved to California, and needed a job to support my intended lifestyle, I started working for Standard Engineering in Fremont, a CAMAC house. They later absorbed Transiac, changed their name to DSP Technology, went public, got into automotive testing, and moved to Detroit.

When I started my little company, I had some contacts at the national labs (Los Alamos, LLNL) so I did CAMAC for them until it sort of died.

The last CAMAC module we did was an ethernet crate controller, a bridge to allow people to keep using older CAMAC crates but dump the VAXes and serial crate controllers and stuff. We stuffed a PC/104 CPU and a standard Ethernet card inside the crate controller module, with a simple FPGA interface to the dataway. I had a friend (PhD in thermal hydraulics, now a full-time programmer) write the internals in ANSI C, using a public-domain C-source TCP/IP stack. It runs under ROM-DOS, and the executable is about 75K bytes. He's good.

Some of the really old CAMAC stuff used an MC6803 CPU. Lately we use 68332's, often with serious databashing help from Xilinx FPGAs. It's impressive what you can do with a hundred parallel 50-MHz multipliers and adders.

We do mostly VME and OEM boxes lately - NMR, lasers, ICCD cameras, occasional big-physics projects (Jlabs, SSC, NIF), stuff like that - and some fiberoptics, anything that's hard.

Want a refrigerator magnet?

John

Reply to
John Larkin

Ah, thanks, that explains the name. I tend to think in terms of "things", which may or may not be as flat as layers, so I did not guess the name origin. The way I have done it, applications can ask an object - a tcp/ip_subsystem - to connect them to some port at some host (perhaps doing a DNS lookup); then they get a tcp_connection via which they can transfer. Below that is the ip_link (routing, fragmentation/defragmentation), below it the ppp_link (or ethernet link), then the data_link (UART, or whatever plain DPS device), etc. While I was debugging it, I had two completely independent tcp/ip_subsystems which communicated via ppp through a memory FIFO; I guess keeping things that encapsulated was a great help.
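For what it's worth, that structure translates almost directly into code. A hypothetical C++ rendering of the objects described above (the real system is in VPA, so every name here follows the description but every signature is a guess, and the bodies are stubs):

#include <cstddef>
#include <cstdint>
#include <memory>
#include <string>

// Hypothetical rendering of the layering described above; the real system
// is written in VPA, so the signatures are guesses and the bodies are stubs.

struct data_link {                         // UART or other plain device
    virtual std::size_t send(const std::uint8_t*, std::size_t n) { return n; }  // stub
    virtual std::size_t recv(std::uint8_t*, std::size_t)         { return 0; }  // stub
    virtual ~data_link() = default;
};

struct ppp_link { data_link* dev; };       // framing over the data_link
struct ip_link  { ppp_link*  ppp; };       // routing, (de)fragmentation

struct tcp_connection {                    // what an application transfers over
    std::size_t write(const std::uint8_t*, std::size_t n) { return n; }         // stub
    std::size_t read(std::uint8_t*, std::size_t)          { return 0; }         // stub
};

struct tcp_ip_subsystem {                  // the object applications ask
    ip_link* ip;
    // Resolve the host (DNS lookup if needed) and open a connection to port.
    std::unique_ptr<tcp_connection> connect(const std::string&, std::uint16_t) {
        return std::make_unique<tcp_connection>();                              // stub
    }
};

int main() {
    data_link uart;
    ppp_link  ppp{&uart};
    ip_link   ip{&ppp};
    tcp_ip_subsystem net{&ip};
    auto conn = net.connect("example.com", 80);
    std::uint8_t hello[] = {'h', 'i'};
    conn->write(hello, sizeof hello);
}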

Dimiter

------------------------------------------------------
Dimiter Popoff
Transgalactic Instruments

formatting link

------------------------------------------------------

Steve at fivetrees wrote:

Reply to
Didi

Eek! ;) Wash your mouth out this instant!

I can relate to all that. But there's a danger there of throwing out the baby with the bathwater.

Like you, I've worked with people who like new/shiny tools, and forget that the point is to *use* them productively. Like you, I have developed all kinds of defences against this syndrome.

And yes, I do run my own business. But even before that I was mostly motivated by the challenge of improving my craftsmanship. Before I became a product designer, I was responsible for designing/procuring test equipment. Fairly often I'd go back to our product designers for parametric data, only to find there wasn't any - it was a back-of-an-envelope job. Our production people had to deal with the resultant test failures, or the pots with way too much travel - and it was my job to sort this all out. I figured there had to be a better way. When I moved into R&D, I put this into practice. It saved time, and the production people were happier.

When we moved into firmware, the same thing happened. My colleagues were seat-of-the-pants assembly coders to a man, but I did a *lot* of reading, and quickly found that "structured design" (which was seen as a fad at the time by my colleagues) allowed me to a) deliver working code faster and b) sleep better at night through confidence that the code wouldn't come back to haunt me. I've continued reading and learning throughout my career.

I moved from assembler to C mainly as a way of losing my pseudo-code design systems. I had a brief honeymoon with C++ - I learned it, read about it, used it, and decided I didn't like it. Still don't. Nonetheless, object orientation (in its purest form) is a good idea, a no-brainer. These days I design OO structures in pure C.
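For anyone curious what "OO structures in pure C" usually looks like (a generic sketch of the idiom, not Steve's actual code): the "class" is a struct of function pointers, each object carries its own state, and the "this" pointer is passed explicitly.

#include <stdio.h>

/* The "interface": a table of function pointers plus per-object state. */
typedef struct uart uart_t;
struct uart {
    void (*putc)(uart_t *self, char c);   /* "virtual" method            */
    int   base_addr;                      /* per-object (instance) state */
};

/* One concrete "implementation" of the interface. */
static void uart16550_putc(uart_t *self, char c)
{
    /* stand-in for a register write at self->base_addr */
    printf("UART@%04x <- %c\n", self->base_addr, c);
}

int main(void)
{
    uart_t console = { uart16550_putc, 0x3F8 };   /* "construct" the object  */
    console.putc(&console, 'A');                  /* explicit "this" pointer */
    return 0;
}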

My motivation in all this is not academic or new/shiny tools: it's productivity and maintainability. Some of my projects have been on-going for nearly 2 decades now; I frequently get asked to add a new feature to code I last looked at some years before. I dislike having to start again, so I've developed defensive means of being productive again from day 1. And I re-use code all the time.

Yes, there are distractions and fashions etc... but there are also some good thoughts. Stripping out all the hype, OO is nothing more than a means of formalising structured design to promote good modularity. This helps with productivity, maintainability and *reliability* - I just don't believe in bugs.

I've rambled a bit - apologies - but I wanted to show that we probably have more in common than not. And that my methodologies are not so weird - they're just perhaps a bit more developed - but through necessity, not through fads and fashions.

Steve

formatting link

Reply to
Steve at fivetrees

Then it should be called a "pile". ;-)

Cheers! Rich

Reply to
Rich Grise

Maybe we should change the way we look at processors. Why not have 256 or 1024 CPUs, with a CPU per process/interrupt/thread? No context switching, no real interrupts as such. Some of the CPUs could be lightweights, low-power blocks to run slow, dumb stuff, and some could be real number-crunchers.

The OS resides on a "manager" CPU, and assigns tasks to resource CPUs. It never executes application code, has no drivers except maybe a console interface, never gets exposed to nasty virus-laden JPEG files, and, once booted, its memory space is not accessible by any other process, even DMA.
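No silicon works quite that way today, but the shape of the idea is easy to sketch in software - a hypothetical analogy using one std::thread per task, where the "manager" only assigns work and never executes any of it itself:

#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

// Software analogy of "manager CPU + resource CPUs": the manager hands each
// task to its own worker and then only waits; each worker runs one task
// start to finish, so there is no context switching inside a worker.
int main() {
    std::vector<std::function<void()>> tasks = {
        []{ std::puts("number-crunching task on a fast core"); },
        []{ std::puts("slow housekeeping task on a lightweight core"); },
    };

    std::vector<std::thread> workers;        // one dedicated "CPU" per task
    for (auto& task : tasks)
        workers.emplace_back(task);          // manager assigns, never executes

    for (auto& w : workers)
        w.join();                            // manager just waits for results
}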

And isn't it time we got rid of virtual memory?

John

Reply to
John Larkin

I suppose there are times when that works well. A price to pay is communication issues. However, I don't see the problem with context switching, either. So if there is enough performance in a CPU, you can achieve pretty close to the same thing without lots of CPUs and associated communication paths.

It takes me only a few minutes to write a cooperative task switch in assembly on most processors. An hour to test it, thoroughly. In less than a day, I can (from scratch) set up and support process creation for any valid C function (with parameter passing to start it), process destruction, precise sleeping with delta queues, safe message passing, and per-process exception handling (should that be desired, at all.) In C or assembly. (If you need pre-emption, that takes more resource and time and adds some risk regarding pre-existing libraries.)
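The "precise sleeping with delta queues" part is compact enough to show. A generic sketch of the technique (not Jon's code): sleeping tasks are kept in wake-time order, but each entry stores only the ticks *relative to the entry ahead of it*, so the timer tick only ever touches the head of the list.

#include <cstdio>
#include <list>
#include <string>

// Generic delta-queue sketch (illustrative, not anyone's RTOS code).
struct Sleeper { std::string task; unsigned delta; };   // delta = ticks after the entry in front
static std::list<Sleeper> sleep_q;

void sleep_for(const std::string& task, unsigned ticks) {
    auto it = sleep_q.begin();
    for (; it != sleep_q.end() && ticks >= it->delta; ++it)
        ticks -= it->delta;              // walk past tasks that wake earlier
    if (it != sleep_q.end())
        it->delta -= ticks;              // the next entry now waits relative to us
    sleep_q.insert(it, {task, ticks});
}

void tick() {                            // called from the periodic timer interrupt
    if (sleep_q.empty()) return;
    if (sleep_q.front().delta) --sleep_q.front().delta;   // only the head is touched
    while (!sleep_q.empty() && sleep_q.front().delta == 0) {
        std::printf("wake %s\n", sleep_q.front().task.c_str());
        sleep_q.pop_front();             // task becomes runnable again
    }
}

int main() {
    sleep_for("lcd_refresh", 3);
    sleep_for("adc_poll", 1);
    sleep_for("heartbeat", 3);
    for (int t = 0; t < 4; ++t) tick();  // adc_poll wakes after 1 tick, the others after 3
}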

But context switching isn't really much of a barrier to overcome.

Transputer comes to mind, the concept of RPC (remote procedure calls), etc. Some folks were working on an operating system that would automatically start out a full application running on a single CPU, but diffuse the routines dynamically over a network of CPUs in a kind of simulated-annealing process. It starts out CPU-bound on the first CPU, notices that the communication links are empty, decides to use a link to "send" code, and then starts passing parameters to the function calls that way. Eventually, the first processor decides that its CPU load and communications exchanges are in equilibrium -- occasionally accepting code from nearby processors, occasionally sending out code to nearby processors, always passing parameters around, etc. Each nearby processor does about the same thing. Eventually, it all evens out -- never static, sometimes sending out a routine on a random basis, sometimes receiving one.

No idea what happened with the idea.

I like it when running NT-based Windows -- it helps isolate those darned programs from crashing each other. It also helps on such workstations in that each program can be compiled to a "standard" view of the computer system, and in virtualizing resources so that each program can take a simplified view of them.

For instrumentation? Probably not so good.

Jon

Reply to
Jonathan Kirwan

The IBM Cell processor used in advanced game machines has one Power 6 core that controls 8 Synergistic Processors, arranged on a (iirc) 768-bit wide ring bus that clocks at 2.5 GHz (half the processor speed), with 1-cycle latency per step on the ring. It's about 1.5 Tb/s, not counting ECC bits. The SPs are single-instruction-multiple-data (SIMD), like an old Cray. Most machines have lots of userspace threads running at once, which is one reason that multicore designs improve throughput per transistor. It's the bigger jobs that really suffer, the ones you'd really like to run on some hellacious fast uniprocessor, but can't.

You'll see more of that sort of thing--but SIMD machines aren't good for everything, the way SMPs are.

Paging to disc is helpful to prevent big important things from crashing as soon as you run out of physical memory. It slows the machine to a crawl, of course, but that gives me the chance to kill RealPlayer or whatever's hogging all the RAM. You can turn it off if you like, but it's a nice safety feature and it doesn't cost much if you're not using it.

Cheers,

Phil Hobbs

(Off to raft on the Lehigh River with the Boy Scouts)

Reply to
Phil Hobbs

Hmm...I know what you mean about creating good form and being able to sleep better at night as a result. Many of the C++ people thought that the next step was to get rid of the exactness of a structure and replace it with something else (who knows what) where it is "an abstraction". It turns out that this model is appropriate for some paradigms, but *highly inappropriate* for most. So C++ gets a bad rap because expert coders, thinking that the whole pointer-to-abstract-blob mode is how OO programming should be done, subscribe to this model, not realizing that, in doing so, they are developing a really, really bad habit, namely relegating the search for form to the cosmos. Junior programmers hear how good C++ is and try to follow suit, and end up making even more of a mess, as they are not yet able to deal appropriately with memory management.

But I wanted you to know, as someone whose perspective on these things seems to overlap nearly entirely with yours, that the move up to C++ really is worth the trip. If you use the structured-programming aspect of C 90%+ of the time in C++, with member functions, and only resort to the abstract, everything-is-vague-and-I-don't-really-know-what-I'm-doing-yet aspect 10% or less of the time, then it becomes very, very powerful. With it, you can conquer absolutely massive projects where the complexity scales just as it would if you were doing a typical all-digital board design. Also, EEs, IMO, are predisposed to being better programmers than many SEs, because they do not need to be reminded of the importance of good form. With SEs, it's a crap shoot whether they are thinking about good form or not.
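Roughly what that 90/10 split looks like in practice (an illustrative sketch, not the poster's code): concrete classes with member functions and obvious lifetimes do the bulk of the work, and an abstract interface appears only at the one seam that genuinely has to vary.

#include <cstddef>
#include <cstdio>
#include <vector>

// The ~90%: a concrete class - member functions, value semantics, no heap
// juggling, no inheritance.
class RingAverager {
public:
    explicit RingAverager(std::size_t n) : buf_(n, 0.0) {}
    void add(double x) { buf_[i_++ % buf_.size()] = x; }
    double average() const {
        double s = 0;
        for (double v : buf_) s += v;
        return s / buf_.size();
    }
private:
    std::vector<double> buf_;
    std::size_t i_ = 0;
};

// The ~10%: one abstract seam, used only where the implementation really
// must vary (real hardware vs. a test double, say).
struct Sampler     { virtual double read() = 0; virtual ~Sampler() = default; };
struct FakeSampler : Sampler { double read() override { return 2.5; } };

int main() {
    FakeSampler adc;
    RingAverager avg(4);
    for (int k = 0; k < 4; ++k) avg.add(adc.read());
    std::printf("%.2f\n", avg.average());   // 2.50
}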

-Le Chaud Lapin-

Reply to
Le Chaud Lapin

I used to work for an electric forklift manufacturer. Interesting stuff - motor controllers that control upwards of 600 amps.

The DSP controller board and a bunch of the other electronics were fused by a small cylindrical fuse - I forget the exact size but it was one of those common ones that is a bit less than 1/4" in diameter and maybe 1.25 inches long. Oh, and it was rated for 5 amps.

One day I shorted something to something else and the fuse literally disappeared in a blue flash. Being an engineer, my first thought was "I wonder what the current got up to before that blew." :) So I drug out our Tek oscilloscope and current probe and put it on the wire, installed a new fuse, and shorted B+ to B- again. The current probe showed that even a 5A fuse can conduct almost 1000 amps for a few milliseconds.

Reply to
Carl Smith

Intel engineers dream about this each night before going to bed. The problem is that, at least for a general-purpose computer, the threads must always interact. If there is a display that all threads share, they are going to have to talk to the software that controls the display. The inter-CPU buses become bottlenecks. Researchers have tried various >2-dimensional schemes with less than stellar success. Also, if you take a typical PC, like the one I'm using now, and ask, "Is it possible to distribute all the threads so as to essentially eliminate their interdependence?", the answer quickly becomes "no".

Yes, it's very frustrating. How beautiful would it be to see a board with a 16x16 matrix of high-performance CPUs. If you figure out how to do this, you're looking at a 9-digit increase in income, at least.

Can't do that either. Fragmentation won't let us. The protection models require it. Again, there need to be situations where one process shares memory with another.

For some applications, a fully populated 32-bit address space - all 4 GB of it - is still not enough. It should not be long before Dell and other PC vendors ship every PC fully populated.

There was a rumor that, in the mid 1990's, when formatting link was competing as a search engine, it was the fastest around. People kept scratching their heads trying to figure out how they made it so fast until it was revealed that, in addition to state-of-the-art algorithms, they simply kept all of their search data in 4GB of RAM at all times.

The new 64-bit processors are now talking about terabytes with the same regard as we had for megabytes in 1980. Fortunately, Seagate and other vendors should be out with terabyte drives in the next year or so. I'm just waiting to see who is crazy enough to stock a machine with 64 terabytes of RAM.

-Le Chaud Lapin-

Reply to
Le Chaud Lapin
