Re: Overview Of New Intel Core i7 (Nehalem) Processor

Correct, but that wasn't my point.

My point was that specifying the execution order is counter-productive. Modern CPUs improve performance by ignoring the order specified by the machine code. But they are limited in the extent to which they can do this. Side effects must occur in order because they *might* need to occur in order (they might not, but the CPU doesn't know).

If you can design the language so that sequencing and side effects are only used when they're actually necessary, you give the system more freedom to choose the most efficient approach.
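
A rough C sketch of the idea for the imperative-language case (the function names and the use of GCC's "const" attribute are just for illustration, not anything anyone posted here):

int lookup_opaque(int key);                      /* may have side effects */
int lookup_pure(int key) __attribute__((const)); /* promised side-effect free (GCC attribute) */

int demo(int a, int b)
{
    int x = 0, y = 0;

    /* Statements are sequenced: the compiler must keep these three calls in
     * this order and cannot drop the repeated call, because each one *might*
     * have side effects. */
    x += lookup_opaque(a);
    x += lookup_opaque(b);
    x += lookup_opaque(a);

    /* With the "const" promise the compiler is free to reorder these and to
     * reuse the result of the first call for the third. */
    y += lookup_pure(a);
    y += lookup_pure(b);
    y += lookup_pure(a);

    return x + y;
}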

Reply to
Nobody

You mean these?

formatting link

Look up:

Synchronization functions:
* __eieio, __iospace_eieio
* __isync, __iospace_sync
* __lwsync, __iospace_lwsync
* __sync

...and that's the BE PPU API. Of course I was talking about the underlying PowerPC, which has the equivalent machine instructions.
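
For anyone following along, a rough sketch of how those barriers get used around memory-mapped I/O (illustrative only: the register addresses are invented, and I'm assuming the ppu_intrinsics.h header that exposes these intrinsics on the Cell/PowerPC toolchains):

#include <stdint.h>
#include <ppu_intrinsics.h>   /* assumed header exposing __eieio()/__sync() */

#define DATA_REG ((volatile uint32_t *)0xF0000004u)  /* invented MMIO addresses */
#define CTRL_REG ((volatile uint32_t *)0xF0000000u)

void start_transfer(uint32_t payload)
{
    *DATA_REG = payload;  /* the data word must reach the device first...   */
    __eieio();            /* ...so order the two I/O stores explicitly      */
    *CTRL_REG = 1u;       /* kick off the transfer                          */
    __sync();             /* full barrier before any dependent work follows */
}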

How do you do it, AlwaysWrong? Just because you know a couple of TLAs doesn't mean you know jack shit. ...and you don't.

Reply to
krw

Apart from a few fractionation problems in the plasma or deep pits, the physics for laser ablation analysis isn't too bad. Slicing and dicing the resultant large, spatially resolved isotopic datasets is a distinctly non-trivial problem, though. ISTR there is a freeware Java app around, written by one of the universities, that does some of the job. It creates humongous datasets and is slow, although I think that is largely an implementation problem.

Was it an imaging one a la Cameca or a point by point laser ablation?

It isn't using an uninteresting language that matters. It is having a language with the right set of tools for the job in hand. Fortran can be surprisingly good at handling multidimensional arrays of bulk data for instance and there are a lot of scientific libraries to support it.

I think you are barking up the wrong tree. "Nobody" was close to the mark with Haskell (which I don't like). We need a new generation of languages where the precise description of *what* has to be done takes precedence over the *how*. The compiler writers can then get on with turning that specification of the program into code that does the job. And with the right pragmas they can unroll loops and parallelise anything that isn't interdependent or marked as strictly serial/sequential.

Modern CPUs are quite hard to keep busy on all pipelines without data stalls. This will only get harder in the future as memory subsystems already limit throughput on most data intensive tasks.

That tends to suggest you would benefit from development tools that help to isolate common human errors as early as possible. Even if you use nothing else, McCabe's cyclomatic complexity index (CCI) shows you where you should be looking for suspicious behaviour. You can take a pretty good bet that once a routine goes beyond a certain complexity level it has the potential to hide bugs. Picking the right places to inspect legacy code carefully improves productivity. YMMV.
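
For anyone unfamiliar with it, the arithmetic behind McCabe's measure is trivial; a rough C sketch (the example graph figures are made up):

#include <stdio.h>

/* McCabe: for a control-flow graph with E edges, N nodes and P connected
 * components (P = 1 for a single routine), M = E - N + 2P. */
static int cyclomatic(int edges, int nodes, int components)
{
    return edges - nodes + 2 * components;
}

int main(void)
{
    /* e.g. a routine whose CFG has 9 edges and 8 nodes: M = 9 - 8 + 2 = 3,
     * i.e. two independent decisions plus the straight-line path. */
    printf("M = %d\n", cyclomatic(9, 8, 1));
    return 0;
}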

Even in the real world it is reasonable to want to get the job done reliably and quickly with the least effort. The trade with the devil that has been done in shrink-wrap software is a rush to market with hacked and bodged code. I am not defending these practices at all.

I noticed on Sunday that European paper sizes in my new Word 2007 are completely random and bear no relation to the actual dimensions. It only works correctly when the paper size is set to US Letter (close enough to A4 not to notice), but it is disastrous when the output is scaled up to A3. What you see is never what you get. So much for Office 2007 :(

Today they are correct again and if I hadn't taken a screen shot of the failure I would have believed I was hallucinating. Sadly I did not dream it and the nightmare of Office 2007 lives on.

Why anyone would pay full price for it beats me completely.

Regards, Martin Brown

Reply to
Martin Brown

It works like this:

formatting link

Atoms can be ripped by pure pulsed electric fields, or by a combination of fields and fs laser pulses.

Haskell is cryptic, dense, unreadable, literally "code." That's the recipe for errors and maintenance problems. It's an intellectual toy, a racehorse, when we need working dumptrucks.

Wrong, wrong, wrong. I make very few errors, in hardware or software, because I do most things the simplest, safest way and check my work carefully. I am responsible for my code and don't expect some automated checker to find my bugs.

John

Reply to
John Larkin

This kit is more suited to fundamental materials research.

The systems I have worked on are for rather more macroscopic pieces of material like rocks and have been used by some to reverse engineer chips. Ping it with a highly focused laser and analyse the plasma plume with quadrupole ICPMS or magnetic sector ICPMS if you are rich enough.

You are looking at it the wrong way. What we need is an unambiguous specification and statement of the computational problem, where the compiler can find errors of omission and inconsistencies at compile time using proof-like techniques and dataflow analysis. Mathematical logic is the only place where proof of correctness exists. For all I care the notation could be something graphical like Nassi-Shneiderman diagrams - I was once a fan of them and I still think they have some merit. It comes the closest to expressing software as a circuit diagram that I know of. These days they are somewhat out of favour...

formatting link

The big thing in its favour is that the diagrams can be checked by a domain expert without them having to learn arcane programming syntax.

IMHO Z and VDM are closest to the future path for true safety-critical development methodologies. Praxis's high-integrity Ada subset is another way that looks promising for practical applications. So far none of these methods has allowed a company to produce fast bug-free code to compete against the 1 million monkeys coding C/C++ model.

You mean like your impossibly fast-converging, fixed-seed N-R sqrt routine that gets entirely the wrong answer for values that trigger the false solution sqrt(K*2^32+x) for any K. What was that? 12 instructions with one *major* bug in it, returning results that are obviously incompatible with the invariant boundary condition that for input x

Reply to
Martin Brown

If you only speak English, Chinese is cryptic. If you only speak Chinese, English is cryptic.

Or did you simply fall for your own strawman argument:

?

Although the above isn't how anyone would write that function, it isn't actually *that* illegible to someone who uses the language regularly. But, really, if you want to discuss a language's syntax, discuss real code, not pathological strawmen; you can write unintelligible code in *any* language if you really want to.

Obviously, someone who has never used a language will find much of it cryptic, but that says more about the observer than the language itself. Before taking that road, be careful what you wish for: a combination of C's dominance and your logic has resulted in a great many people considering it an error to use anything other than braces for blocks.

Reply to
Nobody

Martin,

I think that everyone agrees that the goal there is admirable. However, the question is always, for any given programmer, will they produce code with a given level of correctness faster with or without such tools?

I suspect that in some cases -- perhaps, say, with John -- the answer might be that he's already found a near-optimal strategy for producing (essentially) bug-free code, and most tools you might hand him could actually reduce his overall productivity.

But if some of these tools plug directly into PowerBasic, I'm sure he'd at least give'em a spin. :-)

As others have mentioned, the interesting thing about software is that there's easily a 5:1 if not 10:1 difference in productivity between the best programmers and the worst. Tools and the processes companies institute around them that are meant to help weaker programmers *sometimes* do so at the expense of slowing down the more proficient guys (just as it's challenging to design GUI-based operating systems that are simultaneously expert-friendly while still being really easy for beginners to use). There's good reason that many companies support skunkworks groups, after all.

That being said, I always turn up compiler warning levels to their highest and have used tools such as the Nu-Mega/Compuware BoundsChecker to good effect. However, while these catch a lot of "dumb" mistakes, they don't really help in designing the fundamental "architecture" of a large piece of software, and this is where -- long-term -- your product tends to sink or swim.

---Joel

Reply to
Joel Koltner

What's the specific input integer that fails?

I did simulate it for accuracy over my known numeric operating range. I know that it doesn't converge for values that can't happen in my system. It hasn't caused any problems.

We have had bugs in a small fraction of our first-shipped embedded products, a couple percent maybe... less than a bug per year. And those few bugs have been mostly minor things. The culture of "all software has bugs" is self-fulfilling. The culture of "we don't allow bugs" can, done seriously, result in the great majority of products being shipped totally free of hardware or software bugs. The people who build jet engines understand this.

Most commercial software is, by my standards, garbage. PADS v5 seems to be bug-free, if sometimes arcane. Small nice apps like Irfanview and Crimson Editor and Appcad and LT Spice seem very solid, so we prefer them to other stuff... not to mention that they are free. The Xilinx stuff, Adobe, Microsoft, most of the packages written by large teams, are usually terrible.

Hmmm... it's looking like the more you pay for software, the more bugs it will have.

That's fine as long as it's a supplement to designing and reviewing the code carefully. No automation and no rational amount of testing can find bugs that result from incorrect understanding of the problem. If the availability of automated checking tools causes the coders to check their work less, they may make things worse. And a language and a compiler that allow bad coding - like wild pointers and memcpys - and that need supplemental checkers seem like a bad idea to me.

John

Reply to
John Larkin

I had nothing to do with C's dominance. I do think that programs that jam a lot of action onto one line, with operations being sequences of characters (as opposed to doing things in smaller steps, in languages with keywords) are inherently harder to decode. It's hard to comment a line that does many different things in a crush of short operators.

Ada is used for most life-critical aerospace apps, for good reasons.

John

Reply to
John Larkin

Quite the understatement.

By track record, billion-dollar software failures are the majority from what I read, and mostly swept under the rug. Do you have comp.risks in your regular reading list?

This micro-property has nothing to do with the macro-property.

True, it has different wear-out mechanisms, like APIs that become deprecated and then obsolete (and a lot like some hardware interfaces that required software interfaces).

Consider the buyers in the two cases.

To really rub the point home, look at the proliferation of HTTP-based download managers (read: data pirates) being created instead of using safe, well-tested, reliable, and restartable FTP, which is included with all current major OSes. What are those webheads thinking?

Guess why open source is getting so popular now.

And there are bazaars that have been operating for over a thousand years.

Reply to
JosephKK

These processors only reorder in very minor ways that do NOT impinge on the effective imperative ordering of what was compiled. Presuming that the hardware is allowed to wholesale reoptimize / reorganize the program at runtime is a gratuitous overgeneralization. Indeed not even the compilers are allowed to do that. The hardware (based on the von Neumann model) is effectively imperative mood only.

Please do not confuse the properties of some high level programming language with the properties of the underlying execution hardware.

Reply to
JosephKK

You have asked this before - are you hoping for a different answer now?

formatting link

ISTR you were boasting about never having solved a quadratic when you originally posted this spurious claim of a far-too-fast-converging N-R sqrt from a fixed seed of A=200. It will fail badly when the first iteration gives a value of the form N*2^16 (N>0).

Algebraically it is easy enough to prove it will happen first when

2^16 + 1 = (y + A^2)/(2A), hence y = (2^17 + 2 - A)*A = (131072 + 2 - 200)*200 = 26174800

Successive iterations bounce between 200 and 2^16+1 forever. All input values above this are at risk of failing to converge to a true root.

You didn't when you posted it. That is a post hoc rationalisation after being caught out. "Futzing about with the numbers" was the phrase you used at the time - this does not inspire me with confidence.

Good for you. But I have a distinct feeling that your software has a low irreducible complexity compared to things like operating systems and major applications.

Problems in jet engine FADECs are not unknown. The UK currently has about 8 Mk3 Chinook helicopters that, due to the third-rate f*ckwits who specified them, are unable to fly safely in cloudy weather. Not good in the UK. They are still grounded.

formatting link

The Mk 2s were not much better. A known fundamental engine error condition probably cost around two dozen of our best antiterrorist security specialists their lives. The military enquiry blamed "pilot error" but no-one else does. When your best test pilots refuse to fly the thing, putting one into service to move irreplaceable personnel on a single flight is utterly insane. When I was in a large company there was a strict limit on the number of senior personnel travelling together.

formatting link

The faster it has been rushed to market the more bugs you will have.

If the original specification of the problem is broken or inconsistent then all bets are off. A lot of the bigger software failures are in part down to sabotage in the initial requirements analysis.

But if type safety is built into the language, along the lines of a lightweight Ada, and it assumes that if a thing can go wrong it will, then it is possible, using the compiler parse tree, to detect at *compile* time a lot of the situations that could lead to runtime failure. I see no point at all in letting something escape detection at compile time, and I would much prefer to see it caught at the specification stage. That is why I believe we should be heading for languages that specify what to do rather than how to do it.

There are a whole bunch of practitioners out there that apply C casts to argument lists pretty much at random until the thing compiles. Scary...

Regards, Martin Brown

Reply to
Martin Brown

If you switch on all of gcc's optimizations and use the "inline" and "combine" options, gcc will in some cases make sweeping changes to the code. Each step of the optimization is proven to produce the same result as the unmodified code. This means that it does reorganize, but it does so without ever moving away from a step-by-step description of the program.

Reply to
MooseFET

If you want to download a single file, FTP offers no advantages over HTTP, is more complex, has more overhead, and makes it harder to secure both the server and the network. Also, the lack of metadata with FTP can be a problem for non-English locales.

Reply to
Nobody

That's silly. I obviously went to high school and college. How could anybody get a EE degree without ever solving a quadratic? What I said was that I haven't needed to solve one in a long time. Haven't since, either.

Not forever; it only bangs 6 times. I've seen roots that had end tests and hung the machine because they never converged. That's one reason I did a fixed number of iterations.

I just plugged that value, 26174800, into my simulation, seed = 200, and it converged monotonically to 5184 on the 6th iteration.

If you do many iterations, most inputs produce an output that orbits among a few close-spaced values, fine for a digital voltmeter that has a 10 second time constant and that's measuring messy MRI waveforms.

So far I can't break it. I don't understand the difference in observations, so I'll play with it when I get some time.

The sim works in long integers and seems to be an exact numerical equivalent to the assembly code.
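
Something along these lines, if anyone wants to play with it - this is only a reconstruction in C, assuming plain 32-bit unsigned arithmetic, a fixed seed of 200 and six iterations; the shipped assembly may truncate differently:

#include <stdio.h>
#include <stdint.h>

static uint32_t nr_sqrt6(uint32_t y)
{
    uint32_t x = 200;             /* fixed seed */
    for (int i = 0; i < 6; i++)   /* fixed iteration count, no end test */
        x = (y / x + x) / 2;      /* Newton step with integer division */
    return x;
}

int main(void)
{
    /* The disputed input: the true integer root is 5116 (5116^2 = 26173456). */
    printf("%u\n", nr_sqrt6(26174800u));  /* this reconstruction prints 5184 */
    return 0;
}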

The input range is bounded, both high and low, by the process. Like a lot of true RMS voltmeters, I check the averaged N^2 value before I call the square root, and just force zero if it's below some cutoff. On the high end, the ADC can only make limited numbers. The actual input range is between 10 and about 200,000. I suppose I could test every value in that range... it wouldn't take long.

I did simulate and test it a goodly amount before I used it in the product... maybe more than I had to, but the behavior was interesting. I didn't list its restricted input range when I posted it, but it shouldn't take a genius to figure out that a fixed-seed, 6 iteration Newton root will have limitations on input range and accuracy.

The shipped product has no known bugs. Well, some people are asking it to make some radical waveforms that it's not spec'd for, and complaining about absolute fidelity. That's an analog issue. I think that if we crank up the class-AB idle current, and warm up the room a little more, it will be OK.

I've written a few RTOSs, but they were fairly simple compared to GUI monsters. One nice thing about designing relatively simple products is that you can make them very, very good. That's more appealing to me than making a gigabyte thing that's crap.

This *is* sci.electronics.design, and most of the code we use is smallish embedded stuff. That's the part I want to get right and bug-free.

John

Reply to
John Larkin

Yet again: correct, but that wasn't my point. The re-ordering is limited because it has to be limited, as the processor cannot tell when the ordering matters and when it doesn't.

Most of the time, the ordering doesn't matter, but the CPU (and the compiler) cannot know that. Most of the time, the ordering is there for the sole reason that the language forces you to choose an ordering. You can choose "A then B" or "B then A"; you cannot simply choose "A and B in any order".

I'm not.

But there's a chicken-and-egg problem (or even a vicious circle) here, in that hardware is optimised for current languages and languages are selected (in part) for their optimality on current hardware.

Reply to
Nobody

Which is fine so far as it goes. Unfortunately, it can't do anything which is visible beyond the translation unit, so it can't reorder calls to external functions relative to each other or relative to load/store of (non-"static") global variables.
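
A trivial C example of what I mean (the names are invented, and the exact behaviour of course depends on the compiler):

extern void ext_log(void);  /* defined in another translation unit        */
int counter;                /* non-static global, visible to other units  */

void update(void)
{
    counter = counter + 1;  /* this store cannot be sunk past the call... */
    ext_log();              /* ...because ext_log() might read or write
                               'counter', and the compiler cannot tell.   */
}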

Reply to
Nobody

FTP is restartable, which is very convenient. OTOH it has some firewall problems due to using two ports, and (more seriously) it relies on TCP/IP to provide reliable data transfers of arbitrary length.

TCP/IP uses a 16-bit checksum, which was really amazingly reliable in the 1980s but isn't adequate for transferring gigabyte files over unreliable networks. After all, you wouldn't want a corrupted pixel in the latest pirated movie extravaganza, now would you?
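
For the curious, the checksum in question is just RFC 1071's 16-bit ones'-complement sum; a rough C sketch of it (not lifted from any particular stack):

#include <stddef.h>
#include <stdint.h>

uint16_t inet_checksum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;

    while (len > 1) {                            /* add successive 16-bit words */
        sum += ((uint32_t)data[0] << 8) | data[1];
        data += 2;
        len  -= 2;
    }
    if (len)                                     /* pad a trailing odd byte */
        sum += (uint32_t)data[0] << 8;

    while (sum >> 16)                            /* fold carries back in */
        sum = (sum & 0xFFFF) + (sum >> 16);

    /* Only 65536 possible values, hence the weak guarantee for big transfers. */
    return (uint16_t)~sum;
}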

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal
ElectroOptical Innovations
55 Orchard Rd
Briarcliff Manor NY 10510
845-480-2058
hobbs at electrooptical dot net
http://electrooptical.net
Reply to
Phil Hobbs

Unless you try loading it into the debugger.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal
ElectroOptical Innovations
55 Orchard Rd
Briarcliff Manor NY 10510
845-480-2058
hobbs at electrooptical dot net
http://electrooptical.net
Reply to
Phil Hobbs

For the sake of anyone who doesn't already know this (and for pedantry), I'll point out that the compression used for video files means that any corruption would tend to corrupt multiple pixels in multiple frames.

It's worse for many general-purpose lossless compression algorithms, as a single corrupted byte can render all subsequent data unusable.

OTOH, most digital communication technologies have their own error correction at the link layer, so it's quite rare to see corrupted packets at the IP or TCP layers.

Reply to
Nobody
