Since when is not having a bug patentable?


Running it on a different compiler wouldn't help; a divide of a signed number should not compile to a logical shift. It could be a compiler bug.

Though I just tried compiling that tiny snippet, with all variables as signed int, on GCC for ARM/Cortex, and it does produce an arithmetic shift.
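For reference, here is the sort of minimal snippet I mean (names are my own, just for illustration):

  /* Both operands are signed int, so a conforming compiler that
     strength-reduces the divide must preserve signed-division
     semantics. */
  int scale(int x)
  {
      return x / 4096;   /* 4096 == 2^12 */
  }

Compile with something like arm-none-eabi-gcc -O2 -S scale.c and look for an ASR (arithmetic shift) rather than a bare LSR in the output.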

-Lasse

Reply to
langwadt


Uninitialized variables are pure programmer screw-up. Even lint will catch that. Other than that, COST ($), learning curve (another form of cost), and programmer or, more likely, management resistance all factor in.

Are you ASSuming that it has not been maintained and improved in all that time?? Name a better FOSS tool.

Interesting.


But of course, it costs to incorporate the fancies. They gotta get their money back somehow.

I heartily disagree. Teach that sloppy code hurts the programmer. Better still, rather than teach some theoretical stuff, show the difference between well-written maintainable code and crazy-quilt mishmash shit code. It is really easy with existing FOSS code examples. The more people that see the difference, the better code we will get.

That is a management and training problem. Let it become a case of use it yourself, or let your boss find the crap first. See what happens then.


And this is news?? Moreover, something over half of them would have been detected if they had used the tools that they did have. Again, a management and training problem.

I remember. I bought Cyrix chips for years afterwards over that.

Me too. But my regression tests accumulate all previous bugs and their fixes and tests. Currently, new bug fixes often break previous bug fixes. That does not get past my regression tests.

But of course. I especially include the daft things that I have done and remembered.

Bloody well agreed. Cyclomatic complexity and big-O() algorithm time cost are two of the very few useful theoretical concepts worth teaching. Modularization with cohesion and coupling are some more.

Reply to
josephkk


Can't the "heads" get a hotel room? Why even try to bring the peons closer?

Reply to
krw


If you are creating the code in the first place, it is crazy not to make it test-harness drivable. Plus, M$Win and, I am pretty sure, both KDE and Gnome are set up in a way that provides test-harness access through the GUI (algorithmically do the [mis]points and [mis]clicks). It is not much fun writing the code for that, but it's gotta get done, so do it early. For non-GUI (CLI) programs, input and output redirection are standard on MVS, VMS, Unix, Linux, and MSWin platforms.
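For example, a trivial CLI filter (a made-up one, just to show the shape) can be driven entirely by redirection:

  #include <stdio.h>

  /* Toy filter: reads integers from stdin, writes their doubles to
     stdout, so the whole program is test-harness drivable. */
  int main(void)
  {
      int n;
      while (scanf("%d", &n) == 1)
          printf("%d\n", 2 * n);
      return 0;
  }

Run it as 'filter < cases.txt > got.txt' and diff got.txt against the expected output; the same harness works on every platform listed above.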

And of course the other things like coupling and cohesion and software state machines.

Reply to
josephkk


I generally write GUIs as a facade-GUI with no actual functionality that drives an engine (a console-mode app) over pipes or sockets.

That way the "engine" part is scriptable....
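A minimal sketch of such an engine (the command set is invented for illustration):

  #include <stdio.h>
  #include <string.h>

  /* Console-mode "engine": one command per line on stdin, one reply
     per line on stdout, so a GUI facade or a test script can drive
     it over a pipe or socket. */
  int main(void)
  {
      char line[128];
      double a, b;
      while (fgets(line, sizeof line, stdin)) {
          if (sscanf(line, "add %lf %lf", &a, &b) == 2)
              printf("ok %g\n", a + b);
          else if (strncmp(line, "quit", 4) == 0)
              break;
          else
              printf("err unknown command\n");
          fflush(stdout);   /* keep the pipe responsive */
      }
      return 0;
  }

The GUI just spawns it (popen() or the local equivalent) and writes the same lines a test script would.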

Sometimes that's not all that possible...

Of course of course :) State machines r00l.

-- Les Cargill

Reply to
Les Cargill


I just wish I knew somebody who does consistently maintain the same level of due diligence. I have tried, and I don't. Every other programmer that I have worked with is the same. I have worked with about 10 closely, and over one hundred enough to see.

?-)

Reply to
josephkk


Too terse by today's standards, hell yes; not so much for where it came from. Ambiguous? Not really; try checking the standard. Most ambiguities (and there weren't many even in the K&R days) have been eliminated by now.

?-)

Reply to
josephkk

Not because of a built-in unwanted (and *wrong*) type conversion, though. Any type conversions would have to be explicit.

Reply to
krw

I saw what you said and I don't believe you.

If you are going to claim a defect of this type you had better say which compiler and full version number is affected so that someone can verify your claim independently and the manufacturers can fix it.

I am no expert on the C language standard, but my understanding is that if all the other operands are 32-bit signed int, the manifest constant will be converted to a *signed* quantity, and strength reduction for special cases like 1/2^n *must* then use an arithmetic shift. See:

formatting link

under "arithmetic operations with integral literal data types."

It should not be. The standard is quite explicit.

Signed arithmetic should stay signed. OTOH if the lvalue destination was unsigned then you could get exactly this sort of mess.
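To illustrate the kind of mess meant here (values and names are my own, assuming 32-bit int):

  int s = -8192;
  unsigned u = 4096;

  int q1 = s / 4096;  /* signed / signed: q1 == -2, as expected */
  int q2 = s / u;     /* s is converted to unsigned first, giving
                         4294959104u / 4096u == 1048574 -- and here
                         a logical shift is perfectly legal */

One unsigned operand silently drags the whole expression into unsigned arithmetic.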

There is a standard for C-compilers. I don't like the language for other reasons but you accuse the compiler of having a fault that I don't believe is present in any reasonable commercial compiler.

If there is a compiler bug then you should submit evidence to the manufacturer and get it fixed. This one seems such a basic error that I find it hard to believe it has escaped everyone's notice. I think it is orders of magnitude more likely that your programmer cocked up and is blaming the compiler for *his* mistake, and that you fell for this "explanation" hook, line and sinker.

I don't believe you. Show us the minimal test code with variable declarations and disassembly with detailed version number of the compiler responsible and I might just consider the possibility of a gross error in a production compiler seriously.

Yeah. Right.

He was working at a distance and hand-coding it into binary. It is actually amazing that the program he wrote contained only one minor bug. The fault was spotted by the guy who did the program setup, after the long division (I think it was) failed to work first time.

--
Regards,
Martin Brown
Reply to
Martin Brown

We agree on so much that I will concentrate only on the disagreements.

The compiler is already doing it internally but not reporting situations where it has a pretty good idea that the generated code is not what the programmer intended. Modern compilers are better now; there were some very tetchy early C-compilers that could crash if fed code that was not syntactically valid. ISTR missing brackets was one trigger.

There was a time when running code through LINT was essential if you were using certain compilers. They didn't stay in business long...

The trouble here is that, for undergraduate-sized projects, anyone with a decent aptitude for the subject can more quickly hack the code out of the solid with absolutely no attempt at good process.

Process really only becomes relevant when projects exceed about 2 man-months of effort (and then gets exponentially more important with size). Very few undergraduate projects are anything like that size.

I generally run static code analysis against anything I am asked to work on. The customer always looks hurt when I turn up with a long list of latent errors in their codebase that also need fixing.

The suits have difficulty in knowing which way is up. They are mostly concerned with avoiding delays and shipping defective code early to maximise their quadratic sales bonus. You should know that already.

They certainly can't see that spending money on better QC tools would simultaneously improve quality and decrease risk. Final testing is always the thing that gets scratched to meet shipment deadlines.

But if they are not doing it right in aerospace, where lives are at risk, then there is little hope of commercial or shrink-wrap software doing it.

Of the major players in software, I have to say that the gaming people have some of the best development process tools and frameworks. People playing games do not take kindly to BSODs and hangups. ISTR some games' copy protection includes suspension of gravity after a short while.

They were also faster and a bit less power-hungry. I put one into my Compaq transportable brick of that era (an operation that took most of a day). Our software required FPU support to run.

The batting average is that an attempt to fix one bug will introduce an unwanted side effect about 50% of the time in the average shop. It is a sad statistic but it is still true and has been for decades. Failure to run regression tests leads to all sorts of difficulties later.

The trick is to look at the code and suss out what might be unguarded.

I also have tools to catch any crash, dump the stack and examine the system in a post mortem debugger (even for a remote crash). Provided that the MAP file and source for their version still exists each in service failure can be investigated and fixed.

CCI actually gives you the minimal spanning set of test vectors needed to ensure that every possible path through the code is executed at least once. I use a combination of CCI, DECISION-POINTS, LOC and DEPTH to identify at the outset code that is at serious risk of containing hidden faults when working on external projects (mostly what I do). (McCabe's CCI penalises CASE statements too heavily.)

I think that they are taught this at least in CompSci but that the student versions of compilers don't offer this functionality. The next generation should be learning how to use these tools from the outset.

--
Regards,
Martin Brown
Reply to
Martin Brown

C certainly provides opportunities for signed/unsigned integer gotchas. But the situation that John claims has failed is not one of them.

Iff the compiler generated the code as claimed then it is defective. You cannot blame the language for that. My money is still on user error.

ISTR IBM's extremely concise, cryptic, quirky and very powerful APL language had its fair share of unexpected type-conversion gotchas too. Conway's Life program in APL is *very* short indeed.

formatting link

And the most strongly typed languages of all distinguish when doing the type conversion between numerical VALUE and BITPATTERN of the object.

So that, in shorthand and longhand form, with i, j INTEGER32 and r REAL32:

i = INTEGER(r) = VAL(INTEGER, r)

Integer approximation of r under well-defined rounding rules into i (or runtime overflow). Usually this is what you want.

j = CAST(INTEGER, r)

Stores the binary representation of the REAL32 r in the INTEGER (no possibility of runtime error), and it fails at compile time if the two objects are different sizes.
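In C terms the distinction looks roughly like this (a sketch only; C has no checked CAST, so the size check is on you):

  #include <string.h>

  float r = -1.5f;
  int   i, j;

  i = (int)r;                /* VALUE conversion: truncates, i == -1 */
  memcpy(&j, &r, sizeof j);  /* BITPATTERN copy: j now holds the
                                IEEE-754 encoding of -1.5f, i.e.
                                0xBFC00000; nothing checks that the
                                sizes match */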

Such languages also have runtime bounds checking built in for the debugging phase which can be turned off in the production code.

--
Regards,
Martin Brown
Reply to
Martin Brown

Of course GCC for ARM is defective.


I blame the language. There should be an explicit, unambiguous signed right-shift operator, like PowerBasic has. C was designed to be maximally terse.

Reply to
John Larkin

Wrong. We looked it up, and the result of this particular operation is "compiler dependent." The compiler is by definition correct, because there is no standard for what it's supposed to do.

Reply to
John Larkin

Apparently this is wrong, or perhaps you're calling John a liar?

ADA certainly wouldn't have made that mistake possible.

Since typing isn't something APL inherently does (the user normally forms the bits), it shouldn't surprise anyone that it changes internal data representations. OTOH, I don't think I'd use APL as an embedded language but perhaps you would.

So what?

Um, THAT'S THE POINT!!! C is dangerous.

Reply to
krw

References please. You have next to no credibility on this claim.

The code you wrote called for a division of two signed quantities. If the compiler takes it upon itself to generate a fast division using shifts it is incumbent upon it to honour the type(s) of the operands.

I still think this was user error.

--
Regards,
Martin Brown
Reply to
Martin Brown

More fool you then for using defective tools.

There is a right shift operator >> in C and it works.

Moreover it works on both signed and unsigned quantities as long as you don't try to shift by a number of bits that is negative or greater than the operand size. In that case results really *are* undefined.

There *is* a catch lurking here in that true signed integer division always rounds towards zero, whereas arithmetic shift always rounds towards minus infinity, so it is not identical to division by 2^N for signed integers when the numerator is negative.

However, this quirk of the arithmetic shift may well be protecting you from what was written. If it actually did the division as written then you would correctly get result 0 for numerators -4095 through +4095. From the context here you really do want ( ) >> 12.
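A concrete illustration of that rounding difference (example values my own):

  int x = -4095;

  int a = x / 4096;   /* true division: rounds toward zero, a == 0 */
  int b = x >> 12;    /* arithmetic shift: rounds toward minus
                         infinity, b == -1 (assuming the
                         implementation shifts signed values
                         arithmetically at all) */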

The problem here, if there is one at all, is that the compiler used an unsigned logical shift for division on a signed quantity. It has no right to do this, since the manifest constant it is replacing should have been promoted to signed 32-bit int, as detailed in the IBM summary page (which is the only free reference to ANSI C I can spot online).

--
Regards,
Martin Brown
Reply to
Martin Brown


I would like to see where it says that

It is correct that a right shift of a signed number is implementation-defined, but you had a divide.

A divide must obviously produce the right result for both signed and unsigned numbers, otherwise nothing would work, and I'm sure the standard says so.

So if the compiler decides to use a shift to implement a divide, it must use an arithmetic shift.

I tried it on CodeSourcery's GCC compiler for ARM and it compiles to something with an ASR.

-Lasse

Reply to
langwadt

I think he is ignorant and arrogant in about equal measure. Liar is too strong. Misinformed would be closer to the mark.

Actually it would, since the problem here boils down to a code generator fault which, iff we take his claims at face value, amounts to replacing a signed division by the manifest constant 4096 (= 2^12) with a logical shift right by 12 bits, which is *only* correct for positive argument values.

Note that even if it had generated the more nearly right arithmetic shift, that has different rounding behaviour to the original source code (unless an additional test for negative and an add is used).
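For the record, the usual branch-free fixup a correct optimiser emits looks something like this (a sketch for 32-bit int; it relies on the target doing arithmetic shifts on signed values, which a code generator may assume for its own machine):

  /* Signed division by 4096 == 2^12 with C's round-toward-zero
     semantics: add 4095 to negative numerators before shifting. */
  int div4096(int x)
  {
      int bias = (x >> 31) & 4095;  /* 4095 if x < 0, else 0 */
      return (x + bias) >> 12;
  }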

Either way this is a code generator error during optimisation.

I agree that C is dangerous, but in this case, according to the info I have to hand, the coding was fine; it is the optimiser that is flawed.

It is extremely important that we can trust code optimisers not to wreck arithmetic expressions with invalid optimisations. Most compilers are exhaustively tested for these sorts of flaws before release.

I am not defending C because I like it as a language. I don't.

I am defending it from unwarranted attacks out of ignorance.

--
Regards,
Martin Brown
Reply to
Martin Brown

He's given you the evidence, yet you still say that what he's experiencing is wrong. No, you *are* calling him a liar.

No, he's already shown that this is UNDEFINED C behavior. Such a thing would NEVER happen in a strongly typed language - even if you wanted it to.

"even if it had"

So you admit that C is broken.

No, John has said that he found that this is undefined C behavior. Do you think such a disaster-in-waiting is "undefined ADA behavior"?

The LANGUAGE is flawed.

Your faith is *obviously* misplaced.

It's *NOT* unwarranted when such behavior is UNDEFINED. Hidden type conversions are just *wrong*.

Reply to
krw


I haven't seen any evidence. John said it was so, and several said it cannot be so unless there's a bug in the compiler or something else was missed.

Evidence would be something like the definitions of the variables, the compiler version, and the asm produced.

And when did pointing out that something is wrong become the same as calling someone a liar?


Dividing a signed type in C is not undefined; the direction of truncation may be implementation-defined, but not the division itself.

Right-shifting a signed number is implementation-defined, but if the compiler wants to be smart and use a shift instead of a divide, it needs to produce the same result as the divide.
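Which is easy enough to check exhaustively over a decent range, e.g.:

  #include <stdio.h>

  /* Reference divide-by-4096 that truncates toward zero (the C99
     rule) using only unsigned (logical) shifts, to cross-check
     whatever code the compiler generates for x / 4096. */
  static int ref_div4096(int x)
  {
      unsigned ux = x < 0 ? 0u - (unsigned)x : (unsigned)x;
      return x < 0 ? -(int)(ux >> 12) : (int)(ux >> 12);
  }

  int main(void)
  {
      int x;
      for (x = -1000000; x <= 1000000; x++)
          if (x / 4096 != ref_div4096(x)) {
              printf("mismatch at %d\n", x);
              return 1;
          }
      printf("ok\n");
      return 0;
  }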

-Lasse

Reply to
langwadt
