Had an interview

In message , Peter Dickerson writes

It is quite possible that both A and B will equal B

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H


It may look a bit outdated now... I came across a Russian translation, circa 197x, of an earlier English book, somewhere around 1987. My then girlfriend (she was a radiochemist) used to use it as a stand for a teapot, so as not to harm the kitchen table...

The original title is (if my memory serves me correctly, and please take into account that I am back-translating) "Errors and Pitfalls in Fortran Programming". I don't remember who the author is, sorry...

So it was dedicated to the Fortran of that time. This may serve as an excuse for why (2) did not use a C construction. The same goes for "x ^= 3;", since Fortran did not define bitwise ops in a standardized way...

Cheers,

Andrew

Reply to
andrew.nesterov

I don't know why you call it a "solution", since it's obviously incorrect.

That would be a beginner/inexperienced programmer.

A "programmer" would most likely opt for the '?:' operator in this case.

In reality that would be the solution for a _good_ programmer.

--
Best regards,
Andrey Tarasevich
Reply to
Andrey Tarasevich

Better yet, and even less understandable:

x = x%2 + 1;

Reply to
Lanarcam

x ^= 3; // explain wtf this is doing right here

Best regards, Spehro Pefhany

--
"it's the network..."                          "The Journey is the reward"
speff@interlog.com             Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog  Info for designers:  http://www.speff.com
Reply to
Spehro Pefhany

Maybe faster would be: x = (x & 1) + 1;

Rocky

Reply to
Rocky

Spehro Pefhany wrote:

Toggling the lowest two bits...

It's a WTF not to understand that this toggles just the two lowest bits.

Ok - I agree, toggling two bits can mean anything. It always depends on the context.

The magic happens once you deeply understand that an innocent XOR is at the same time half of an addition (the carry-less sum part) and half of a multiplication (in a Galois field, at least), along with the consequences that this implies.

XOR rocks! I've optimized quite a bit of arithmetic with it, and it wasn't always obvious. More than once I sat there in the middle of the night, staring at a truth table, only to find out that x = A ^ (B+1) or something similar was the fastest way to express an arithmetic expression.
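
For anyone puzzled by the "x ^= 3;" trick, here is a minimal sketch (plain C; nothing here beyond the 1/2 toggle this thread is about, plus the carry-less-addition identity):

#include <assert.h>

int main(void)
{
    /* 1 is binary 01 and 2 is binary 10, so XOR with 11 swaps them. */
    unsigned x = 1;
    x ^= 3;                              /* 1 -> 2 */
    assert(x == 2);
    x ^= 3;                              /* 2 -> 1 */
    assert(x == 1);

    /* XOR is the carry-less half of addition:
       a + b == (a ^ b) + 2*(a & b) for unsigned a, b. */
    unsigned a = 0xB3, b = 0x5D;
    assert(a + b == (a ^ b) + 2u * (a & b));
    return 0;
}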

Love it! Excellent! Binary arithmetic can be so beautiful.

Nils

Reply to
Nils

I vote for: if (1 == x) x = 2; else x = 1;

which is clear, and has no undefined states. Self-healing code.

--
 [mail]: Chuck F (cbfalconer at maineline dot net) 
 [page]: 
            Try the download section.


Reply to
CBFalconer

Changing a 2 to a 1 and a 1 to a 2, obviously ;) I have noted, though, that besides trying for the most obscure code to prove their worthiness to program, no-one has actually verified that the inputs are in range.

Robert

Reply to
Robert Adsett

So who writes this one?

x = x["0\02\01"];

Reply to
nospam

And? I don't know about everyone else, but I find the constant checking of prerequisites an annoying distraction when working on code, and a waste of time and space in final production code. I can understand some extra sanity checking in debug code, but it should be used in moderation and preferably protected by #ifdef DEBUG or whatever.

If function f(x) is defined over the domain x = {1,2} and is documented as such then there is no reason for f to check that the parameter is a member of this domain. It is the responsibility of the caller to ensure that this is the case and f is entitled to return whatever it likes (or indeed not return) if this is not so. Of course, for debugging it is useful to know straight away if x is something different so a debug test may be appropriate but it should be precisely that - a debug test and not in final code.
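
As an illustration, a debug-only test of that kind might look like the sketch below (the name f is just the placeholder from the paragraph above, not anyone's production code):

#include <assert.h>

/* f() is documented as defined only for x in {1, 2}.  The assert
   compiles away in release builds (NDEBUG defined), so the final
   code carries no extra cost and no extra error path. */
int f(int x)
{
    assert(x == 1 || x == 2);   /* debug-only precondition check */
    return 3 - x;               /* maps 1 -> 2 and 2 -> 1 */
}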

Too many times I have seen code where the caller does a series of sanity checks before calling a function, which then repeats those same checks. Does that strike you as remotely sensible? That isn't smart or intelligent design - it actually indicates a poor design process, because you haven't made a decision as to where the tests should be. In essence the function interface is poorly documented, regardless of any statement that may have been made about the input domain.

Why place the burden on the caller? Well, to begin with, what if the function returns a basic type - how do you indicate the error condition in general? Also, the caller has a much better understanding of the circumstances in which the function is being called. It is often possible to reason that the caller simply cannot pass an invalid value to the callee, simply by the way it works.

I am not diminishing the importance of testing _external_ inputs here - that is indeed necessary. But constantly checking the results of other elements within your own program is a complete waste of time. It doesn't matter how far you go; ultimately you have to trust that the rest of your program works as intended.

--
Andrew Smallshaw
andrews@sdf.lonestar.org
Reply to
Andrew Smallshaw

I don't think we are tremendously far apart really. I just observed that no-one considered the possibility that the numbers might not be in range. No asserts, no comments about the acceptable range, nothing. That, combined with the search for obscure techniques, strikes me as a telling cultural observation.

We could go on about the details of when and how to error check and I'm not a paragon of checking either.

However, the fact that no one considered that the inputs could be out of range (and for that matter your reaction to the observation) is an interesting commentary on how we view this practice we call programming. We seem to default to the assumption that the input values to our current problem will be in the range we expect.

I wouldn't be surprised if obscure coding techniques and insufficient range checking go together. And to be fair I think most contributions to the obscure examples have been tongue in cheek.

Robert

Reply to
Robert Adsett

... snip ...

So you didn't bother reading my reply?

--
 [mail]: Chuck F (cbfalconer at maineline dot net) 
 [page]: 
            Try the download section.

Reply to
CBFalconer

The assert() macro (or a homebrew equivalent) offers a reasonably concise way to achieve that, and can be considered a way of documenting the pre- and postconditions of the code.

My experience is that a bit of defensive programming saves far more time than it costs. Not only because catching errors early can save days of debugging and (re-)testing, but more importantly because it forces one to think about the pre- and postconditions and make them explicit.
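
A small sketch of what that looks like in practice (the percent_to_byte function and its limits are invented purely for illustration):

#include <assert.h>

/* Documented contract: percent must be 0..100, result fits in 0..255. */
unsigned char percent_to_byte(unsigned percent)
{
    assert(percent <= 100);                    /* precondition */
    unsigned result = (percent * 255u + 50u) / 100u;
    assert(result <= 255u);                    /* postcondition */
    return (unsigned char)result;
}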

Reply to
Dombo

Agreed. I encouraged the use of assert() on this very group only a couple of days ago. It has a few advantages that your typical range-checking code doesn't. Firstly, of course, it compiles to nothing with a flick of a compile-time switch. Also, when an assert fails, the action is to crash and burn. That is good because the function interfaces do not need to allow for the possibility of an error occurring.

This is one of the main problems when testing prerequisites in a language such as C that lacks exceptions - there often isn't any natural method of indicating the error to the caller, which forces yet more logic there to test for the error condition that was picked up by the callee. That is one reason why it is usually better simply to test the prerequisites in the caller in the first instance.
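
To make that concrete, here is a sketch (the divide/checked_divide names are mine, not from anyone's post) of the extra plumbing a callee-side check forces on a C caller, compared with testing the precondition at the call site:

#include <stdbool.h>
#include <stdio.h>

/* Callee-side check: the return value can no longer carry just the
   answer, so an out-parameter and a status flag appear everywhere. */
bool checked_divide(int num, int den, int *out)
{
    if (den == 0)
        return false;            /* the error has to be signalled somehow */
    *out = num / den;
    return true;
}

/* Caller-side check: the callee keeps its natural signature and the
   caller, which has the context, tests the precondition itself. */
int divide(int num, int den)
{
    return num / den;            /* documented: den != 0 */
}

int main(void)
{
    int q;
    if (checked_divide(10, 2, &q))      /* extra logic at the call site anyway */
        printf("checked: %d\n", q);

    int den = 2;
    if (den != 0)                       /* precondition tested by the caller */
        printf("plain:   %d\n", divide(10, den));
    return 0;
}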

That is an argument for better specification and documentation, not for redundant code in the final product. If there is ambiguity in what input is valid for a particular function, then that says more about design processes and project management than it does about the quality of the code in question.

--
Andrew Smallshaw
andrews@sdf.lonestar.org
Reply to
Andrew Smallshaw

It depends on your error strategy. Which error strategy is most appropriate depends on the application. For some applications the log, crash and burn strategy is preferable to muddling along in an inconsistent state with unknown consequences. In other applications it may be better to just continue and hope the customer will never notice.

Ideally one would be able to prove that there are no errors in the code instead of adding runtime checks (be it on the side of the caller or the callee). Source code analysis tooling like Coverity, Grammatech and Polyspace should render those runtime checks unnecessary. I have no first-hand experience with any of these tools, but I'm interested in hearing from anyone who has.

Even with perfect specifications and documentation in place, you still have to deal with people who may have misread, forgotten, or just interpreted things differently. Training people to make their assumptions explicit (e.g. by using assert()) makes it easier to detect deviations from the design specification, helps identify gaps and ambiguities in it, and triggers people to actually check the design specification.

Of course this wouldn't be necessary if all specifications were perfect (they rarely are; even official specifications, like those from ISO, often have ambiguities), and if all people adhered perfectly to the specifications. Unfortunately I live in an imperfect world; neither the specifications nor the people (including myself) are flawless. The best way to deal with that is to assume errors will be made at any stage of the process and to try to catch (and correct) them as early as possible. Finding the right balance, however, is the real challenge.

Reply to
Dombo

I had to check back to make sure I had; my apologies for forgetting it. Your solution does at least limit the range of outputs. It does leave open the question of whether values other than 1 or 2 are acceptable inputs.

At least someone is considering other inputs may occur.

Robert

Reply to
Robert Adsett

No, look again. It accepts all inputs, but only outputs the values 1 and 2. It was:

if (1 == x) x = 2; else x = 1;

or similar.

--
 [mail]: Chuck F (cbfalconer at maineline dot net) 
 [page]: 
            Try the download section.


Reply to
CBFalconer

You mean:

x = "\00\02\01"[x];

--
Best Regards,
Ulf Samuelsson
This is intended to be my personal opinion which may,
or may not be shared by my employer Atmel Nordic AB
Reply to
Ulf Samuelsson

Another error strategy is to continue on, using the best possible failsafe state, and resume normal operation as soon as the fault is corrected.

Consider the microcontroller in an electric stove: when the temperature sensor on the rear left burner fails, what is the best strategy:

  1. Log the error in some internal EEPROM and shut down the entire stove.
  2. Try to use the temperature sensor reading anyway even though it is clearly out of range.
  3. Cycle the heating element at a 50% duty cycle hoping that the user will never notice.
  4. Let the other three burners and oven operate normally, and force the rear left burner to 0% until its temperature sensor is fixed.

For those of us who still want to eat, choice #1 is not an option, and for those of us who don't particularly enjoy the taste of burnt mashed potatoes, choices #2 and #3 are out as well.
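
In code, choice #4 boils down to keeping per-burner fault state and degrading only the failed channel. A rough sketch (the burner_t type, the sensor limits, and the read_sensor/set_power calls are all invented for illustration):

#include <stdbool.h>

#define NUM_BURNERS  4
#define SENSOR_MIN   0        /* plausible raw reading range (illustrative) */
#define SENSOR_MAX   1023

typedef struct {
    bool sensor_ok;
    int  demand_percent;      /* power requested by the user */
} burner_t;

extern int  read_sensor(int burner);            /* hypothetical HAL calls */
extern void set_power(int burner, int percent);

void control_step(burner_t burners[NUM_BURNERS])
{
    for (int i = 0; i < NUM_BURNERS; i++) {
        int raw = read_sensor(i);
        burners[i].sensor_ok = (raw >= SENSOR_MIN && raw <= SENSOR_MAX);

        /* Choice #4: force a burner with a failed sensor to 0%
           and let the other burners carry on normally. */
        set_power(i, burners[i].sensor_ok ? burners[i].demand_percent : 0);
    }
}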

My experience has been that there are way too many programmers who believe that #1 is OK, and #2 is...um...well... not their fault because it's a hardware problem.

It's bad enough that network routers and cellphones have reset buttons; I just hope I never see the day when cars have reset buttons.

--Tom.

Reply to
Tom
