Fundamental C question about "if" statements

Whoops #1: I mostly program in C++ these days, and missed the introduction of stdbool.h. Bad me.

Whoops #2: 0 and 1 only? Good! This wasn't always the case, or at least about 30 years ago there was a compiler (Borland or Microsoft, can't remember which) that used 0 and -1 (or at least 0xffff).

Whoops #3: I was thinking of Walter, and your name dribbled off my fingertips. Oh well.

Thank you very much for _all_ the corrections -- I hate to be responsible for misleading people, so I appreciate it when people see my mistakes and nudge me back into line.

Reply to
Tim Wescott

Any C compiler that targets an 8-bit PIC is not the best for learning -- at least not for learning C. The PIC architecture is a very bad fit to the C virtual machine, and as such a compiler writer is forced to choose between making a compiler that is not compliant to the standards, or making a compiler that generates hugely inefficient code.

I can't speak to the XC8 compiler, but the C18 not only wasn't compatible, it pretty much required you to do Really Bad Things in order to get the most efficient code (this, by the way, is the same problem exhibited by the 8051 -- it's a totally different architecture from the PIC, but it misses the C virtual machine by a similar-sized mile).

Given that you can get ARM Cortex-M0 parts that are nearly as small as the smallest 8-bit parts, and are nearly as cheap (I think they get down to $0.75 or less in onesies from DigiKey), I don't see any reason not to use a part that's a better fit to the language.

Reply to
Tim Wescott

(snip)

> No, the result of a relational operator is always 0 or 1.

(big snip)

People would notice that one pretty fast.

But many Fortran compilers (I knew it back to the DEC Fortran IV compilers) do that as an extension.

Fortran has a LOGICAL type that normally doesn't convert to/from INTEGER (or other types). Some compilers allow it, and -1 isn't unusual. (They convert the bit pattern. It is then not so obvious which value is .TRUE. and which .FALSE. in an IF statement.)

PL/I uses bit strings, '0'B and '1'B, as boolean values, and converts them to numeric 0 or 1 when needed.

Personally, I like 0 and 1 better than 0 and -1. Note that C allows for ones' complement machines, where all bits set is negative zero. Do be careful with that one.

-- glen

Reply to
glen herrmannsfeldt

And C18 managed to accomplish both of those at the same time. XC8 is much more standards-compliant.

Reply to
John Temples

Microsoft's OLE (1990) defined type VT_BOOL as a 16-bit value with VARIANT_TRUE = -1 (0xFFFF) and VARIANT_FALSE = 0.

The original Visual Basic was designed concurrently with OLE and enshrined that usage. From OLE it carried over into COM and eventually into dotNET. It's no longer defined as 16-bit, but under the hood the value of "true" still is -1.

Every compiler that supports OLE or COM/ActiveX, and every dotNET language compiler has to deal with this. In "safe" code, C# allows _converting_ [not casting] boolean to integer: the conversion is the moral equivalent of (x == true) ? 1 : 0 so that the result is what C(++) programmers expect. From unsafe code you can see what the actual value is.


Reply to
George Neuner

Cobol has done something like that for years with "implied subjects". For simple cases it's fairly clear:

IF X > 10 AND < 20 THEN...

But weird and difficult to understand cases quickly arise, for example something like:

IF A > B AND NOT > C OR < D THEN...

In case you're wondering, the last term is "A < D", *not* "A NOT < D".

OTOH, if the NOT is on the implied operator, it *does* propagate, as in:

IF A NOT > B AND C THEN...

Where the second term is "A NOT > C".

Interestingly(?) the rules actually changed fairly significantly, I think in Cobol-68 - the older rules were even weirder.

In general the rule is to avoid implied subjects like the plague.

One thing that Cobol did largely prove, is that "natural language" makes for a lousy programming language.

Reply to
Robert Wessel

Yes, when I wrote that the compiler couldn't understand such an expression, I meant that a C compiler can't understand it. As noted by other people, Python certainly /can/ understand it.

But I don't think it is a good idea for languages to support such things, except perhaps if they are specifically aimed as mathematical tools. People can get carried away in believing the code they write is "normal" maths if the language is too smart at its interpretation - it is clearer, and therefore easier and safer, if there are simple and absolute rules involved.

Agreed - in a strongly typed language, this would be an error. (Though note that C++ is a strongly typed language with a proper boolean, but due to backwards compatibility with C, the result of the "<" operator still converts implicitly to an integer, so "a < b < c" compiles without complaint.)

Reply to
David Brown

The PIC chips have some good points - they are extraordinarily robust. I have worked with a customer's card that had an 85C qualified PIC, and they were looking for ways to push it beyond the 160C or so where it currently stopped working. And once Microchip starts delivering a PIC device, they never seem to obsolete them - you can still buy devices that are 15 years old or more.

But they are a horrible CPU to work with - in C or in assembly. At least with assembly you know you are working with something device-specific, so it is less of a surprise.

Reply to
David Brown

I don't expect to change your opinion at all, but I have not the slightest doubt that C99 lets you write clearer, simpler, more efficient, more readable and more logical code than ANSI C, just like ANSI C was a big step up from K&R C. The first edition of The C Programming Language was about K&R C - this should never, ever be taught. The second edition is from 1988, and covers most of what was just becoming ANSI C (the differences between ANSI C, C89 and C90 are minor). That's better, but the world has moved on from 1988.

For students of computing history, it is interesting to look at K&R C to see why modern C is better. But for people wanting to learn C for real use, it's about as much use as teaching medical students about blood-letting and prayers to Isis and Osiris before discussing antibiotics.

(Okay, that last comparison was a bit exaggerated, but I hope you get my point.)

Reply to
David Brown

Following your advice on compiler warnings paid off - another little bug fixed!


Reply to

_Bool in C99 onwards is not quite as strong as bool in C++, but it is not bad - and better than using an int or unsigned char in C (since converting to a _Bool is guaranteed to give 0 or 1). But _Bool is one of the reasons why C99 is a better language than ANSI C. (It was one of the C++ features that C99 copied.)

The "true" result of a relational operator is guaranteed to be 1 in C11, and I think also in C99. I can't say if it had the same guarantee in earlier C standards. And whether or not the standards say 1 is the value for true, compiler implementations have not always been loyal to that.

What is certainly true is that for people who made their own boolean types (pre-C99), there has been a certain amount of variety in the values they used for true and false. 1 and 0 are common, as are -1 and 0, but sometimes people have used different values. Just to be really entertaining, some people used 0 for true and 1 for false.

I believe - but cannot confirm - that there are other people involved in commercial embedded compiler development that also follow this group without making their presence known. If that means that these compiler developers listen to some of the problems and desires of embedded developers, then I guess that's a good thing. But I think Walter is the only open, active member of this group that contributes to discussions and makes it clear that he is a compiler developer. He is always worth listening to.

Usenet is a self-correcting medium. The fact that we can correct each other is what gives us the freedom to write these posts, knowing that it is unlikely that our mistakes will do much damage - thus we can give good advice without worrying too much about the risks of errors.

Reply to
David Brown

Glad to help. If more people used gcc warnings properly, there would be a lot fewer bugs in the world!

I'd also recommend -Wextra, and looking through the list of additional warnings - there are many that can be useful.

Reply to
David Brown

It's also very resource limited by today's standards.

The PIC18 has one thing going for it: it's available in PDIP, so it's easy to breadboard.

Until the PIC32 came along, it was also the only MCU range to have a USB device capability in a PDIP format so that made it of natural interest for some people wanting to do USB work.

I've played with the PIC18 in the past and I _really_ disliked it, but it was the only viable option for PDIP based USB device MCUs at the time so I stuck with learning it instead of tossing it on the scrap heap where it belongs.

I designed a library at the time which was intended to be portable but needed to include the PIC18, so I made some decisions I wasn't happy about even at the time and which made the code more cumbersome than it needed to be. This was mainly due to the limited resources on the PIC18.

The irony is that I never even got the library actually ported to the PIC18 due to other things coming up. :-)

I'm now revisiting that work for another reason and it didn't take me long to decide to dump the PIC18 style API for that library and to redo the API on the assumption that more resource rich MCUs are available instead.

The reason I am mentioning this is to suggest to the OP that they rethink the choice of PIC18, even for hobbyist use, in 2015.

BTW, does anyone actually use PIC18s for new production quality projects these days ?

[In this discussion, I'm treating the PIC24 as an extension of the PIC18 but also as something that comes across as an evolutionary dead-end. The PIC24 is even more poorly supported than the PIC18 for open source or hobbyist work; at least the PIC18 has SDCC available for it.]


Simon Clubley, 
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

Thanks, I'll do some reading on -Wextra.

Reply to

I've seen PIC18s in commercially made medical/dental chairs (controlling tilt/angle etc) and in some small speed controllers.

I guess "quality" is in the eye of the designer! ;)

Reply to


(6.5.8) relational-expression:
            shift-expression
            relational-expression <  shift-expression
            relational-expression >  shift-expression
            relational-expression <= shift-expression
            relational-expression >= shift-expression

IOW, relational operators are left-associative: "a < b < c" parses as "(a < b) < c".

Reply to

No doubt. But still - I think it's somehow important to understand the evolution of the language.

I would not use the first edition.

I would not think so; no. 1989 is now 26 years ago. I would agree that that is far back enough.

The idiom:

void x(y) int y; { }

is clunky enough that it should have died 20 years ago. I probably didn't even spell that right.

To clarify I do mean the second edition or better.

I do not care for the belief that there needs to be one way to teach something for "newbies" and another for the advanced class.

Besides, the K&R book is a signal example of technical writing.


I might actually have used that *to teach* medical students, partly to make a point about people using appeals to bad statistical analysis as if they were prayers to Osiris.

We do see a fair amount of despair-inducing examples of this in real life.

Les Cargill
Reply to
Les Cargill

That's fair enough. But the ideal book for learning modern C would cover C99 as the main text, encouraging things like bool, types, mixing declarations and statements (to get minimal scope), and so on. A discussion of the limitations of C90 should be a single chapter, also including examples of the horrors that were pre-C90 K&R C, while a discussion of C11 might be two or three chapters. (C11 won't take more than that, because there are not many changes.)

It is good to understand the history of any subject we try to learn, but we don't need to study a whole book before getting to the current material of interest. Someone really interested in C, rather than just wanting to get something done with the language, should definitely read The C Programming Language at some point - but it should not be part of a main "learn to program in C" course.

I am not sure K&R C had "void" - but I believe you've got it at least roughly correct. There were no prototypes, no checking, no automatic type conversions - but then, at the beginning at least, the only types were "int" and a floating point type.

It is reasonable to say that all students of C should start in a similar place - what distinguishes "newbies" from "advanced" is the amount they have learned, and the experience they have. But having both groups start "at the beginning" does not mean starting at the historical beginning.

I agree on that - but it is irrelevant to someone merely trying to learn a bit of embedded C programming. It makes the book worth adding to the bookshelf, but it does not make it useful for teaching modern C programming in embedded systems, because it does not cover modern C programming /or/ embedded systems.

If someone is learning to write technical documents with LibreOffice or FrameMaker, the TeXBook and the LaTeX Document Preparation System are not an obvious choice for learning material. Once the student gets more knowledgeable and experienced in the subject, those two books are worth reading - both for their examples of technical writing, and for the typesetting knowledge within. But they are not books the students should start with.

I was explaining this to one of my kids recently (since he'd read an article about ancient Egyptian medicine in a history magazine), which is why I thought of the example. There is definitely worth in showing such things as examples of how not to do medicine - my point is merely that you can do that in a book chapter or a single class, rather than making the first term's study about learning those prayers and incantations before then throwing them out for something more useful.

Reply to
David Brown

I would suggest that the OP might want to learn C first in a hosted environment rather than an embedded one.


It's not a problem with "complexity". The problem is that the C language grammar defines exactly what (a < b < c) means. It means ((a < b) < c).

Compilers do not generate code that does what you intend. They generate code that does what you write.

Yep, that's what the expression (a < b < c) means in C. Some languages (e.g. Python) allow chained comparison operators, and in Python (a < b < c) means ((a < b) && (b < c)).

Grant Edwards               grant.b.edwards        Yow! I'm a nuclear 
                                  at               submarine under the 
Reply to
Grant Edwards

I was writing a lot of Pascal back in the day and the C programmers were proud of the fact there were no prototypes or type checking - such things were for wimps. I found it humorous that these were required in C++.

Reply to
