In article , Elan Magavi writes
I did this many years ago.
-- \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ \/\/\/\/\ Chris Hills Staffs England /\/\/\/\/
This will work:
const unsigned char code CompileDateTime[] = __DATE__ " " __TIME__;
Scott
Yes. I use that.. I want to append the time and date to the output file name.
thanks
Elan
In article , Not Really Me writes
Or even
const char code CompileDateTime[] = __DATE__ " " __TIME__;
-- \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ \/\/\/\/\ Chris Hills Staffs England /\/\/\/\/
I may be being dense (surely not, I hear you cry) but why is the omission of the unsigned desired? I know it depends on compiler but I thought most defaulted chars to be unsigned anyway?
I'd have thought the first method was better because it explicitly states what's going on and leaves no ambiguity. Is the compiler likely to use signed ASCII values when populating the string from its __DATE__ and __TIME__ expansions?
"Tom Lucas" wrote in message news: snipped-for-privacy@iris.uk.clara.net...
Signed ASCII?
Either declaration works fine. There isn't any math done on strings.
Going off at a tangent here, but...
Am I the only one who thinks that having signed char be the default in C was the wrong design decision, and that C should have been designed so that if you wanted a signed character, then you should have to explicitly ask for it ?
Simon.
-- Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP If Google's motto is "don't be evil", then how did we get Google Groups 2 ?
"Tom Lucas" schreef in bericht news: snipped-for-privacy@iris.uk.clara.net...
Less typing, saves time at the keyboard ;)
The compiler doesn't care in this case, it just stuffs a string of ASCII characters in an array of sizeof(char) elements.
-- Thanks, Frank. (remove 'q' and '.invalid' when replying by email)
I thought the default was implementation defined.
In many cases, "implementation defined" was chosen to allow the compiler to pick the most natural implementation. It's not always convenient, but you get the highest performance for the default case, which is useful if you don't care.
First, char may be either signed or unsigned depending on the implementation. Conceptually at least there are three 'character' types: signed char, unsigned char and char. Second, some compilers and tools will complain about type mismatches when passing a char array where a signed or unsigned char pointer is expected. There is some benefit to having the storage type match the type that every routine using it expects. The only real reason I know of to use (un)signed char is if you are using it as a small integer type rather than a character. OTOH it doesn't often make a difference in the generated code, it 'just' decreases the signal-to-noise ratio of your tools' messages.
Robert
Neither "signed char" nor "unsigned char" is the default. This is an implementation-dependent issue, and a compiler writer is free to choose either option. An unsigned char would have been the wrong option for the PDP-11 architecture on which C was originally implemented, since chars are sign-extended by the hardware when loaded into a CPU register. Making them unsigned would have required masking the value after each register load.
Roberto Waltman.
PS: From International Standard ISO/IEC 9899 Programming Languages - C
Annex J (informative) - Portability issues ...
J.3 Implementation defined behavior
A conforming implementation is required to document its choice of behavior in each of the areas listed in this subclause. The following are implementation-defined: ...
J.3.4 Characters
... Which of signed char or unsigned char has the same range, representation, and behavior as "plain" char (6.2.5, 6.3.1.1). ...
unsigned char also avoids problems when passing characters to the <ctype.h> functions, which expect a value representable as an unsigned char (or EOF), and ensures that a negative char value is never mistaken for EOF.
-- Chuck F (cbfalconer at maineline dot net) Available for consulting/temporary embedded and systems.
... snip ...
signed is NOT the default. It is implementation defined.
-- Chuck F (cbfalconer at maineline dot net) Available for consulting/temporary embedded and systems.
I agree with you. A char is clearly derived from the word "character" which has no sign. And after being bitten a few times, I always use names like uint8 and int8 (and wider variants), which are "typedeffed" in a separate include file.
Meindert
That's what it says in my copy of K&R.
-- Grant Edwards
Hmmm, so it does. That seems oddly contradictory.
Robert
In article , Tom Lucas writes
I asked the same question many years ago in the same situation....
There are three types of char......
"signed char" and "unsigned char" which are integer types. [Plain] char which is a character type.
The signed/unsigned chars should be used for integers and the [plain] char for character data.
The sign of the plain char is up to the implementation of the compiler. So if you used the unsigned char (an integer type) it may not be compatible with the [plain] char character type the compiler uses.
-- \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ \/\/\/\/\ Chris Hills Staffs England /\/\/\/\/
So, if I'm understanding correctly, Chris is using the conceptual "third type" of character by omitting the "unsigned", which essentially specifies a "don't care" on the signedness of the character.
Does the third type actually exist in practice though? Surely all compilers define plain char as either signed or unsigned - normally with an option to specify which. I can see why that might raise compiler warnings if you start mixing signs later on in the code, but that would happen anyway.
As Meindert suggests, I always use a typedef header and declare all my types as schars or uchars, explicitly stating their signedness. I always use unsigned chars for strings and characters and I've never had a warning for it, but I guess that could just be the compilers I've used.
What is the real issue behind declaring char as implementation dependent ?
Is this some kind of Anglo-Saxon bigotry thinking that all printable characters will fit into 95 printable code positions or does this have something to do with IBM EBCDIC issues ?
Paul
I might be getting this wrong (if you want the full story, start a thread on comp.lang.c - after a couple of hundred posts, it should be clear), but I believe both C and C++ treat plain "char", "signed char" and "unsigned char" as three distinct types; plain char merely has the same range and representation as one of the other two, with the choice left to the compiler (and often a command-line switch).
I likewise use explicit typedefs when holding 8-bit data, which is very common in small systems. But I tend to use plain "char" when I really mean a character, such as for strings, as I consider a character to be a different type of data from a small number.