Getting started with AVR and C

I misread part of your post as claiming that even without casts a compliant compiler would promote both 16-bit operands to 32 bits. I was attempting to point out that that isn't true if an "int" is 16 bits (it's a not-uncommon misapprehension that gcc is only available with 32- or 64-bit ints).

Ah, that's not what I intended to write. AFAICT, everybody agrees with everybody else, we just aren't managing to express that clearly.

--
Grant Edwards               grant.b.edwards        Yow! People humiliating 
                                  at               a salami! 
                              gmail.com

That is indeed what I meant.

--
Grant Edwards               grant.b.edwards        Yow! Now KEN and BARBIE 
                                  at               are PERMANENTLY ADDICTED to 
                              gmail.com            MIND-ALTERING DRUGS ...

Or on the TMS320C40, where char, int, long, long long, float and double are all 32 bits and all have sizeof 1. Trying to implement any sort of communications protocol with that was fun.

--
Grant Edwards               grant.b.edwards        Yow! Is it NOUVELLE 
                                  at               CUISINE when 3 olives are 
                              gmail.com            struggling with a scallop 
                                                   in a plate of SAUCE MORNAY?

avr-gcc does indeed work very nicely as long as you don't look at the code generated when you use pointers. You'll go blind -- especially if you're used to something like the msp430. It's easy to forget that the AVR is an 8-bit CPU, not a 16-bit CPU like the '430, and use of 16-bit pointers on the AVR requires a lot of overhead.
--
--
Grant Edwards               grant.b.edwards        Yow! I wonder if I could 
                                  at               ever get started in the 
                              gmail.com            credit world?

There's a key difference there. The implementation you describe could be fully conforming. The one he described could not; you can't meet the standard's precision and range requirements for double with a 32-bit data type.

Thanks for that information. Claims have frequently been made on comp.lang.c that, while the C standard allows CHAR_BIT != 8, the existence of such implementations is a myth. I'm glad to have a specific counter example to cite. There's something I've wondered about such machines: when data from other machines containing data types smaller than 32 bits (for instance, ASCII text files) is transferred to the TMS320C40, how is this usually handled? I could imagine three main possibilities:

a) Four 8-bit bytes of data are packed into each 32-bit byte.

b) Each field value stored in a data type smaller than 32 bits is converted to a 32-bit type, and stored as such. For instance, a file containing 45,678 8-bit bytes of text gets converted into a file containing 45,678 32-bit bytes of text.

c) Different methods are used in different contexts, leading to constant headaches. This strikes me as the most likely possibility.

--
James Kuyper

The other problem with it is the separate program and data memory spaces. Fine for small, deeply embedded things, but it started to show strain when I wanted an LCD display, menus, etc. I would not use it for a new project unless there was a very good reason -- ultra-low power, perhaps. The Cortex M3 is much nicer, but the chips are much more complicated, of course.

--

John Devereux

Well, I thought we were talking about useful people writing good code using C. Useful people who want to write good code using C don't always use correct "standardese", but they do know how to read all the pertinent documentation.

Had you gone back and read said pertinent documentation you would have done a "useful people" sort of thing, and realized what _my_ point was about, regardless of any misuse of terminology on my part.

But hey -- I'm just a guy who designs embedded systems that actually make money for my customers. One of the skills that requires is listening to people and not trying to slam them for violating some trivial rule of terminology when they're getting the gist of their statements right.

I believe that you need to spend some time with engineers who actually design product.

--
My liberal friends think I'm a conservative kook. 
My conservative friends think I'm a liberal kook. 
Why am I not happy that they have found common ground? 

Tim Wescott, Communications, Control, Circuits & Software 
http://www.wescottdesign.com

Well, the case I cited had 32-bit doubles. Other than that, I think it was conforming. When trying to reuse source modules, it's shocking how many assumptions I make that aren't implied by the C standard.

If you look at some other DSPs, I think you'll find similar examples where everything is 16 bits (they tend not to support FP at all).

A TMS320 is a DSP, and I doubt there are any that have actual filesystems. OTOH, high-speed serial interfaces are common, and filling in things like protocol headers involves a lot of shifting/masking/anding/oring.

That's pretty common for data being sent to/from "normal" CPUs.

If you need to do any sort of string manipulation (which you try to avoid like the plague), that's what you end up doing.

Exactly.

And the icing on the cake is that the 32-bit FP representation isn't IEEE-754, so you also get to convert between external and internal FP representations. Fun!

--
Grant Edwards               grant.b.edwards        Yow! My NOSE is NUMB! 
                                  at                
                              gmail.com

And plain char *isn't* one of the "standard integer types", even though it's a standard type, and it's an integer type, and its characteristics (range, representation, and behavior) are identical either to those of signed char or to those of unsigned char, both of which are "standard integer types".

--
Keith Thompson (The_Other_Keith) kst-u@mib.org   
    Will write code for food. 
"We must do something.  This is something.  Therefore, we must do this." 
    -- Antony Jay and Jonathan Lynn, "Yes Minister"

Oops - I missed that. OK - a more accurate statement would have been that your example had no additional conformance issues that weren't present in his example.

--
James Kuyper

James Kuyper writes: [...]

Trivia: I've used a machine (Cray T90) with 8-bit char and 64-bit short, int, long, and long long. It had no 16-bit or 32-bit integer types.

[...]

They *must* have other names.

Normally such names would be identifiers reserved to the implementation, starting with an underscore and either another underscore or an uppercase letter. (Though I suppose an implementation that supports other forms of identifiers as a language extension could use them; for example, some compilers permit identifiers with '$' characters.)

--
Keith Thompson (The_Other_Keith) kst-u@mib.org   
    Will write code for food. 
"We must do something.  This is something.  Therefore, we must do this." 
    -- Antony Jay and Jonathan Lynn, "Yes Minister"

I believe that C was implemented on the PDP-10. I didn't use it when I was programming the PDP-10 (I used assembly, then, and some other languages... but not C, until I worked on Unix v6 in '78.) But that was a 36-bit machine. And ASCII was packed into 7 bits so that 5 chars fit in a word. No one used 8, so far as I recall. That was the standard method. So I'm curious now what the C implementation did.

Of course, all that is prior to any standard. But it might be another case to discuss, anyway.

Jon


Using left/right shifts and AND and OR operations works just fine, and it works OK across platforms with different CHAR_BIT values and different endianness. Do not try to use structs etc.

IMHO CHAR_BIT = 21 is the correct way to handle the Unicode range.

On the Unicode list, I even suggested packing three 21-bit characters into a single 64-bit data word as UTF-64 :-)

--
upsidedown

...

Not if there aren't any. :-) I should have worded that differently. They're not required to exist, but if they do, you're right - they must have other names.

--
James Kuyper

I like it -- but it breaks as soon as they add U+200000 or higher, and I'm not aware of any guarantee that they won't.

I've thought of UTF-24, encoding each character in 3 octets; that's good for up to 16,777,216 distinct code points.

--
Keith Thompson (The_Other_Keith) kst-u@mib.org   
    Will write code for food. 
"We must do something.  This is something.  Therefore, we must do this." 
    -- Antony Jay and Jonathan Lynn, "Yes Minister"

Except for self-modifying code, why would one want data (read) access into program space (unless you are writing a linker or debugger)??

While working with PDP-11's in the 1970's, the ability to use separate I/D (Instruction/Data) spaces helped a lot to keep code and data in private 64 KiB address spaces.
--
upsidedown

As I remember the stories, the CRAY-1 had 64 bit char.

Yes, the TOPS-10 file format stores ASCII as 5 characters to the word, but C can't do that. In a discussion some time ago about actual implementations, 9- and 18-bit chars were discussed. The PDP-10 has instructions for operating on halfwords, which could be used, possibly with execute (XCT) to select the appropriate instruction, or, for loops, an unrolled loop. Or 9-bit chars using the byte instructions.

I keep wondering about a C compiler for the 7090, one of the last sign magnitude machines, and also 36 bits. It was usual to store six 6 bit BCDIC (more often called just BCD) characters per word. Also, 6 bit characters on 7 track magnetic tape.

The card reader on the 704, and I will guess also the 7090, reads one card row into two 36 bit words, ignoring the last eight columns. Software had to convert that into 12 characters in two words.

-- glen


There are good reasons for self-modifying code space. The first and most obvious would be an operating system loading a program into memory: while the O/S does this, the memory is treated as data. A more meaningful example for small embedded applications, perhaps, is the ability to modify interrupt vectors pointing at code; if the processor refers to I space only for interrupt vectors, that may not be possible. And there are times when you have externally stored code -- for example, a large, low-pin-count external serial-access memory used to hold infrequently needed code blocks when the internally supplied flash just isn't big enough.

In embedded, there are reasons. For operating systems, there are also reasons. And I am tapping only what is on the tip of my tongue and using no imagination, right now.

Jon


The "program" space was flash (non-volatile). The "data" space was registers and RAM (volatile). All non-volatile data (strings, screen templates, lookup tables, menu structures, and so on) has to be in flash memory (IOW "program space"). It makes a _lot_ of sense to use it directly from flash instead of copying it all to RAM when RAM is so scarce.

But on a PDP-11, data space was plentiful, and constant data didn't also have to reside in Instruction space (on the AVR, flash is the only non-volatile storage you have).

On some parts there is some erasable non-volatile storage in data space. But, it's always scarce, and putting stuff there that is never to be altered is both wasteful and dangerous.

--
Grant Edwards               grant.b.edwards        Yow! I didn't order any 
                                  at               WOO-WOO ... Maybe a YUBBA 
                              gmail.com            ... But no WOO-WOO!

Nobody said anything about modifying code space.

The "data" that's put in code space is never modified (at least not in any project I've ever seen).

It's not _modifying_ the program space that's the issue (that is generally only done for firmware updates, where the entire flash is erased and reprogrammed).

Simply _reading_ program space _as data_ is problematic. If you've got a lot of string constants or constant tables, you want to just leave them in flash (program space) rather than copy them all to (scarce) RAM on startup.

Now you need three-byte pointers/addresses to differentiate between the data at 0xABCD in data space and the data at 0xABCD in program space. Three-byte pointers are how some compilers solve that problem -- but I don't think avr-gcc does that.

--
Grant Edwards               grant.b.edwards        Yow! I think my career 
                                  at               is ruined! 
                              gmail.com
