problem using FILE pointer

It's still illegal, and the compiler can do anything it desires. It is very simple to put a "return 0;" in the main routine.
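
For anyone following along, a minimal hosted-environment sketch of what that looks like (the program body is just a placeholder):

    #include <stdio.h>

    int main(void)
    {
        puts("hello, world");
        return 0;          /* explicit termination status */
    }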

Reply to
CBFalconer

Again, misinformation. getc and putc return EOF (a negative integer) on error or EOF. They are often the most efficient way to use the input system. The thing to watch out for with them is the fact that they can evaluate the FILE* parameter more than once. They are unique among the file handling routines in doing this, but it enables eliminating allocation of input buffers, without the disadvantages of not buffering. This behaviour only occurs if you let getc etc. be macros.
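
To illustrate both points - that the return value must be tested against EOF as an int, and that a macro getc may evaluate its stream argument more than once - here is a small sketch (the function name is mine, not from the thread):

    #include <stdio.h>

    /* Count the characters in a stream. */
    long count_chars(FILE *fp)
    {
        long n = 0;
        int ch;                 /* int, not char, so EOF stays distinct */

        while ((ch = getc(fp)) != EOF)
            n++;

        return n;
    }

    /*
     * Because getc may be a macro that evaluates its argument more than
     * once, never pass it an expression with side effects:
     *
     *     ch = getc(*fpp++);      // fpp may be incremented twice
     *
     * Use fgetc, which must evaluate its argument exactly once, when the
     * argument has side effects.
     */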

Yes, I used the word 'signal' in a cavalier manner.

Reply to
CBFalconer

Seems wasteful to start with a call (wasted stack space) and have a non-reachable "return" instruction (wasted program memory).

And it's not "illegal" unless you have a hosted environment - see ISO/IEC 9899:1999 (E), 5.1.2.1 "Freestanding environment".

Reply to
Spehro Pefhany

... snip ...

Maybe so, but it seems to me there is no excuse for using pre-std compilers any more. That standard has been around for 20 years now, and the earlier public drafts for even longer.

Well, I refused to use C in those days. It was too easy to make fatal errors, and I am quite capable of that. My embedded work used Pascal and assembly - Pascal to the ISO standard, btw. I first looked at C for embedded work with a compiler (I'm not sure of the name anymore) around 1976. It was published in DDJ, was integer only, and compiled itself VERY slowly. I considered revising it to use my floating point packages, among other things, but decided against it (primarily for lack of a standard).

More reasons to use standard-compliant systems. If, for some rare reason, you are really forced to use such an ancient and faulty system, you should be well aware of those deviations.

Reply to
CBFalconer

... snip ...

Good. I hope this exchange also impresses other people.

Reply to
CBFalconer

What is illegal? Is it including the void or excluding it? Including a return statement or excluding it? Putting in a superfluous return statement is the way to go?

Reply to
Everett M. Greene

Unfortunately, if you have to work with an old 8- or 16-bit chip or an integer DSP, you may find that the latest C compiler for it is still pre-ANSI. A lot of chips had compilers based on SmallC, on PCC, on GCC 1.4 (the first really stable version) or on ACK 1.something. GCC didn't achieve ANSI compliance until 2.0 (~1993). I don't remember when ACK finally became ANSI, but it was in the mid-90s. SmallC and PCC never got there.

Even well-known 16-bit chips like the 8086 and 68K, which were supported by the official versions of GCC, have now been dropped from the 4.x releases. You have to use 3.x versions for them.

George

Reply to
George Neuner

... snip ...

I didn't realize that. That was Stallman's biggest mistake, IMO - I mean ignoring systems with integers smaller than 32 bits.

Reply to
CBFalconer

Since when is the 68K a 16-bit chip? All the ones I've ever used had 32-bit registers.

Huh? I just looked at gcc trunk at

formatting link

The following "non-32-bit" targets are still there:

m68hc11
avr (actually it's 8-bit)
pdp11
h8300 (some sub-types are 16-bit)
stormy16
picochip
m68k (which _is_ a 32-bit architecture)

I checked the stuff for the AVR (an 8-bit CPU), and it's got commits less than a week old.

I don't think it's true.

There are plenty of "less than 32-bit cpus" supported by gcc.

Reply to
Grant Edwards

The 8086 was never well supported by gcc - by the time gcc started to support multiple target architectures, 16-bit x86 devices were close to obsolete. Although DOS and Windows 3.1 (and parts of the Win9x family) used the 16-bit mode of the x86 devices, *real* computers running *real* operating systems used 32-bit modes. Others made branches of gcc for cores outside gcc's main target areas, including DJ Delorie's famous port for DOS and early Windows, which had an 8086 backend.

The 68K support is a different story. The 68K family has always been 32-bit, not 16-bit. It has some 16-bit features - a 16-bit external databus, and the original 68000 used a 16-bit wide ALU (running twice for 32-bit operands). But these are minor implementation details, trading speed against cost and chip size. The instruction set architecture and basic register width are what counts - it was 32-bit from its conception.

The first target for the first version of gcc was the 68k, and the 68k family (now as ColdFires) is still a major target that is actively developed and improved. Most of the improvements are through generic gcc changes or ColdFire-specific changes, but they still affect compilation for the 68k.

Targets are dropped from gcc after going through a period of deprecation. If there are users or developers still interested in using the old targets, they will normally be kept alive - all a user needs to do is ask. Targets only get removed if they really are obsolete and if it would require significant effort to keep them (therefore they often get dropped at major version changes).

Stallman has made *many* big mistakes throughout his career (inevitable, given how much he has done), and this (if it were true) is far from his biggest. It's not even remotely near to the biggest mistake in the design of gcc (that would be the intentionally plugin-hostile structure, IMHO - something that is changing at the moment).

Stallman wrote gcc as a compiler for use on Unix-type systems. Unix-type systems all run with at least 32-bit processors. Therefore, gcc was aimed at targets with at least 32-bit cores. Quite simple, really.

Now, of course, there are gcc ports for a (small) number of 16-bit architectures, and at least one 8-bit architecture (the AVR). There may be other ports that are outside the main tree.

Reply to
David Brown

Settings\btp\Desktop\pertest\ecg.txt","r");

No, this is the correct suggestion, but the fscanf return value needs to be checked - you cannot blindly call fscanf without looking at what it returns. It should be:

if (fscanf(fp, "%d", &ecg[j]) != EOF) {
    ++j;   /* this only works for a byte array; %d will push ints into the
              array; if ints are desired it should be j += 2 */
}

Then the !feof check will cause the loop to terminate. All is then good .. well sort of.
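
A common alternative (my sketch, not the code from the original post) is to loop on the conversion count that fscanf returns, which also catches matching failures rather than only end-of-file:

    #include <stdio.h>

    /* Read up to 'max' integers; returns how many were actually stored. */
    int read_ecg(FILE *fp, int ecg[], int max)
    {
        int j = 0;

        /* fscanf returns the number of successful conversions, so == 1
           ends the loop on EOF *and* on a stray non-numeric character. */
        while (j < max && fscanf(fp, "%d", &ecg[j]) == 1)
            ++j;

        return j;
    }

    /* usage: int ecg[1250]; int n = read_ecg(fp, ecg, 1250); */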

Another problem with this code is that the array is declared uint8, yet the fscanf format specifier is "%d". The %d will store a full int, the standard word size of the processor (e.g. 2 bytes), into each element, yet the declaration is a uint8. This will overflow the buffer when the last value is read and placed at the end of the array on most processor architectures, and cause an exception at runtime (if you're lucky!), unless the compiler is smart enough to complain, which I doubt since it's a format specifier. The declaration should be (and the static initializer is a bad idea; if this is an embedded system, all that needs to be done is to force BSS to be initialized to zero, then remove the {0}):

unsigned int ecg[ecg_size];

and ecg_size set to this if bytes are desired:

#define ecg_size (1250 / sizeof(int))
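
If bytes really are wanted, another way (a sketch of mine, not the original poster's code) is to keep the uint8 array but scan each value into a temporary int and narrow it afterwards, so %d never writes past a one-byte element:

    #include <stdio.h>
    #include <stdint.h>

    #define ECG_SIZE 1250

    static uint8_t ecg[ECG_SIZE];       /* BSS, so it starts out zeroed */

    int read_ecg_bytes(FILE *fp)
    {
        int j = 0;
        int tmp;

        /* %d always stores a full int, so give it an int to store into,
           then assign the narrowed value to the byte element. */
        while (j < ECG_SIZE && fscanf(fp, "%d", &tmp) == 1)
            ecg[j++] = (uint8_t)tmp;

        return j;                       /* number of samples read */
    }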

The writer of this code really ought to spend some more time on it before posting it here. There are many problems with it as posted.

Reply to
JeffR

---> and ecg_size set to this if bytes are desired:

I meant:

and ecg_size set to this if ints are desired:

(1250 * sizeof(int))

and the increment of the index j should be:

j += sizeof(int);

Reply to
JeffR

No no ... 16-bit databus processors suffer in performance, and that is *not* a minor detail. The fact that internal registers were 32 bits and the ALU was 32 bits doesn't mean the processor *is* 32 bits. This is marketing fluff. Double up the databus accesses for 32 bits worth of data in 16-bit chunks and that's a big deal, especially if the processor has no cache. That's no different than saying the 80188 was a 16-bit processor. It certainly was not. It had an 8-bit databus that was double-cycled for 16-bit accesses.
Reply to
JeffR

Depends on how you look at it. The 68K had 16-bit ALUs. It used 2 ALUs in parallel to work on 32-bit data. The 68020 was the first in the family to have a 32-bit ALU.

There are legacy code generators and anyone can submit a new generator to the tool chain, but the GCC development team does not maintain unofficial targets. GCC has *never* officially supported any 8-bit device. The steering committee announced with v4.0 that no targets smaller than 32-bit will be officially supported.

They are slowly removing from the official release code generators for chips which are no longer popular (you can check this by comparing version manuals). That doesn't mean that you won't be able to find (or build) GCC to work with your legacy chip - all the old code generator modules are still available for download, they have just been reclassified as "additional" and are no longer included in the official release.

68K won't be dropped quickly (if ever) because it is a subset of chips which are still officially supported. But if your code doesn't work on a 68K, there's no one to complain to.

George

Reply to
George Neuner

IMO the ALU width defines the chip, but I won't debate that here.

Coldfire is only partly 68K compatible - it has the same instruction set, but not all instructions are implemented - in particular there are fewer addressing modes - so there is a major impact on compilation. Coldfires are more memory bound than real 68Ks and require higher clock speeds and bigger caches to get equivalent performance where the 68K could use more complex addressing modes. And v4/v5 Coldfires have an incompatible FPU, so you'll get different answers than you would if you ran the code on an '040 or '060, or on an earlier chip with a 68881/68882 coprocessor.

George

Reply to
George Neuner

This sort of argument turns up regularly - you can look it up in the archives if you want, rather than starting a new battle here. I'm going to give a brief summary of why the 68000 is 32-bit (and the 80188 is 16-bit), why your arguments here are completely wrong, and what other nonsensical values have been used for the "bitness" of a processor.

First off, "bitness" has nothing to do with performance. If you double the clock frequency (all other things being equal), you double the performance - that doesn't affect the "bitness". If you have a bottleneck that slows a chip down, it does not affect its "bitness".

In particular, the width of a particular chip's external databus bears no direct relationship to the "bitness" of the processor. The processor is *part* of a device, it is not the whole device - just as it is not the whole system, but part of the system. A 32-bit device can have a 16-bit databus (an example would be the 68332 - the core is virtually identical to the core of the 68020, but the external databus is 16-bit). A 32-bit processor with a 32-bit databus can be connected to an 8-bit memory. A 32-bit processor can have a 64-bit or 128-bit wide databus (not uncommon on high-end processors). A microcontroller with internal flash may have no external databus at all. Thus the databus width is irrelevant when discussing the "bitness" of a processor.

ALUs are more integral to a processor core, so let's consider them. There are plenty of processors (mostly older or specialised designs) that have very narrow ALUs - the COP8, for example, has a 1-bit wide serial ALU. Yet the processor itself is 8-bit. The 68000 had a 16-bit wide ALU - 32-bit operations passed through it twice (it did not, as another poster claimed, have 2 16-bit ALUs working in parallel). High-end processors have multiple ALUs - that does not make their "bitness" larger. They also have extra-wide ALUs for SIMD or vector instructions - again, this does not give them a higher "bitness".

Some people (in particular, Microchip's marketing folk) like to refer to the width of the internal flash as the chip's "bitness". This is, of course, even less relevant than the width of the external databus, since it is only concerned with code and not data. Again, there are plenty of examples of microcontrollers with 32-bit cores connected to 16-bit internal flash, 32-bit internal flash, and 64-bit internal flash.

There are only two features that can be considered to be useful, consistent and realistic measures of the "bitness" of a processor - the maximum width of data that can be handled by most general instructions, and the width of the main general purpose register(s). These are almost always the same (I can't think of any counter-examples off the top of my head). For most processors, this also corresponds to the width of a C "int" (except for 8-bit processors, since a C "int" must be at least 16-bit, and for 64-bit processors, since many C models use a 32-bit "int"). This definition of "bitness" is the size that is relevant for software running on the processor, and is key to its ISA (instruction set architecture). Any other widths on the device are almost entirely irrelevant to software (exceptions noted below) - they may affect the speed of the device, but not its functionality (and thus are as irrelevant as the clock speed in discussions of "bitness").
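
As a purely illustrative check of that correspondence (nothing here is from the thread):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* On most targets this prints the "bitness" in the sense above:
           16 on a typical 16-bit part, 32 on a 32-bit part (many 64-bit
           targets still report 32, since they keep a 32-bit int). */
        printf("int is %u bits wide\n",
               (unsigned)(sizeof(int) * CHAR_BIT));
        return 0;
    }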

Binary code for the 8086, using 16-bit data and 16-bit instructions, will run perfectly well on the 80188 - they have the same ISA, and are both 16-bit. Binary code for the 80386SX and the 80386DX are identical, and both are 32-bit processors despite the SX having a 16-bit databus.

All the 68k family, including the 68000 and its descendants the 68020 through to the 68060, the embedded devices like the 68332, and all the ColdFire devices, are 32-bit. This is because their data registers are 32-bit wide, and the ISA supports ALU and general purpose instructions up to 32-bit wide. The fact that some devices are faster at handling 16-bit data than 32-bit data is irrelevant, just as is the fact that some devices have special instructions for working with 64-bit data (or in fact a multiple-move of up to 512 bits at a time). This is also easy to see from compilers - any C toolchain for the 68k devices will support all the original 68k devices and the ColdFires (at least, those available when the compiler was written!), all using 32-bit "ints", and all able to generate binary code that will run unmodified on all devices.

As I mentioned above, there is at least one point at which the width of external databuses may be visible from software - the maximum width of atomic accesses (including locked accesses or read-modify-write accesses) may be affected by the databus width. But that is always somewhat system-dependent.

There are also some processors that are not easily categorised. The Z80 is a prime example - it has an 8-bit accumulator, and many instructions are thus limited to 8-bit. But it also has 16-bit register pairs, and many general and ALU instructions can operate directly on these. The Z80 is probably best referred to as an 8/16-bit hybrid. There are also many DSP architectures that do not have a simple "bitness" width.

Reply to
David Brown

See my other post regarding the irrelevancy of the ALU width in discussing processor bitness. Also note that the 68000 had *one* 16-bit ALU, which was used twice for 32-bit operands - having 2 16-bit ALUs would be a silly idea, since a single 32-bit ALU would be far more efficient at almost identical cost.

formatting link

This would be news to the official gcc maintainers for the AVR (8-bit) and m6811 (8-bit) and m6812 (16-bit) targets that have been part of the main gcc tree for many years.

True.

Yes there is - the 68k family, which covers the original 68xxx processors and the ColdFire devices, is fully supported and actively developed by the gcc maintainers. Like other gcc targets, if you have support questions you can ask on the main gcc mailing lists, or on target-specific mailing lists, or contact the official maintainers, or get support through third-parties who provide support contracts. In this case, it is CodeSourcery who are the official maintainers of the ColdFire (and ARM, and various other) gcc targets, and they provide support ranging from free mailing lists to expensive but unlimited professional support contracts.

Reply to
David Brown

I look at it as the width of the registers and the natural "width" of assembly instruction operations. We're talking about this in the context of compiler support, and that's what compilers care about.

I don't care. Neither does gcc.

I don't care. Neither does gcc.

I don't care. Neither does gcc.

formatting link

So, the target is in the official source tree and is being actively developed and supported, but it's still not "officially supported"?

You said that anything smaller than 32 bits had been removed from 4.0, and you gave the 68K as an example. The 68K is neither less than 32 bits nor has it been removed from 4.0.

What do you mean by "official release"? They're still in SVN trunk.

--
Grant
Reply to
Grant Edwards

IMO you are completely wrong here - the width of the ALU *implementation* is just an implementation detail. The width of data that the processor can pass through the ALU in an instruction, on the other hand, is fundamental to the ISA - and thus defines the bitness of the processor. So an ALU that happens to be physically implemented as 16-bit for cost reasons, but which works transparently with 32-bit data from 32-bit registers, is a 32-bit ALU in a 32-bit processor.

That is only sort of true. When designing the ColdFire, Freescale (I can't remember if they were still "Motorola" at the time) looked at the 68k ISA, dropped some parts that were rarely used but cost a great deal to implement, and then designed a completely new implementation of the same ISA using a modern design. One of the features that got dropped was the more complex addressing modes - most of which were not much used, and many of which had already been dropped on newer 680x0 devices such as the 68040 and the 68060.

The missing address modes do not have a "major" impact on compilation, though of course it only takes a single unimplemented instruction to lose binary compatibility. The complex addressing modes were used on only certain types of code (in particular, complex data structures and array and pointer manipulation). They were also falling into disuse before the ColdFire - some had been dropped from the later 680x0 devices, and others were little used even though they were implemented, since code for the 68040 and 68060 was often faster if these modes were avoided (they caused pipeline stalls, and hindered the compiler from re-ordering instructions to reduce latencies).

The ColdFires *are* more memory bound than the original 68k devices, but that is mainly because they execute far more instructions per clock cycle! The difference due to compact complex addressing modes being split into several smaller (but much faster) instructions, and therefore requiring more code memory bandwidth, is tiny. I'd be very surprised if code size increased more than a couple of percent between a ColdFire-optimised compilation and a 680x0-optimised compilation.

There are certainly plenty of instructions that exist on some 680x0 devices and not on the ColdFires, and vice versa, and also when comparing the different devices in each family (the 68040 has instructions that are not in the 68020, and vice versa). But these are minor points - the main ISA is the same across all the devices, and it is not hard for a compiler to generate code that will run on all of them (though less efficiently than if it can use extra instructions).

Reply to
David Brown

They're in SVN trunk and freshly downloaded 4.3.3 source tarballs. They're being actively maintained. What exactly is meant by "dropped from 4.x releases" and "not officially supported"? If they're still in SVN trunk and 4.x source tarballs and are being actively maintained, why should anybody care whether they've been "dropped from 4.x releases" and "aren't officially supported"?

I've got no problems with that. If nobody steps up to maintain something, then it goes away. I just can't figure out why you say that targets like the m68k "have now been dropped" and are "not supported" in 4.x, and why you "have to use 3.x versions for them."

I give up. What does "no longer included in the official release" mean?

I see.

So m68k support "has been dropped from the 4.x releases" but "won't be dropped quickly (if ever)".

--
Grant
Reply to
Grant Edwards
