problem using FILE pointer

"bitness" is a silly category for processors. Databus width certainly does affect performance. Does a processor with a maximum system clock rate of X, that requires doubling up the databus accesses because of "mini-bitness" sizes mean it performs the same as a different processor family at rate X that has the same databus "bitness" as its internal registers/ALU? Or even the exact same processor family; model Y of system clock rate X and databus of "half bitness" versus model Y2 of system clock rate X and databus of "full bitness". Of course not. But a better metric is a processor's Dhrystone results. Would you suggest that a processor with a half-databus compared with its "bitness" per your definition, would chunk out the same Dhrystone as the exact same processor architecture, model 2, with the same bitness of databus? If you do, *that* is ridiculous. "bitness" is marketing fluff. So let us not take up anymore comp.arch.embedded database space with this discussion thread; especially of the server supporting the database is running a "half bitness" processor ;)

********************************************* Jeff
formatting link
Reply to
JeffR

This is only partly correct. In fact, the 68000 had 3 16-bit ALUs ... one unit for data operations, and *two* units used in parallel for address generation. Since the 68000 could perform address and data calculations simultaneously, certain instructions used all 3 units simultaneously.

I wanted to cite the Motorola architecture docs, but I wasn't able to find them on the web. However, for a summary of the internals see

formatting link

George

Reply to
George Neuner

I mean that there are code generator modules available which are not built into the binary distribution but which you can download and compile yourself into your own customized GCC version.

George

Reply to
George Neuner

You mean for example...

  avr-gcc -v
  Using built-in specs.
  Target: avr
  Configured with: ../gcc-4.1.2/configure --prefix=/usr --mandir=/usr/share/man
    --infodir=/usr/share/info --target=avr --enable-languages=c,c++
    --disable-nls --disable-libssp --with-system-zlib
    --enable-version-specific-runtime-libs
  Thread model: single
  gcc version 4.1.2 (Fedora 4.1.2-5.fc8)

Reply to
Dennis

Hmm. The IBM System/360 Model 30 (a mid-1960s 32-bit mainframe) had an 8-bit ALU. Of course, that was back in the day of real microcode. And it was much more than a single chip.

Reply to
Dennis

I've no idea what you mean.

1) There is no such thing as "the binary distribution". The FSF distributes source code, and the FSF sources support a dozen or so architectures (including one you say has been removed). Some architectures are maintained outside the FSF source tree (e.g. MSP430, NIOS2, M16C, etc.). AFAICT, all targets start out that way, and once they get ironed out they're added to the FSF source tree. I remember when quite a few of the targets now in the FSF sources were external.

2) Any one GCC build targets a single architecture, while GCC as a project supports many. So for any given build it's a tautology that "there are code generator modules available which are not built into the binary... but which you can download and compile yourself".
--
Grant
Reply to
Grant Edwards

I think George is a troll.

--
Grant
Reply to
Grant Edwards

I once used a processor which I and most other people would call a 16-bit processor (16-bit registers, 16-bit address space, 16-bit data paths). However, it was built out of a set of AM2901 bit-slice processors. Since each of the AM2901 ALUs was 4-bits wide, I guess George would say that the CPU in question was a 4-bit CPU.

There were some pretty famous CPUs built using the AM2901 family: DEC PDP-10, DG Nova, AN/UYK-44, and so on. All of them 4-bit CPUs, presumably.
--
Grant
Reply to
Grant Edwards

Also, how should the 68008 be classified, with its 8-bit external data bus and 32-bit instruction set? The 8088 had an 8-bit external data bus but 16-bit addressing internally.

From the compiler code generator's point of view, these are the same as their wider brothers; the number of external address lines might be smaller, but that should not affect the code generator.

Paul

Reply to
Paul Keinanen

And an 8-bit path to memory, IIRC.

--
ArarghMail902 at [drop the 'http://www.' from ->] http://www.arargh.com
BCET Basic Compiler Page: http://www.arargh.com/basic/index.html
Reply to
ArarghMail902NOSPAM

"bitness" is often a useful categorisation of processors, but not necessarily in the way people think (often people here say they want a

32-bit processor, when they really mean they want a fairly fast one).

A processor's Dhrystone MIPS is a pretty poor way to measure its performance - the performance is heavily dependent on what task you want the processor to do, the toolset used for compilation, and everything external (such as the memory connected to the databus). It gives a rough indication of speed class, but nothing more.
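As a toy illustration of the toolset dependence (this is not Dhrystone itself, just an assumed example): a good optimizer can fold the "work" below into a closed-form constant, so the same chip can post very different numbers depending only on the compiler and its flags.

  #include <stdint.h>
  #include <stdio.h>

  /* Toy benchmark kernel - NOT Dhrystone.  A modern optimizer can reduce
   * the whole loop to a constant, so the "score" says as much about the
   * compiler as about the processor. */
  static uint32_t work(uint32_t n)
  {
      uint32_t sum = 0;
      for (uint32_t i = 0; i < n; i++)
          sum += i * 3u;
      return sum;
  }

  int main(void)
  {
      volatile uint32_t sink = work(1000000u);  /* volatile keeps the result observable */
      printf("%lu\n", (unsigned long)sink);
      return 0;
  }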

Bitness is about *software* - it is about how wide a piece of data the processor can deal with directly. The width of the external databus is a compromise between speed, cost and physical size - it is not part of the processor architecture, the instruction set architecture, or anything else in the core. It is no more relevant to a description of the *processor* than the size of its cache, the device's support for SDRAM or DDR memory, or the number of timers on the device. Sure, it affects the processing speed (although a half-width databus device will typically run much faster than a half-clock full-width device with the same core). But that's just *speed* - bitness is about *functionality*.
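To make that concrete, here's a minimal C sketch (the 8-bit AVR comparison is just an assumed example): the same source works on any core, but a core narrower than its operands has to synthesize the operation from several native instructions.

  #include <stdint.h>

  /* On a 32-bit core this is a single native add.  On an 8-bit core such as
   * the AVR, the compiler builds it from four byte-wide add / add-with-carry
   * instructions.  The functionality is identical either way - the bitness
   * only changes how much native work is needed. */
  uint32_t add32(uint32_t a, uint32_t b)
  {
      return a + b;
  }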
Reply to
David Brown

I think George has already made it very clear that he doesn't understand what gcc is, how it is built up, how it is developed, how it is distributed, how it is customised or ported by some developers, and how it is used by users.

I couldn't find any summary or overview on gcc's website - it seems to assume that every visitor knows what gcc is, and how it works. Do you know any useful links that could give a good summary for George (and others - he is not alone in being unfamiliar with gcc)? The best I could find is not exactly official:

Other useful links could be:

Reply to
David Brown

That's probably the best general overview there is. There are a bunch of links at the bottom of the Wikipedia article that provide more detailed bits of history.

--
Grant Edwards                   grante             Yow! DIDI ... is that a
                                  at               MARTIAN name, or, are we
Reply to
Grant Edwards

Ha Ha.

Bit-slicing is an implementation detail - what matters is how many bits are being computed in parallel.

I didn't want to get deeper into this discussion, but here goes ...

The problem is to specify the chip's capability and give some indication as to its performance (at least relative to other members of the same family). The ISA defines the chip's programming API. Registers generally coincide with the needs/wants of the ISA but there are significant exceptions.

- Ex. CM-1|2, T-2|4|8: these have no programmer-visible registers. The CMs have arbitrary-width integer instructions that take operand and result widths as parameters.

- Ex. VAX-11: has 32-bit general registers, but has 64-bit integer ops that use an adjacent pair of registers, and 64- and 128-bit FP ops that use 2 or 4 adjacent registers (no dedicated FP registers).

- Ex. Am29050: like the VAX, has 32-bit general registers, but has 64-bit FP ops using an adjacent pair of registers.

Somebody ;) is now going to object that most of these architectures are not relevant today. So what? The issue is how to describe chip capabilities, and these examples and others show that there are problems with using ISA and/or register width to do that.

- A number of modern chips have an N x N -> 2N bit multiply, or even occasionally (N x N) + 2N -> 2N multiply/accumulate, producing results bigger than a register. Some have a special 2N-bit register for the result while others require a pair of registers to catch the result.
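In C terms, a minimal sketch of those widening forms (assuming N = 32; whether the 2N-bit result lands in a dedicated wide register or a register pair is up to the target):

  #include <stdint.h>

  /* N x N -> 2N multiply (here N = 32): the product is wider than either operand. */
  uint64_t widening_mul(uint32_t a, uint32_t b)
  {
      return (uint64_t)a * b;           /* casting one operand is enough */
  }

  /* (N x N) + 2N -> 2N multiply/accumulate. */
  uint64_t mul_acc(uint64_t acc, uint32_t a, uint32_t b)
  {
      return acc + (uint64_t)a * b;
  }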

AFAICS, the ALU's bit width (total combined bit width if bit-sliced) and its set of primitive operations are the only really objective measures of a chip's processing capability. For microcoded and CISC-on-RISC architectures (the current x86(-64)) the ISA is a work of fiction far removed from the machine's primitive capabilities. Visible registers (if any) may be just convenient groupings of bits meant to coincide with the ISA.

George

Reply to
George Neuner

That's what we said about the ALU width(s) in the m68k family. It was an implementation detail that was hidden from the compiler.

No, that doesn't matter at all to somebody designing or building a compiler.

No, that's not the problem. The problem we're discussing is how to describe the "width" of a CPU in the context of compiler design and implementation. I don't care if a VAX CPU is running at 1Hz or 100GHz. I don't care if it does ALU operations 128 bits at a time or 2 bits at a time. The VAX CPU is a 32-bit CPU.

And that's what we're talking about in this thread.

In this thread, we aren't concerned with "a chip's processing capability" in any way other than the ISA as seen by the compiler.

--
Grant Edwards                   grante             Yow! Well, I'm INVISIBLE
                                  at               AGAIN ... I might as well
Reply to
Grant Edwards
