LPC900/80C51 Compiler Toolchain

Free-as-in-speech, not (necessarily) free-as-in-beer. Contrary to popular FUD, Free Software is not about preventing commerce.

"Correct at will..." :-)

What difference does that make?

Huh? Do you mean:

There are many cynical non-believers who are not helping in order to keep the faith, but are instead helping because they have other objectives. These same cynical non-believers don't care if FOSS sinks or swims.

If this is what you meant then ...

"The enemy of my enemy is my friend" comes to mind.

Regardless of their objectives, these people must find some advantage in helping GCC. That's OK with the Free Software people and well within their "belief system."

--
Michael N. Moran           (h) 770 516 7918
5009 Old Field Ct.         (c) 678 521 5460
Kennesaw, GA, USA 30144    http://mnmoran.org

"So often times it happens, that we live our lives in chains
  and we never even know we have the key."
The Eagles, "Already Gone"

The Beatles were wrong: 1 & 1 & 1 is 1
Reply to
Michael N. Moran

Whatever that means. As long as the source code remains open, that's great!

Chris, that's just crap. Perhaps you should follow the development mailing list snipped-for-privacy@gcc.gnu.org for a while and get a clue, instead of spewing nonsense like that.

Reply to
Michael N. Moran

To give you some idea, the US military used that strategy in Afghanistan and trained al-Qaeda and others.... Look where it got them!

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
/\/\/ chris@phaedsys.org      www.phaedsys.org \/\/\
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris Hills

Are you suggesting that "these people" are actively sabotaging GCC?

Reply to
Michael N. Moran

You're entirely right. It is basic support though, not comparable with the amount of effort that went into the Thumb-2 backend in the ARM compiler.

Wilco

Reply to
wilco.dijkstra

I'd be surprised if the number of paid contributors is larger than the unpaid ones, or are you counting employees of companies whose main business is not open source? How many companies are there whose main business is developing or maintaining GCC? Do they pay competitive rates to hire top compiler experts? Big businesses have their reasons for contributing, but most have their own commercial compilers already - and that is where much of the effort goes.

If that was true I would expect GCC to be far better than it is today.

There was the __irq issue on ARM that went unfixed for quite a while (numerous people encountered that one), and I think the register allocator has had a problem generating many unnecessary move instructions since a change several years ago. In a commercial environment these would be "must fix before release" kinds of bugs.

Other things, like -O0 generating ridiculously inefficient code and emitting frame pointers when few compilers do so today, do not project a professional image. I once read a paper that showed a 5% code-size improvement on ARM just by changing a few defaults. Again, that was a few years ago - has it been implemented yet?

It would be good if GCC was developed more like a commercial compiler indeed. Maybe that is what will happen in the future, but I don't think it is anywhere near yet. GCC may have lots of fashionable optimizations but I'd prefer stuff to work reliably and efficiently first.

Wilco

Reply to
wilco.dijkstra

In news:snipped-for-privacy@m36g2000hse.googlegroups.com timestamped Mon, 25 Jun 2007 22:28:05 -0700, snipped-for-privacy@ntlworld.com posted:

"On 25 Jun, 15:50, "Michael N. Moran" wrote:
> snipped-for-privacy@ntlworld.com wrote:
> > And this is where I think commercial development has the
> > advantage. Compiler vendors find and pay the best people
> > to do whatever it takes to make the compiler as good as
> > possible (with the idea that this large investment will
> > pay itself off later). I don't see this kind of
> > dedication in open source compilers, as developers
> > usually don't get paid for their work and so don't
> > attract the best people.
>"

Even if all the people who contribute to GCC are being paid to do so, Wilco Dijkstra's point is still valid: a dedicated team paid to work full time on a proprietary compiler will probably succeed in making quite a good compiler. (Exceptions have existed.)
Many people who contribute to GCC are not listed on any of those webpages.

"I'd be surprised if the number of paid contributors is larger than the unpaid ones,"

I expect that few of these people are unemployed. They may be paid to do something else, and they may deem GCC useful for performing some of their jobs' duties, and so choose to spend some of their paid time contributing to GCC.

" or are you counting employees of companies whose main business is not open source?"

Of course such employees are counted.

" How many companies are there whose main business is developing or maintaining GCC? [..]"

I suspect very few.

"> While GCC work *may* be done by anyone, serious
> development and maintenance of this cornerstone of
> Free Software is mostly done by paid skilled professionals,
> whose employers understand the value of the GCC.

If that was true I would expect GCC to be far better than it is today. From what I've seen issues can take a long time to fix. [..]

[..]"

Patches for AVR-GCC can take over a year to get into the main GCC repository, chiefly because the active maintainers of the AVR-GCC port at the time did not have permission to write to the main GCC repository, and most people who did have permission did not care. As mentioned by the less inactive of the two official AVR-GCC maintainers of the time on

formatting link
:"[..]

Generally, they (GCC peoples) don't bothered about AVR port. It's just an 8-bits microcontroller. They are right.

[..]"

Regards, Colin Paul Gloster

Reply to
Colin Paul Gloster

As an example, so that Chris can understand the principle involved, Code Sourcery sells gcc toolchains for the ARM and the ColdFire (and possibly others). You have three options: free downloadable versions; paid subscription versions (which lead the free versions by about six months, and come with better hardware debugger support and an integrated Eclipse setup); and professional subscription versions (which come with full support contracts). Each is available in Windows and Linux versions, both as an easy-to-install binary and as source. So Code Sourcery makes money out of selling open source tools along with support contracts and added extras. This money pays for their business, and it pays the salaries of their programmers - who work on gcc (and related tools). Code Sourcery is the official maintainer of the ARM and ColdFire ports, and if you look at the changelogs of gcc you'll see their names scattered across wide swathes of the code.

This gives the customer a wide choice of how they want to work, and how much they want to pay for it. You get everything from free to top-quality commercial service, and you can choose from compile-from-source to gui install and IDE, and there is a solid and serious commercial company behind it all.

There are, of course, lots of other major companies backing gcc and paying developers (Intel, AMD, IBM, Atmel, Red Hat, Novell, etc., etc.) - Code Sourcery is just one example.

Reply to
David Brown

In news:snipped-for-privacy@phaedsys.demon.co.uk timestamped Mon, 25 Jun 2007 14:51:06 +0100, Chris Hills posted: "[..] [..] AFAIK SDCC has no simulator/debugger"

SDCC does have its own simulator and debugger. However, in a 2007 version of SDCC they were not integrated very well by default with the C debugging information: the debugger would show assembly instructions near, but not at, the instructions that were actually being stepped through.
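For anyone who wants to try them, the rough sequence is something like this (a sketch only - "blink.c" is a made-up file name, and the details may differ between SDCC releases):

    sdcc --debug blink.c    # compile with debug records (emits a .cdb file)
    sdcdb blink             # SDCC's source-level debugger (drives the s51
                            # simulator from the ucSim package)
    s51 blink.ihx           # or run ucSim's 8051 simulator standalone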

One of the compilers which is apparently supported very well by the 8051 simulation and debugging facilities of BoxView IDE is SDCC:

formatting link

Reply to
Colin Paul Gloster

Why wouldn't I count those whose main business is not open source? Many have an interest in having their products supported by GCC, and so they invest.

Do you have evidence to the contrary? I suppose you could e-mail and ask them. However, my impression is that the GCC maintainers are well respected. I have been following the snipped-for-privacy@gcc.gnu.org mailing list for years, and I can tell you that my perception is there are plenty of compiler experts who guide and contribute, and plenty of others to do the more "mundane" work.

Or perhaps they have found that having their own compiler is unjustified when they could instead simply invest in the community and have a comparable or better product by drawing on a larger expertise.

Apparently you've had a bad experience. My experience has been much better. Are there bugs? Sure. Have I seen bugs in commercial compilers? Sure. I have received a *very* fast bug fix for GCC H8. Compilers have bugs.

OK, GCC is not perfect. What compiler is? And yes, I have used GCC for ARM, building the Linux kernel and many user-space applications on two different ARM platforms without issues.

Uhhh... -O0 turns all optimization off. Why would you expect efficient code?

In the "words" of my son ... idk ;-)

Unlike commercial compilers, GCC supports a huge number of targets, including 64-, 32-, 16- and 8-bit processors. Evolving an infrastructure capable of doing this is time consuming, and as a result GCC *may* not always have the absolute best code generation for any particular target, but the compiler is passionately maintained and is constantly evolving.

Reply to
Michael N. Moran

The companies that hire fulltime staff to work on GCC are often just supporting their own products (e.g. a backend for their proprietary CPU) and don't improve competing targets or GCC as a whole. Few large companies hire fulltime staff to improve the core of GCC, especially if they already have their own in-house compiler. If resources are constrained, which is going to win?

That is true for smaller companies which cannot afford to put a full compiler team in place. I know GCC is very popular with startups. However, when you dig deeper, many would create their own compiler if they could afford it, as they are not that happy with the code quality they get. I do not believe that GCC would ultimately be the cheaper option if you want quality comparable to commercial compilers.

I wouldn't call it a bad experience. I am simply used to the best compilers, as I've worked on them and improved them myself. What I'm saying is that GCC looks bad if you compare it to commercial compilers. A while ago I wrote some optimised C code for a client and found that GCC produced 40% larger code... If you don't care about this, then you can be perfectly happy with GCC.

Sure, I'm not expecting it to be perfect or even as good as commercial compilers. But open source advocates often claim that they can go in and fix bugs much faster than in a commercial environment. This is simply untrue in most cases - if anything, the timescales for bugfixes in GCC are worse. Of course if you *pay* for a support contract then your experience may be better.

Turning off all optimizations achieves what, exactly? Usually the goals of -O0 are fast compilation and code that is easy to debug, and turning off all optimizations achieves neither. It may be counterintuitive, but compiling optimised code is often faster than compiling unoptimised code. When I debug code, I'd like to see local variables in registers rather than being distracted by all the spill code.
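A trivial function makes the difference easy to see for yourself; compile it twice and diff the assembler output (the function is invented purely for illustration):

    /* Compare:  gcc -O0 -S demo.c   versus   gcc -O1 -S demo.c
       At -O0, gcc typically keeps i and sum in stack slots and reloads
       them around every operation - the spill code mentioned above.
       At -O1 they normally live in registers and the loop is compact. */
    int demo(int n)
    {
        int i, sum = 0;
        for (i = 0; i < n; i++)
            sum += i;
        return sum;
    }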

Agreed. I hope it does improve further.

Wilco

Reply to
wilco.dijkstra

Companies with a particular interest in the performance of gcc for a given backend will support improvements to that backend, and to gcc as a whole, as that's what benefits them. You are correct that they have little interest in improving other backends, but front-end improvements help them too.

I'm sure Altera, Xilinx, and Atmel, amongst others, appreciate you referring to them as "startups" or implying they have gone for the cheapo option because they are unwilling or incapable of "digging deeper".

Of course, they may perhaps have actively chosen to work on gcc ports on the basis of past successes, expected future successes, value for their investment in time and money, customer pressure, and supported source code (such as a linux port to the architecture in question). In particular, it is extremely unlikely that both Altera and Xilinx would have made such total commitments to their gcc ports (there are, as far as I know, no non-gcc compilers for their soft processors) if they thought that a non-gcc compiler (in-house or external) would be significantly better. Their competitiveness runs too deep to miss out on such an opportunity - especially if, as you claim, it would be cheaper overall.

The answer is quite simple - don't run with all optimisations turned off if you want some optimisations turned on. Like you, I prefer some minimal optimisations, such as variables in registers, when looking at the generated code. Normally I'd always use -O2 (or -Os, depending on the target), but for some debugging -O1 is convenient. No one who knows what they are doing compiles with -O0 on any compiler, gcc or otherwise.

mvh.,

David

Reply to
David Brown

The embedded code I have written is mostly C, and I very rarely look at the assembler code. But that comment caught my attention, because I thought that almost all compilers, particularly 32-bit ones, used stack frames with frame pointers - or am I interpreting that comment incorrectly?

Regards,

Paul.

Reply to
Paul Taylor

I'm possibly showing my naivety here - I'm certainly not an expert on these matters - but isn't there a step in the compilation process just before optimisation (after tokenizing/parsing) that produces an internal representation of the source code (RTL in GCC?), where that step is essentially a "dumb" part of the process, with the really clever bit, the optimisations, occurring *after* it?

If so, then a compiler writer surely needs access to this unoptimised code, at least for unit tests (or whatever gcc uses)? In which case -O0 is how you get at it. Unoptimised code is always going to be ridiculously inefficient, and you really wouldn't want to use it. Again, I'm not an expert on the compilation process, but the above is my understanding as of now - please enlighten me :-)

Regards,

Paul.

Reply to
Paul Taylor

If the machine uses a stack, and the compiler keeps careful track of the state of that stack, it can generate SP-relative addresses. However, this normally requires other restrictions on the generated code.
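A small sketch of what that bookkeeping means in practice (the assembly in the comments is hand-written, ARM-flavoured pseudo-output, not anything a particular compiler emitted):

    /* Assume the prologue did  sub sp, sp, #4  so that
       'local' lives at [sp, #0]. */
    int sum2(int a, int b)
    {
        int local = a + b;
        /* If the compiler later emits  push {r4}  to free a scratch
           register, sp moves down by 4 and the same variable is now at
           [sp, #4]: the compiler must track every push and pop to keep
           its SP-relative offsets correct. */
        return local;
    }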

--
                        cbfalconer at maineline dot net
Reply to
CBFalconer

I'm not sure what "optimized C code" is. Source that is optimized for one compiler may be pessimized for another. BTW, 40% is a bad experience. ;-)

Yep.

Maybe.

If you're debugging in assembler mode then I suppose that's reasonable, but at the source-code level it wouldn't be an advantage.

Roger

Reply to
Michael N. Moran

BTW, about 30 years ago, when I did that for the 8080, I thought I was breaking new ground. Tweren't so.

Reply to
CBFalconer

Most compilers stopped using frame pointers a long time ago. They are inefficient and don't actually provide any benefit. Rather than changing SP repeatedly inside a function, SP is adjusted only on entry to and exit from the function, further improving efficiency. This also makes it easier to track stack variables in debuggers (as offsets from SP are fixed). The drawback is that stack size can grow in some circumstances. Functions containing alloca or C99 variable-length arrays could still use a frame pointer.
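As a sketch of the difference (the ARM-style prologues in the comment are invented for illustration; with gcc you can flip between the two with -fno-omit-frame-pointer and -fomit-frame-pointer):

    /* With a frame pointer:           Without one:
           push {fp, lr}                   push {lr}
           add  fp, sp, #4                 sub  sp, sp, #8
           sub  sp, sp, #8                 ... locals at [sp, #0] etc.
           ... locals at [fp, #-8]         ... fp (r11) free for other use
       In the second form SP is set once in the prologue and restored
       once in the epilogue, so every local keeps a fixed SP-relative
       offset for the whole function. */
    int square_plus(int x, int y)
    {
        int t = x * x;
        return t + y;
    }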

Wilco

Reply to
wilco.dijkstra

OK - you have lost me..... :-)

My understanding is that a frame pointer only gets set up at entry of a function and is restored at exit? And variables are easy to track because with a frame pointer offsets to the variables are fixed?

Regards,

Paul.

Reply to
Paul Taylor

Yes, that's exactly the point of the frame pointer. It gives you a fixed base so that local variables have constant offsets from the FP, while the SP may change during the function as things are pushed or popped onto the stack (contrary to Wilco's wild generalisations, the SP *is* adjusted during function execution - if another function is called which requires parameters on the stack, then it is very likely that the SP will be changed, especially on processors with push and pop primitives).

However, since the compiler knows (hopefully!) what code it has produced, at any given time the frame pointer is a fixed offset from the stack pointer. Thus you can save a little of the function prologue and epilogue, as well as freeing an extra register to play with, by accessing the stack as offsets from the stack pointer rather than having an explicit frame pointer (using a "virtual frame pointer", if you like).

There are several situations where frame pointers are still useful, however. While compilers are generally now clever enough to keep track of the "virtual frame pointer", debuggers are not necessarily so - some find the frame pointer useful, especially if they don't have full debugging information about the code in question. On some processors, such as the AVR, there is little or no support for (SP + offset) addressing modes - a frame pointer in a pointer register solves that problem. And sometimes (as Wilco suggested) there is not quite such a neat relationship between the stack pointer and the frame pointer, such as after using alloca() or variable-length local arrays. Finally, a frame pointer can make the code shorter or faster for some types of code - exceptions, gotos, or other jumps across large parts of the code may be best implemented with a frame pointer, so that individual branches can manipulate the stack pointer (for function calls) while the exception code still knows where everything is. Even with simple branches, a frame pointer may let the compiler pile things up on the stack without bothering to clean up after function calls, leaving the tidying to the epilogue (which uses the frame pointer to clear up the stack). That might or might not be smaller and faster - it depends on the architecture and the code.

In most practical cases, however, the best code is generated without using a frame pointer.
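For instance, a C99 variable-length array is one of the cases mentioned above where gcc will typically keep a real frame pointer even when asked to omit it, because the stack adjustment depends on a run-time value (the function is invented for illustration):

    #include <string.h>

    /* buf's size is only known at run time, so the other locals would no
       longer sit at compile-time-constant offsets from SP; a frame
       pointer (or equivalent) restores fixed offsets for them. */
    void fill(int n)
    {
        char buf[n];   /* C99 VLA - like alloca(), it moves SP at run time */
        memset(buf, 0, (size_t)n);
    }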

mvh.,

David

Reply to
David Brown
