Developing/compiling software

Just goes to show how varied user experiences can be. I've had consistently good luck with CodeWarrior (68K systems), IAR (ARM systems), and ImageCraft (MSP430). I've also used GCC for an ARM system without major problems - except for having to learn Linux and support either another computer or a virtual machine.

Mark Borgerson

Reply to
Mark Borgerson

You can use gcc cross-compilers under Cygwin. But to be honest, it's probably easier to run Linux on a VM.

--
Grant Edwards                   grante             Yow! My pants just went to
                                  at               high school in the Carlsbad
                               visi.com            Caverns!!!
Reply to
Grant Edwards

This HAS changed recently, with Cypress claiming this: ["Cypress also offers fully functional, free compilers with no code size limitations for both the PSoC 3 and PSoC 5 device families."]

["Cypress already has agreements in place to offer free compilers for both the PSoC 3 and 5 families. The Keil CA51 Compiler for PSoC 3 and the GNU GCC-ARM compiler are bundled with Creator, which also includes a debugger to support on-chip debug and trace functionality"]

My understanding is they now have a full Keil compiler, with reduced optimization choices (and somehow Cypress-centric?).

This high price seems to follow the Embarcadero (ex-Borland) selling model, i.e. target the deep-pocket corporates, but have some fringe tools that are somewhat usable.

PC comments: PC tool flows ARE quite useful for embedded work, for algorithm development.

Here, Embarcadero has Turbo Delphi, which is free but crippled enough not to compile many web projects. Meanwhile, Microsoft has quite usable versions of their C and Basic flows (which DO compile most web projects...), and there are a number of other free PC tool flows.

HiTech also had a legacy version of their compiler free, IIRC? Not sure of the current availability; perhaps it is on a mirror somewhere?

My advice to a novice would be to worry less about the compiler (as silicon IS getting cheaper and larger all the time) and to focus more on the debug tool chain.

Unless you are intending to produce 10,000+ units into a price-paranoid market, simply choosing the next-larger variant is a better solution.

You are always better off starting with a more capable device and shrinking to the final application than the other way around.

Atmel have an OSD in release, the SiLabs offering is mature and usable, and SiLabs have very good ToolStick-level development choices.

The Cypress debug flows look to extend things, as they claim a TraceBuffer. I have not seen volume PSoC 3 prices, but their web prices are comfortably under $10.

-jg

Reply to
-jg

It will be interesting to see how that attitude will change at TI now that they have bought Luminary Micro and swallowed their Cortex-M3 line. Luminary Micro supported a broad range of compilers pretty much equally (Keil, IAR, Code Red and CodeSourcery) with their evaluation kits, libraries, and example software. They also support FreeRTOS, uIP and lwIP (their ROM-based boot loader uses uIP, AFAIK).

I don't know how much of that attitude will spread from the Luminary Micro group at TI to other groups (you're probably thinking mainly of the msp430). But for the Cortex-M3 devices anyway, the idea is that the customer gets to choose which tools suit his needs, rather than which ones TI happens to like dealing with.

Reply to
David Brown

One point that is missing here is that the tools situation varies widely according to the processor architecture. In particular, there is a huge difference between C-friendly architectures (>= 16-bit, plenty of registers and pointers) and C-unfriendly architectures (8-bit, few registers).

In the world of C-friendly processors, there is not a huge difference between major compilers for ordinary code generation. I've looked at tools for the PPC and the ColdFire - there are only a few percentage points' difference in code size or speed for ordinary code between tools like Green Hills, CodeWarrior, and gcc. You certainly couldn't justify a purchasing decision based on code quality alone. One thing that /can/ vary is the support for generating code that takes advantage of additional features such as DSP-type processor extensions. Libraries, debugger, profiler, IDE, wizards, etc. are other distinguishing features, as is language support (C and C++ standards and extensions). Much depends on how well the toolset fits the way you like to work.

For these sorts of processors, gcc is very much a realistic choice for professional users seeking the best tools. CodeWarrior has compiler options to enable gcc language extensions - that's an indication of the relevance of gcc in that market.

You also see smaller players building toolchains around gcc - they have gcc as the compiler, but use their own library, debugger, and other tools (possibly their own IDE, or just using Eclipse like many others).

Anyone who still thinks gcc is "nowhere near as good as the top commercial compilers" for processors like ARM, PPC, ColdFire, MIPS, x86 is either many years out of date or has his head in the sand. Argue about the benefits of commercial support, integrated solutions, certification, libraries, etc. if you want - that is where there are real distinctions between the tools. But not the compiler.

For small CPUs, the situation is very different. Getting the best out of a processor like the 8051 or the COP8 is far from easy. For the COP8, there are relatively few users, all of whom are serious professionals. Writing a good compiler for the device is a specialised job. Whereas for larger devices much of the compiler software can be shared across different targets, on something like the COP8 there will be a higher proportion of target-specific code. This means that writing a COP8 compiler takes a lot of time and money for a very small user base, and that really boils down to commercial tools at a professional-only price point.

The 8051 is an oddity in this. It is a C-unfriendly device and the professional-only compiler (and associated tools) from 8051-specialist Keil gives the best code. On the other hand, it is very popular and there is a perfectly reasonable open source compiler. The generated code might not be as compact (especially in data usage) as with Keil's compiler, but that's not a problem for all users.

One important thing to consider in this is how things will change in the future. The trend is towards 32-bit micros - they are getting smaller, cheaper and lower power, and pushing out the 8-bit devices (16-bit is now almost non-existent, except for the msp430, which is squarely in the 8-bit market). Use of 8-bit devices will become less common for new designs (except for customers looking for huge volumes, of course), especially in the case of C-unfriendly architectures, making life difficult for the more specialist tool vendors. I suspect we'll see more development tools based on gcc but with specialisation and differentiation in libraries, debuggers, other tools, and support (there are already half a dozen such tools for ARM variants). How then will the big commercial developers react? I hope they adapt and that we don't see acquisitions or bankruptcies - choice and competition are important for end users.
Reply to
David Brown

For hobby or small usage, you can typically just buy a device with more memory. So what if it costs a dollar more - it is easier (and therefore faster, and therefore cheaper) than trying to make sure your code stays within such small size limits. So what if SDCC takes 10 KB when Keil can fit the same program in 6 KB - if you have a 16 KB device, it doesn't matter. Code size is not always a big issue.

People are always working out figures like that. Occasionally, they will actually come up with a figure that is realistic, but even then it is not applicable to anyone else.

To get true figures like that, you'd need to actually do the work in parallel with two different development teams, each of which is large enough to allow valid statistical comparisons of the abilities and experience of the two teams. No one is going to do such a comparison - at least, not for a 5K cost or saving.

So the numbers are based on guesstimates and figures pulled out of the air. You ask a developer to look briefly at the tools, and he says it will take him a week to get up to speed with Keil and two weeks for SDCC - the bean counter concludes that Keil will save a week's worth of development time.

I don't mean to say that these numbers are wrong for the customer in question, just that for other potential users the quote is barely worth the pixels it's written on.

Reply to
David Brown

Which package is that? PK51 costs more, but if you don't need their RTOS and debugger, then check CA51.

Reply to
vladitx

Definitely. I played with one of their M3 eval kits, and they provided a pre-built Linux-hosted gcc toolchain and seemed very friendly towards open-source toolchains and non-MS-Windows use.

Yes, I was. Their nasty attitude towards open-source JTAG debugging in particular. TI even went to the extent of planting moles in the open source community (TI employees who didn't use TI e-mail addresses when interacting with the public and attempted to hide the fact that they were TI employees).

The interesting thing about the MSP430 tools is that TI pushed people pretty hard towards IAR - even to the extent that the FAEs would tell people not to use TI's Code Composer tools but rather to use IAR's.

The IAR tools themselves aren't bad, but their licensing terms suck (the usual hassles with dongles and license managers), and the only host they support is MS Windows.

--
Grant Edwards                   grante             Yow! NANCY!!  Why is
                                  at               everything RED?!
                               visi.com
Reply to
Grant Edwards

In message , Grant Edwards writes

My god, you are sad. Most companies insist that employees use private email addresses, and not company ones, when conversing on newsgroups and forums, lest anything they say be taken as being said on behalf of the company.

Your paranoid suggestion that TI was planting "moles" is a sad reflection on you.

Then again, what is your email address? snipped-for-privacy@invalid.invalid Hmmm, not a TI mole by any chance? OK, so you do have another email address in the sig.

Tell me is Mike Sowada happy with you/visi making these accusations about TI?

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

We're supposed to believe that things TI employees say in regards to TI products _aren't_ being said on behalf of the company?

A futile attempt to prevent my e-mail address from being harvested mechanically.

I don't think my ISP cares one way or the other about my opinions on TI's interaction with their customers. Neither does the post office or the phone company, in case you're curious about them.

--
Grant Edwards                   grante             Yow! I smell a RANCID
                                  at               CORN DOG!
                               visi.com
Reply to
Grant Edwards

So you don't email from your work account either.....

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

When I'm dealing with customers, vendors, or anything regarding products of my employer, I do.

For miscellaneous usenet postings containing my personal opinions, I don't.

--
Grant Edwards                   grante             Yow! MMM-MM!!  So THIS is
                                  at               BIO-NEBULATION!
                               visi.com
Reply to
Grant Edwards

The GNU toolchain can be OK, and it can be horrible. If you look at ST's home page you will find some discussion of the performance of GCC-4.2.1 on the STM32.

The rumoured 90 MIPS becomes:

wait for it...

32 MIPS...

With a Keil compiler you can reach about 60-65 MIPS, at least with a 72 MHz Cortex-M3.

Anyone seen improvement in later gcc versions?

... On the AVR I noted things like pushing ALL registers when entering an interrupt. IAR is simply - better -.

The gcc compiler can be OK, as shown with the AVR32 gnu compiler.

BR Ulf Samuelsson

Reply to
Ulf Samuelsson

If you are designing products that are produced in large volumes, you have to look at the code generation of open-source vs. commercial compilers, because you can save a lot of money by selecting another chip with less memory. BR Ulf

Reply to
Ulf Samuelsson

When officially representing the company

And neither do the TI employees

You are complaining they do exactly what you are doing

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

Just how are they measuring these "MIPS", and can we see some asm output as evidence? Everything else is just supposition.

The assembler output is the final arbiter. Optimisation and code generation can only work with what you give them, and depend a lot on the C constructs and code layout used. That is, you need to understand and work cooperatively with your compiler to get the best results. Gcc 68k produces pretty well optimised code, as it's been around for a long time. I have been working with the Renesas 80C87 on and off for a couple of years now; the assembler output would be difficult to improve on. Often half a dozen lines of C produces about the same number of lines of assembler output. If there is a major difference, is it perhaps because Cortex is still fairly new in gcc terms, or because there are not enough people interested enough / with the free time to improve it?

Hardly rational - I think you need to look at a lot more than a single point to decide on "better". I used IAR on H8 projects some years ago and found the toolchain, shall we say, a bit eccentric. It worked and produced good code afaik, but was quite limited in terms of command-line switches, options, utilities etc. That is, there wasn't much added value in the package, and nothing like as good as Microtek's 68k offerings. It's probably much better now, but compare that with the gnu toolchain, which is not just gcc and binutils but a whole raft of other utilities, all of which work seamlessly with a Linux, Unix or Windows development environment and cost nothing other than the time to set them up.

As for registers, many of us came from an assembler background, but the world has moved on, and modern micros have more than enough grunt to get the job done without worrying about how many registers are being saved. The whole idea is that you can now afford to write everything in C without having to examine the entrails to optimise the code. Issues like no view of the big picture, poor system design and partitioning have much more impact than any tool efficiency issues, imnsho. Pushing a few or all registers makes how much difference? A few microseconds at most - irrelevant in practical terms. If the architecture is being pushed that close to the ragged edge, it suggests that you are trying to do a "mission impossible" project, or didn't size up the requirements properly in the first place. I must be out of date on this as well - since when did vanilla gcc provide support for interrupt handlers?

If you want a better gcc for your architecture, don't just criticise - join in and contribute. I'm just grateful that there is so much good open source code out there, free to use...

Regards,

Chris

Reply to
ChrisQ

GCC output is very literal, and therefore very slow, when optimisation is turned off - it does OK at higher optimisation levels. With regard to code size, by default it does not remove unreferenced code, whereas commercial linkers do. With a few command-line options (see the sketch below) you can get the code size very close to the commercial offerings.
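For reference, the options usually meant here are gcc's per-function/per-data sections plus the linker's garbage collection - a minimal sketch (the arm-none-eabi- target prefix and file names are placeholders; the flags themselves are standard gcc/ld options):

    # compile: give each function and data object its own section
    arm-none-eabi-gcc -Os -ffunction-sections -fdata-sections -c main.c

    # link: let the linker discard any section nothing references
    arm-none-eabi-gcc -Wl,--gc-sections -o app.elf main.o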

I'm not offering an opinion on the quality of GCC - just pointing out a few facts. The best thing to do is not believe anything you read and instead try it out for yourself.

Also, when using an 8051, squeezing out every last instruction can be important. If it is really that important on your new designs, then basically you chose the wrong CPU (I don't want to start another thread about supporting legacy systems!).

GCC and IAR compilers do very different things on the AVR - the biggest difference being that IAR uses two stacks whereas GCC uses one. This makes IAR more difficult to set up and tune, and GCC slower and clunkier, because it has to disable interrupts for a few instructions on every function call. Normally this is not a problem, but it is not as elegant as the two-stack solution, for sure. GCC is very popular on the AVR though, and is good enough for most applications, especially when used in combination with the other free AVR tools such as AVR Studio.
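To make the difference concrete - a minimal sketch of the same handler for both compilers (the vector name follows the avr-libc headers; IAR's header may spell it differently, and the IAR syntax here is from memory, so treat it as an approximation):

    /* avr-gcc / avr-libc: everything lives on the single hardware
       stack; the prologue saves SREG plus gcc's zero/tmp registers */
    #include <avr/io.h>
    #include <avr/interrupt.h>

    volatile uint8_t ticks;

    ISR(TIMER0_OVF_vect)
    {
        ticks++;
    }

    /* IAR EWAVR equivalent: return addresses go on the hardware
       (RSTACK) stack, data on the software (CSTACK) stack, both
       sized in the linker configuration file */
    #pragma vector = TIMER0_OVF_vect
    __interrupt void timer0_ovf(void)
    {
        ticks++;
    }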

--
Regards,
Richard.

+ http://www.FreeRTOS.org
Designed for Microcontrollers.  More than 7000 downloads per month.

+ http://www.SafeRTOS.com
Certified by TÜV as meeting the requirements for safety related systems
Reply to
FreeRTOS info

Could you provide a link to this? I could not see any such discussion.

I note that gcc-4.2.1 was the CodeSourcery release two years ago, when Thumb-2 support was very new in gcc. And if the gcc-4.2.1 in question was not from CodeSourcery but based on the official FSF tree, then I don't think it had Thumb-2 at all. It is very important with gcc to be precise about the source and versions - particularly so since CodeSourcery (who maintain the ARM ports amongst others) have target-specific features long before they become part of the official FSF tree.

I would be very surprised to see any major ARM compiler generating code at twice the speed of another major ARM compiler, whether we are talking gcc or commercial compilers. To me, this indicates either something odd about the benchmark code, something wrong in the use of the tools (such as compiler flags or libraries), or something wrong in the setup of the device in question (maybe failing to set clock speeds or wait states correctly).
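On that note, a quick sanity check when benchmarking gcc on a Cortex-M3 is to confirm the target flags - a minimal sketch (CodeSourcery-style target prefix assumed, file name is a placeholder):

    # Thumb-2 code generation for the Cortex-M3 core, optimised
    # for speed rather than size
    arm-none-eabi-gcc -mcpu=cortex-m3 -mthumb -O2 -c bench.c

Getting these wrong (or leaving the default -O0) can easily cost you a large factor on its own.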

If there was consistently such a big difference, I would not expect gcc-based development tools to feature so prominently on websites such as ST's or TI (Luminary Micros) - a compiler as bad as you suggest here would put the devices themselves in a very bad light.

I haven't used the STM32 devices, but I am considering TI's Cortex-M3 for a project, so I am interested in the state of development tools for the same core.

avr-gcc does /not/ push all registers when entering an interrupt. It does little for the credibility of your other points when you make such widely inaccurate claims.

avr-gcc always pushes three registers in interrupts - SREG, and its "zero" register and "tmp" register - because some code sequences generated by avr-gcc make assumptions about being able to use these registers. Theoretically, these could be omitted in some cases, but that turns out to be difficult to do in avr-gcc, and the advantages are small (for non-trivial interrupt functions). No one claims that avr-gcc is perfect, merely that it is very good.

Beyond that, avr-gcc pushes registers if they are needed - pretty much like any other compiler I have used. If your interrupt function calls an external function, and you are not using whole-program optimisation, then this means pushing all ABI "volatile" registers - an additional 12 registers. Again, this is the same as for any other compiler I have seen. And as with any other compiler, you avoid the overhead by keeping your interrupt functions small and avoiding external function calls, or by using whole-program optimisations.
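In practice that advice usually means the ISR just records the event and the main loop does the work - a minimal sketch (USART_RX_vect is a real avr-libc vector name on some devices; handle_uart() is a made-up example function):

    #include <avr/io.h>
    #include <avr/interrupt.h>

    void handle_uart(void);        /* defined elsewhere (hypothetical) */

    volatile uint8_t uart_ready;   /* set in the ISR, read in main */

    ISR(USART_RX_vect)             /* leaf ISR, no external calls, so
                                      only a few registers get pushed */
    {
        uart_ready = 1;
    }

    int main(void)
    {
        sei();                     /* enable interrupts */
        for (;;) {
            if (uart_ready) {
                uart_ready = 0;
                handle_uart();     /* heavy lifting outside the ISR */
            }
        }
    }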

I'll not argue with you about IAR producing somewhat smaller or faster code than avr-gcc. I have only very limited experience with IAR, so I can't judge properly. But then, you apparently have very little experience with avr-gcc - few people have really studied and compared both compilers in a fair and objective test. There is certainly room for improvement in avr-gcc - there are people working on it, and it gets better over time.

But to say "IAR is simply better" is too sweeping a statement to be taken seriously, since "better" means so many different things to different people.

To go back to your original statement, "The GNU toolchain can be OK, and it can be horrible", I agree in general - although I'd rate the range a bit higher (from "very good" down to "pretty bad", perhaps). There have been gcc ports in the past that could rate as "horrible", but I don't think that applies to any modern gcc port in serious active use.

Reply to
David Brown

If this is a chess game :-) - isn't the issue that you should declare conflicts of interest when expressing opinions about proprietary products? No reason to do so otherwise...

Regards,

Chris

Reply to
ChrisQ

Can you elaborate a bit as to why two stacks are used with IAR? I haven't used the AVR, so I have no real experience. The AVR32 has shadow register sets, including stacks for each processor and exception mode - thus separate initialisation on startup - but so do the Renesas 80C87 and some ARM machines. How does gcc work for ARM, for example?

Regards,

Chris

Reply to
ChrisQ
