STM32 ARM toolset advice?

The only difference from the proprietary closed-source compiler here is that the closed-source company won't tell you "unfortunately, our guru retired, and we don't know how it works", even if that's the truth. Or they tell you they're restructuring and are no longer interested in supporting this product.

I still don't see how that task would have been easier for you without source. The whole point is about having source, not about free. And I don't think the open-source libraries have worse quality than the proprietary ones. I have also fixed numerous bugs in the commercial libraries we bought, including some the vendors refused to fix. Even trivial ones, such as using 'char' instead of 'unsigned char' for marshalling, which happens to work on PCs but not on our target.
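A minimal sketch of this class of bug (illustrative only; the post doesn't name the actual library function). Whether plain 'char' is signed differs by compiler and target; with a signed 'char', any byte >= 0x80 sign-extends before the shift, and the OR smears 1-bits across the bytes already assembled:

    /* unmarshal.c -- hypothetical big-endian unmarshalling helpers.
     * Buggy version: with signed 'char', a byte such as 0xFF becomes
     * (uint32_t)-1 = 0xFFFFFFFF before the shift, corrupting the
     * higher bytes of the result via the OR. */
    #include <stdint.h>

    uint32_t unmarshal_u32_buggy(const char *buf)
    {
        return ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16)
             | ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];
    }

    /* Fixed: 'unsigned char' guarantees 0..255, no sign extension. */
    uint32_t unmarshal_u32(const unsigned char *buf)
    {
        return ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16)
             | ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];
    }

For the bytes {0x12, 0xFF, 0x34, 0x56}, the buggy version yields 0xFFFF3456 instead of 0x12FF3456 wherever 'char' is signed.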

That aside, guess what you see when you look into a commercial system? I found a NetBSD VFS, a NetBSD IP stack, expat, gzip, Spencer's regexp.c, etc. in one that we bought. So it cannot be that bad.

The points where I think OSS is "better" are:

- you can evaluate it more easily. No need to sign NDAs in advance or hand over money. You do not even need to wait a week for the package to arrive in the mail, and you can usually get honest opinions on it on mailing lists.

- you can look into it. Some commercial software also allows that, but far from all. I haven't seen a commercial compiler that ships the source for its 'printf' yet.

- you can still modify it, even if its original author no longer wants to. Yes, this may be expensive, but at least it's possible.

I haven't found anything where closed-source software is fundamentally better. Warranties? Nobody guarantees you any more than that their software takes up disk space, and maybe gives you a free replacement if the shipped CDs become unreadable within six weeks. Support? Commercial support can be had for OSS, too. Otherwise, support is simply structured differently than for classic closed-source software. The big company you can sue if something goes wrong? Did anyone *ever* sue MS or IBM when their software failed?

Stefan

Reply to
Stefan Reuther

The first IAR EWARM version that I got (3.2) arrived with the full source and makefiles for the libraries.

I've never had to recompile the IAR ARM libraries. I did find one bug, verified it in the source, and wrote my own version of the function, which the linker loaded instead of the library version. The bug was fixed in the next release.
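For readers unfamiliar with the trick: most linkers resolve a symbol from an object file given explicitly on the link line before they ever search a library archive, so a fixed copy of one function can shadow the shipped one without rebuilding the library. A hypothetical stand-in (the post doesn't say which function Mark replaced):

    /* my_abs.c -- link this file into the project and the linker
     * satisfies abs() from here instead of pulling the C library's
     * archive copy.  abs() is only a placeholder example; note that
     * some compilers inline abs() as a builtin, in which case an
     * override like this would never be called. */
    int abs(int v)
    {
        return v < 0 ? -v : v;
    }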

Mark Borgerson

Reply to
Mark Borgerson
[...]

Which commercial "embedded" cross compiler comes without library sources? By "library" I mean not only the standard C library (printf etc.) but also the machine library, i.e. the basic math functions (+ - * /).

Today I use mainly the commercial "closed source" Cosmic compiler for the HC(S)12 and HC(S)08 - it comes with full library sources. When I found an error, it was always fixed extremely fast, and the communication was very efficient. The same applies to suggestions for improvements.

The old (1992) HI-TECH C Compiler for the 68HC11 also came with library sources.

IIRC, HIWARE (later Metrowerks, then Freescale) also supplied library sources for the HC(S)12 compilers. Can't say whether that's still true.

When I had a look at GCC for the Coldfire some time ago, I wasn't even able to quickly _find_ these sources (I wanted to check the math implementation) in the bloat of files and directories GCC consists of. BTW: any hints welcome.

Oliver

--
Oliver Betz, Munich
despammed.com might be broken, use Reply-To:
Reply to
Oliver Betz

Most, if not all, commercial embedded-system compilers are shipped with full library sources.

Byte Craft ships library sources so that our customers have full application sources for their projects.

Regards

--
Walter Banks
Byte Craft Limited


Reply to
Walter Banks

Green Hills, for example. They ship the source code to 'open', so you can implement your own (which I did). But I haven't yet found the source code to 'fopen' or 'std::fstream', which would tell me how they invoke it. This means that when debugging my file system stack, I always have a bunch of assembly-only routines on my stack. Things like '__div32' are also missing, but for those I normally don't need source when I have a disassembler :-)
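For context, the usual retargeting scheme: the C library routes all of stdio's file access through a handful of low-level hooks (open/read/write/close and friends), and you supply those to mount your own file system. A minimal sketch assuming a POSIX-like hook signature; how Green Hills' fopen actually invokes it is exactly the undocumented part complained about here:

    /* open_retarget.c -- hypothetical low-level 'open' behind fopen().
     * myfs_open() stands in for your own file system stack. */
    extern int myfs_open(const char *path, int flags);  /* hypothetical */

    int open(const char *path, int flags, ...)
    {
        return myfs_open(path, flags);  /* handle >= 0, or -1 on error */
    }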

I don't know if this source can be had by moving a few more bucks over the counter. Maybe not, the header files have a Dinkumware copyright.

Normally, the keyword is libgcc, leading to the source file gcc/libgcc2.c; I'm not a gcc expert so I don't know whether that's everything gcc needs.

Stefan

Reply to
Stefan Reuther

Yes, that's everything gcc needs. It's not libc, though; it's just the internal functions to which gcc generates calls for operations that aren't supported directly by machine instructions (multiple-precision arithmetic, floating point, etc.).
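A small illustration of when such a call appears (helper names vary by target and ABI, so treat __udivdi3 as the typical case, not a guarantee):

    /* libgcc_demo.c -- 64-bit division on a 32-bit target without a
     * 64-bit divide instruction typically compiles to a call into
     * libgcc rather than to inline machine instructions. */
    unsigned long long div64(unsigned long long a, unsigned long long b)
    {
        return a / b;   /* -> call __udivdi3 on many 32-bit targets */
    }

Compiling with -S and inspecting the assembly shows the helper call; the implementations live in libgcc2.c, as Stefan says.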

If you're looking for sources for libc stuff, that's not gcc. That's glibc, newlib, uclibc, diet-libc, or whatever libc you've chosen.

--
Grant Edwards                   grante             Yow! I always have fun
                                  at               because I'm out of my
                               visi.com            mind!!!
Reply to
Grant Edwards
[...]

Ha - the company that didn't respond to my first inquiry, and when I called them again, they told me that I would have to buy at least three licenses to become a valued customer.

I didn't understand this.

[...]

Thanks!

Oliver

--
Oliver Betz, Muenchen (oliverbetz.de)
Reply to
Oliver Betz
[...]

and huge #defines with inline assembler (or C) in .h files, e.g. longlong.h. Extensive dependencies... I guess that's the price one has to pay for having one compiler for so many targets, but who cares if the result is fine.

I will have a close look at the generated code when I start using GCC for the Coldfire. What I have seen so far is quite promising.

Oliver

--
Oliver Betz, Munich
despammed.com might be broken, use Reply-To:
Reply to
Oliver Betz

We ported all our 68K code from commercial compilers to GCC for the 68K. This is the same GCC used for the Coldfire. The code was significantly faster using GCC.

Regards Anton Erasmus

Reply to
Anton Erasmus

Anton,

Did you find out where the GCC was faster?

Which compilers?

w..

Reply to
Walter Banks

We ported two main types of applications. One consists mainly of fairly complex axis transformations and data filtering, while also handling low-latency comms to a host. The main task executes at 100 Hz, and the maximum, minimum and average execution times are calculated every interrupt cycle. If I remember correctly, the gcc code was about 30% faster. The axis transformations are mostly scaled integer with a little bit of floating point. The second app was one that displayed moving 2D icons over a live video image. No graphics acceleration hardware was available; everything is done in software in a frame buffer. Again, most calculations were done in fixed point, with a little bit of floating point. If I remember correctly, it was overall 20% faster, with some low-level graphic primitive routines almost 80% faster. On the graphics routines, gcc did much better at register allocation.
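A rough sketch of per-cycle timing bookkeeping of the kind described (my reconstruction, not Anton's actual code), assuming a free-running 32-bit hardware counter behind a hypothetical read_timer():

    /* exec_time.c -- track min/max/average execution time per cycle. */
    #include <stdint.h>

    extern uint32_t read_timer(void);   /* hypothetical HW counter */

    static uint32_t t_min = 0xFFFFFFFFu, t_max = 0;
    static uint64_t t_sum = 0;
    static uint32_t t_cnt = 0;

    void main_task_100hz(void)
    {
        uint32_t start, dt;

        start = read_timer();
        /* ... axis transformations, filtering, comms ... */
        dt = read_timer() - start;      /* unsigned, wrap-safe delta */

        if (dt < t_min) t_min = dt;
        if (dt > t_max) t_max = dt;
        t_sum += dt;
        t_cnt++;                        /* average = t_sum / t_cnt */
    }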

It was the SDS and Microtec compilers. Both compilers got more and more expensive over time. Initially there were big improvements in new versions. We stopped purchasing support when the new versions did basically nothing much over the previous ones. Microtec also started adding copy protection, which became a total pain to work with.

Regards Anton Erasmus

Reply to
Anton Erasmus

I've not used GCC for the M68K, but I have many years of experience with Codewarrior 68K. I was able to speed up some loops by factors near two by using the DBRA (decrement and branch) instruction in assembly-language rewrites of C code. This was generally only necessary in very tight loops for high-speed data collection. The instruction set and architecture of the M68K made assembly-language routines much simpler to write than is the case with the ARM.
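For readers who don't know the 68K: DBcc decrements a data register and branches in a single instruction, so a tight copy loop collapses to two instructions. A hypothetical before/after of the kind of rewrite described (illustrative, not Mark's actual code):

    /* copy_words.c -- the C shape which, per the post, Codewarrior
     * never compiled to DBRA at any optimization level.  The
     * hand-written 68K inner loop is just:
     *
     *     loop:  move.w (a0)+,(a1)+
     *            dbra   d0,loop       ; d0 preloaded with count-1
     */
    void copy_words(const unsigned short *src, unsigned short *dst,
                    unsigned short count)
    {
        while (count--)
            *dst++ = *src++;
    }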

The other common problem with Codewarrior (and other compilers I've used) is that there seem to be a lot of redundant register loads from stack-based variables. This may be because I generally set optimization to the lowest level, which makes it easier to read the assembly-language output and single-step through the code.

Mark Borgerson

Reply to
Mark Borgerson

You set the compiler flags for low optimisation, and you are surprised to get sub-optimal code?

When you need to read or single-step generated assembly, it's often best not to have the optimisation too low (or too high) - all those redundant stack accesses make the code hard to follow.

Reply to
David Brown

Why are you single-stepping the machine instructions of the compiler output so much that this is an issue? Is your compiler unreliable?

Paul

Reply to
Paul Black

When I'm working on peripheral data transfers where I want to transfer as quickly as possible, I quite often look at the generated assembly language. I never did find an optimization level for the M68K compiler where it used the DBRA instructions. Another reason that I keep the Codewarrior M68K compiler at a low optimization level is that it was recommended by the SBC vendor. This may have something to do with the fact that the compiler was really targeted for the PalmOS, but was being used with another vendor's libraries and hardware. There was a time when you could get Codewarrior for the PalmOS for about $400, while the standard Codewarrior M68K was over $2000.

I generally don't step through the M68K code, as the SBC that I use doesn't have good debug facilities.

I do sometimes step through MSP430 code using a JTAG debugger. The compiler that I use (Imagecraft) doesn't have a lot of optimization choices---but does have some redundant register loads.

Sorry if I got the two different cases mixed up in the original post.

Mark Borgerson

Reply to
Mark Borgerson

I seem to recall a classic example from an early 8051 compiler: If you set optimization high and to minimize memory, it would overlay variables in the limited RAM space. That made reading the assembly language pretty confusing at times.

Mark Borgerson

Reply to
Mark Borgerson

Mark,

The assembly can look confusing, but in a well-implemented compiler the variable can be followed by symbolic name as the debugger walks through the compiled code. Physical RAM locations contain different variables depending on the current PC value. The ChipTools 8051 symbolic debuggers did a good job of tracking code from Keil's 8051 compiler as early as the mid-90s.

The source-level debugging information should be able to track a variable even when it temporarily resides in a register. This resolves cases where the local variable's location is reassigned instead of its value being moved. With x and y both local:

    y = x;
    x = 29;

this should not generate any code for y = x, only a symbol-table change and a source-level debug reference change.

Regards,

--
Walter Banks
Byte Craft Limited


Reply to
Walter Banks

The early 90's is about the time frame when I was using the 8051. IIRC, it was a small form-factor package with only about 2K of EPROM. At the time I was using that 8051 chip, a PIC variant, the MC68HC16, and the M68K. I TRIED to stick with one chip or another for at least a week to minimize the context-switch overhead, but was not generally successful. IIRC, debuggers at that time generally involved external hardware with emulator pods, which were well above the company's budget limits.

I expect that if I ever go back to an 8051 variant, I will better understand the development system and expect better debugging facilities. However, as I'm in a low-volume market where unit cost is not a major constraint, I'll probably stick with the MSP430 series for very low power systems and one or another of the ARM series where I need more processing power.

Mark Borgerson

Reply to
Mark Borgerson
