Codewarrior vs. Cosmic C for Freescale 9S08

I have the fun (?) task of picking a compiler/debugger to use for the Freescale MC9S08 family of microcontrollers. I have narrowed it down to two: Codewarrior from Metrowerks (a Freescale company), and Cosmic C. I have used Cosmic C before in a command line mode, but did not have the debugger and so did not use it. Another peer has used Codewarrior and generally liked it. I have CW installed and am starting to play with it and it seems a little weird, but I am still trying to get used to it.

I am wondering if there is an advantage to using either one. Cosmic C is about three times the cost of CW, but I am wondering if it might be worth the extra money. For me personally, I do not use the IDE to write code; I use CodeWright for that, so I am not really concerned with how well either one handles code entry. Others on my team might want a good IDE, however.

Has anyone out there used both of these products and can give me any recommendations on one over the other? Thanks.

Reply to
Mr. C

I'm probably just going to echo some of your own feelings, but here goes:

At a previous employer, we used Cosmic for HC908 projects. Cosmic does a good job. And I prefer CodeWright and make to any IDE, probably because I don't have to learn (and remember) new tools every time I switch target platforms.

I once received a demo CD for CodeWarrior with an eval kit, but never used it, so I can't really comment on that. All the sample code that came with the kit used the impenetrable CodeWarrior project files, and was useless to me for that reason. I guess that's one reason I dislike IDEs -- nonstandard binary project file formats that lock you into the tool.

HTH, -=Dave

Reply to
Dave Hansen

Yes, that, and that one cannot diff those files against prior versions in CVS or SVN and make any sense of what changed from one step to the next.

Nothing is as terse or complete as a Makefile at packing all of a project's particulars into one place.
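A minimal sketch of the idea (the Cosmic tool names cx6808 and clnk are my assumptions for the '08 toolchain, and the file names are invented):

```make
# Everything about the project in one short file:
# sources, tools, flags, link step.
CC     = cx6808          # assumed Cosmic 9S08 compiler driver
OBJS   = main.o isr.o
TARGET = app.s19

$(TARGET): $(OBJS) app.lkf
	clnk -o $@ app.lkf   # assumed Cosmic linker, link-file driven

%.o: %.c
	$(CC) +debug $<

clean:
	rm -f $(OBJS) $(TARGET)
```

One glance tells a new team member what gets built, from what, and with which tools.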

Reply to
David Kelly

I hate IDEs, and I find that if you're trying to work from Makefiles and the command line, Codewarrior is a pig. It's not even a particularly nice IDE as IDEs go, in my opinion.

Cosmic have a very long track record of good compilers on the '08, and I always found their technical support to be absolutely first-rate. Their product may not have much glitz but it has a lot of integrity. It *works*. It generates pretty good code (the only comparable compiler for the '08 in my experience was the old BSO/Tasking one). There used to be some gotchas in the preprocessor (## and # were handled oddly) and some gaps in describing the reentrancy or otherwise of some of the runtime library functions, but they were fixed a long time ago. The Cosmic debugger did in its early days (I'm talking nearly 10 years ago here!) have quite a propensity for spitting out plaintive and rather desperate half-translated error messages in Franglais if the ICE had problems, but again nothing that's a showstopper.

pete

--
pete@fenelon.com "there's no room for enigmas in built-up areas"
Reply to
Pete Fenelon

On Feb 28, 3:04 pm, Pete Fenelon wrote: [...]

[...]

You remind me...

The Cosmic compiler I used was from 2000, so this may no longer be an issue, but of all the different C compilers I've ever used, Cosmic generated the most useless error messages. Not because of "Franglais," but because the error was usually emitted nowhere near, and was completely unrelated to, the actual problem.

However, since I used PC-lint, I usually didn't have too much trouble tracking down the real problem... unless the real problem lay in the way the preprocessor expanded its macros. Even then, I could lint the Cosmic preprocessor output; it just wasn't as convenient.

Again, I like the Cosmic compiler, it's a good tool, and I wouldn't have any reservations about purchasing it for any project requiring an HC08 compiler. (In my current job, I don't have _any_ 8-bit code at the moment). But I would recommend getting (and using!) PC-lint. For _any_ project involving C programming.

Regards, -=Dave

Reply to
Dave Hansen

Absolutely. The boys from Gimpel produce something that's worth its weight in neutronium. It's even better at spotting subtle C++ screwups.

pete

--
pete@fenelon.com "there's no room for enigmas in built-up areas"
Reply to
Pete Fenelon

Thanks guys. Wow, it's amazing how similar we are in interests and dislikes. I also HATE IDEs with a passion. I am one of those diehards that will use Codewright until I have to run some new platform that will not run it. Not to get off on that subject, but I wonder why somebody doesn't buy Codewright and start selling it again. Actually, Borland still sells it, but does not support it and hasn't for several years - too bad. Someday, I will probably have to select a new editor that will also go unsupported a few years later.

I used to run MKS make that runs under the MKS Toolkit's Korn shell in Windows, making things a lot like the Unix environment. The Cosmic C compiler and linker all ran in command line mode from the make process. But I am picking this compiler for our team to use and I don't think the other guys will get into the make environment. I could try to be an evangelist about it, but I am not sure I would get many converts.

After I wrote this post, I have been playing with both CW and Cosmic, concentrating on their debuggers. I was almost convinced Cosmic was the better choice as a compiler, but their ZAP debugger is just plain strange to me (and CW's debugger is not much better). I tend to use a debugger more for unit testing than for "de-bugging" a program. It would be nice if it kept track of watch variables and breakpoints between sessions, but maybe I am missing some option to do that. The debugger looks to be poorly written, making some features hard to use. I am wondering if there is something I need to understand to make it easier to use. I will try to find a tutorial on how to use it; maybe that will make things better.

Thanks again for your inputs.

Lou

Reply to
Mr. C

What does a gm of neutronium go for these days?

I second the endorsement, though. Even more, I'd suggest running lint before the compiler. Only let the compiler at it when it lints 'clean'. I only run into warnings and errors from compilers now for items on which the compiler is wrong, or for which lint really can't help (like not putting all of your object files into the link).
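One way to hard-wire that ordering into a build is a lint "stamp" prerequisite, so the compiler never sees a source file until PC-lint exits cleanly on it (lint-nt is PC-lint's usual Windows executable name; the std.lnt option file and the compiler invocation are assumptions):

```make
# %.o cannot build until lint-nt passes the matching source;
# the empty %.lint stamp file records a clean run.
%.lint: %.c
	lint-nt std.lnt -u $<
	touch $@

%.o: %.c %.lint
	cx6808 +debug $<     # assumed Cosmic compile step
```

If lint-nt exits non-zero, make stops before the compile rule ever fires.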

Robert

--
Posted via a free Usenet account from http://www.teranews.com
Reply to
Robert Adsett

... snip ...

For Windoze use, why don't you just load up the DJGPP system? Their port of gnu make even emulates some bashisms, for convenience, and you can always just run bash as the shell.

--
Chuck F (cbfalconer at maineline dot net)
   Available for consulting/temporary embedded and systems.
Reply to
CBFalconer

From dim, distant and fairly unpleasant memory, there were differences in the handling of case-sensitivity between DJGPP and MKS. I know that at one point we had a limited number of MKS licences and moved a lot of test harness stuff for our command-line tools over to DJGPP and still kept finding weird edge conditions many years later.

Cygwin is a nicer solution than either, these days ;)

pete

--
pete@fenelon.com "there's no room for enigmas in built-up areas"
Reply to
Pete Fenelon

Same experience here. I always run lint before compiling. I have it as a task I can run right from the editor so it's easy. For folks that have not used lint before, I describe it as a compiler on steroids that doesn't generate any code. Quite simply, I would feel lost without it.

Lou

Reply to
Mr. C

I third that endorsement. I use PC-Lint and the Cosmic HC05 and HC11 cross compilers and have found all three to be great tools. I usually compile first, just to make sure I haven't made any gross errors and then I let PC-Lint have at it.

Jim

Reply to
James Beck

One of the reasons I run lint first is that it's often better at finding (and locating the origin of ) gross errors than the compilers are.

Robert

--
Posted via a free Usenet account from http://www.teranews.com
Reply to
Robert Adsett

These days, using MSYS and MinGW makes more sense under Windows. There are also quite a number of IDEs available that use MinGW as the underlying system. Dev-C++, for example, uses MinGW and generates makefiles from the IDE; those makefiles can also be used standalone with GNU make.

Regards Anton Erasmus

Reply to
Anton Erasmus
[...]

Even if some contributors here don't like it, I'm happy with Cygwin.

Why not? Set up a working environment (including an editor) and it won't be too hard for them. There are many editors around with a powerful error message parser (I'm still using MED). Look at

formatting link
- if you like CodeWright, Source Insight might be worth a look for its real-time references.

Also to me.

Try

formatting link
for a cheap solution.

If you want to spend some more money,

formatting link
gives you more power but the software is rather bloated. I'm using the iC3000 for HC(S)12 development.

Oliver

--
Oliver Betz, Muenchen (oliverbetz.de)
Reply to
Oliver Betz

The mixed case is for human consumption.

It is easier to grasp WhatIsThisThing than whatisthisthing or WHATISTHISTHING. If a single case is enforced, people often write what_is_this_thing or WHAT_IS_THIS_THING instead.

If you want to effectively exploit a case-sensitive system, then you should actively use WhatIsThisThing, wHATiStHIStHING, whatisthisthing and WHATISTHISTHING as four different files or four different variable names. That is fine for program-generated data consumed by another program (such as assembly code generated by the compiler and consumed by an assembler), but it is disastrous when humans read or write these names, causing a lot of confusion and mistakes.

Case-sensitive systems are awful when you are doing telephone support, where you have to spell out which letters are uppercase and which are lowercase. In such systems, when you expect that telephone support might be needed, it is best to define from the beginning that everything is either all lowercase or all uppercase. Then why use a case-sensitive system at all?

Paul

Reply to
Paul Keinanen

What languages and cases are you referring to ?

While it might make sense to equate AÄÅaäå to be the same as A for searching and sorting, in other languages A, Ä and Å are distinct letters. For example, in Swedish, the alphabet is ABC...XYZÅÄÖ.

Clearly, this is a locale dependent issue.

Paul

Reply to
Paul Keinanen

It's nothing to do with the encoding. The problem is, say for example in Turkish, that you have four different forms of the letter "i": upper and lower case, in both dotted and dotless variants.

-p

--
"Unix is user friendly, it's just picky about who its friends are."
 - Anonymous
--------------------------------------------------------------------
Reply to
Paul Gotch

Sure - the same sort of thing happens in many European languages too, with accents and even double/single letter pairs (cf. Spanish, which collates "ll" just after "l", and German, which treats "ss" as interchangeable with the single ß). So what?

You just decide what strings are regarded as "too close" to an existing name when you get a request to create a new file, and you stick by it, applying the same matching when opening a file, and preserving the original content of all names. It doesn't even really matter much if you get it a bit wrong, as long as it's consistently wrong so you don't create an ambiguous situation. Users will live with it, and name their files appropriately. Note I'm not suggesting any kind of case folding here, just matching names on similarity.

Reply to
Clifford Heath

Clifford Heath wrote: [case-folding in filesystems]

Upthread you proposed UTF-8 to do that. The practical problem is that many programs still don't do Unicode (last time I tried, Word responded to my double-click on a Cyrillic file name with a complaint about being unable to open "??????.doc"), and case-folding for Unicode needs large tables. Nothing I would want in a kernel, or in a small embedded system.

And if someone finds a bug in your case-folding routine, he has a nice security hole. All layers of your system must agree about the case-folding mechanism. For example, older DOS versions had bugs in their file name parsers that allowed people to create directories with forbidden names (such as "NUL"), which the file name parsers of other system calls refused to access. We used that to annoy our teachers :-)

My position is: a Unix kernel does not touch file names. Every character is allowed. Users can thus easily use any policy they like. UTF-8 seems common, 8-bit code pages as well, depending on the programs and configuration you use. Users who want a case-insensitive filesystem under Unix can "easily" have it by installing a toolchain/library that emulates it (emulating the other way around is harder). The non-existence of such a toolchain may be a clue to how many people want that (or to how ignorant developers are).

Stefan

Reply to
Stefan Reuther
