Code Red for ARM Cortex-M3 development - any good?

Hi,

I am firming up on using Code Red 4 for my new STM32L15xx-based project, and I wondered if anyone here can tell me from experience whether they have had any issues or problems with this tool set? From what I've seen so far it seems OK, and they've been quite responsive by email. The price is good, so all I need is the gotcha list to make a decision :-)

Thanks,

Mike

Reply to
Mike

Hi,

We've used Code Red for quite a few projects with the NXP LPC17xx series, both with FreeRTOS and on bare iron. We had a few issues (with linker scripts, if I remember correctly), and for those the solutions were easy to find in their KB and from support.

The ready-made FreeRTOS and other examples were very useful for us, too. The RedProbe+ sometimes hung, but a reboot always fixed it.

Great value for money in my opinion.

--
Mikko
Reply to
mi

It's GCC, so I assume it's mostly a matter of the IDE.

-Lasse

Reply to
langwadt

Why use a closed tool when there are free and open development tools available, with an increasingly active community?

I once used to program ADSP devices via the VisualDSP IDE and I was 'happy' with it. At that time I was a mildly happy Windows user, not knowing anything about POSIX systems. But then I learned the advantages and the freedom of using GNU/Linux, and going back to a closed environment is simply not possible now.

Unless it is a choice forced by some kind of partnership, I would never choose to limit my freedom, and IMO even though a FOSS toolchain may not be the optimal solution for the target, the overall benefit you get is huge.

Can someone here explain what the gain of an IDE is? I have to say that I can hardly find anything rational or logical behind IDEs, starting from the concept of a 'project', which nearly all of them have but none of them describe formally.

IMO the power you have with a Makefile goes way beyond the fancy icons of an IDE, not to mention that it is far more portable and free from copyright restrictions.

My 2 cents.

Reply to
alb

God I love having an IDE when I'm debugging. Visual breakpoints, register display windows, expanded peripheral registers, all of that stuff is golden.

And a good one also has good refactoring and code navigation tools that work across large numbers of files. I think you can get that level of power from emacs or vim as well, but I haven't managed to find it in any sensible editor.

But yes, their concept of a "project", and their firm belief that they know better than me how to organize one, is like dental surgery on a moving truck.

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order.  See above to fix.
Reply to
Rob Gaddi

Ever tried ddd? It is a graphical front-end for command-line debuggers like gdb and will let you set breakpoints, display registers and variables, and all that stuff.

I use vim and emacs with ctags, which is a very powerful yet simple tool to index and tag all variable and function definitions, which can then be easily navigated.
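
A minimal sketch of that workflow, for illustration (the tag name here is just a placeholder):

$ ctags -R src/        # index every definition under src/ into a 'tags' file
$ vim -t uart_init     # open straight at the definition of uart_init

Within vim, Ctrl-] then jumps to the definition under the cursor and Ctrl-T jumps back.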

IMO, if you regularly have to work across a large number of files, I would reconsider the structure of your files :-)

Right. That is incredibly irritating.

And why reinvent the editor? Each of these lousy IDEs has its own editor, which is irritatingly 20 years behind emacs or vim (I use emacs with the VHDL extension, and it is a blessing!).

Reply to
alb

Which is why I use Eclipse almost exclusively. As long as it'll talk to the debugger, I get all the cool integrated debugging stuff, and when I save a file it invokes the makefile that I or a trusted coworker wrote.

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com
Reply to
Tim Wescott

What is that overall benefit? Better code generated more quickly? I've used GCC-ARM, Codewarrior, and IAR EW-ARM. I didn't find that GCC-ARM produced better code more quickly than the others.

IDEs are great at helping you generate complex make sequences without the necessity of typing file names, make commands, and options WITHOUT ERRORS. If I could type 100WPM without error and had memorized all the library file names and compile options, I probably would have less use for an IDE.

To quote Lord Acton: "Power tends to corrupt, and absolute power corrupts absolutely."

Makefiles can get you in trouble as often as they get you out of it. Some projects, such as building a Linux distro, probably need the flexibility of makefiles and the supporting Linux tools. OTOH, many embedded projects running on bare silicon reduce to a makefile of perhaps 20 or 30 lines. For those, an IDE that allows you to pick the processor, organize the libraries, stack, and heap, and compile, link, download, and debug with a series of menu items can save quite a bit of time in looking up and correctly typing file names and make commands.
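
For a sense of scale, such a bare-silicon makefile might look something like the sketch below (toolchain, flags, and file names are examples only; recipe lines must start with a tab):

# minimal bare-metal makefile sketch - names and flags are illustrative
CC      = arm-none-eabi-gcc
CFLAGS  = -mcpu=cortex-m3 -mthumb -Os -Wall
LDFLAGS = -T stm32l15x.ld -nostartfiles
OBJS    = main.o startup.o uart.o

firmware.elf: $(OBJS)
	$(CC) $(LDFLAGS) -o $@ $(OBJS)

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

clean:
	rm -f $(OBJS) firmware.elf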

IMHO, a good IDE is the distillation of hundreds of hours spent by good programmers boiling the essence of a programming environment down to a series of GUI elements. If it's done properly, you should never have to worry about misspelling a file name or generating a proper set of options for the compiler. If it's done really well, you can override any of the generated options and add your own.

At some point, you have to relinquish the power that you get from writing your own make files, peripheral drivers, and libraries. People who use IDEs just do so a little sooner, in the hope that they can spend more time on the problem rather than on the process.

My son is taking an upper-division Computer Science class on operating systems. So far, the instructor has insisted that all problems be solved using the Linux command-line interface. It seems that the teaching of OS fundamentals hasn't changed much since I was a CS instructor in the mid 80's. What do you want to bet that he and his fellow students come out of that class thinking that writing your own make files is the best way to program a computer? ;-)

Mark Borgerson

Reply to
Mark Borgerson

As long as the makefile is human-readable, and not a nested bunch of auto-generated makefiles specific to that IDE.

One advantage I have found with properly constructed makefiles is that they are quicker and easier to port to other processors/compilers/IDEs than the auto-generated kind. In ten years' time I cannot guarantee that the auto-generated ones will be readable by newer versions of the same IDE or compiler; I have not had that problem with hand-crafted ones.

Don't even get me started on IDEs that have 'hidden' makefiles as part of their custom format project/workspace control file(s).

With a makefile that is only 20-30 lines long, it is just as easy to write your own, even with helper files for linker maps specific to the processor. Too often I find that auto-generation leads to all sorts of ancillary issues, where you spend just as long finding out that the build did not work because of some option on a menu three levels down, or on the third tab of option B, or because you had to press button Z on some dialog to select the right setting. Auto-generation is not a guarantee of success.

On a couple of IDEs with their 'integrated' project description and 'builder' control menus, I had to specify the processor three times (for the project, the compiler, and the linker) before it would work properly. This should have been specify once, use many times.

The amount of time it takes to hand-craft a makefile for an embedded app is not long, and it will be used thousands of times more often than it is edited. Unlike some IDEs, which I have at times seen regenerate their makefiles as part of every build!

--
Paul Carpenter          | paul@pcserviceselectronics.co.uk
    PC Services
 Timing Diagram Font
  GNU H8 - compiler & Renesas H8/H8S/H8 Tiny
 For those web sites you hate
Reply to
Paul

I guess that's another reason I like Eclipse: one click, and no more automatically generated makefile.

I will grant you: if all you want is prototype code in a hurry, and to hell with anyone that has to work on things a year from now, then an IDE can be a boon.

But if you want code that lasts, it can be Very Bad Indeed.

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com
Reply to
Tim Wescott

Sure, but that's a reason to get a decent debugger, not to get an IDE. Visual breakpoints or mixed asm/source view are pretty tricky if your source view doubles as a source editor.

And if everything is visual, how do you set a breakpoint on printf? Solutions I have found so far in IDEs are "it's not possible" (VC6), or "click 'window->show view->modules', 'functions by name', 'malloc .. strcpy' (wording varies), 'printf', copy the address (note that Ctrl-C does not work), click 'window->show view->disassembly', paste the address into a five pixel wide field, press enter, double-click" (Code Composer Studio v4, based on Eclipse). Or "navigate to the libc source code if you have it, lose if you don't".

Boy do I love the simplicity of a 'b printf' in gdb, Green Hills or Lauterbach.
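
For the record, the entire gdb version of the Code Composer procedure above, using the standard command abbreviations:

(gdb) b printf        # one breakpoint on printf, wherever libc put it
(gdb) c               # continue until somebody calls it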

Any sensible editor has support for ctags/etags; otherwise it doesn't deserve to be called a sensible programmer's editor. This solves the navigation problem for me most of the time. Plus, my emacs has 1000 files open anyway, so most of the time opening a file is C-x b name-of-file.c RET, with no need to navigate through a file system. Try that with Eclipse.

Stefan

Reply to
Stefan Reuther

This raises a few questions:

  1. Why does your emacs have 1000 files open?
  2. How do you pick the file you would like to edit among those 1000 files?
  3. What projects do you work on that have 1000 files?
  4. How much system resources does it take to manage those 1000 files?

Still, I suppose a thousand files isn't anything too extraordinary. After all, the W7 system that I'm using to write this post has 911 active threads, 26,987 handles, and 71 active processes.

Mark Borgerson

Reply to
Mark Borgerson

On 7/7/2012 9:24 PM, Mark Borgerson wrote: [...]

The benefit of using a FOSS toolchain, as I already said, does not necessarily come from being the optimal solution for the target; it comes from many aspects that IMO tend to come with it:

- standardization: the FOSS community has always paid great attention to the necessity of agreeing upon standards, from file formats to protocols and much more. A closed tool has no such interest, and from time to time even different versions of the same tool do not support file formats they originally created.

- support: there are many 'channels' through which you may get an incredible amount of support. Certainly proprietary tools also have their 'support', but when the problems get hard to solve, the support you get may depend on the size of your organization.

- quality: 'peer review' is what lies behind the quality of your software. Widely open code can certainly receive many more reviews than closed code.

- flexibility: if you are not happy with what it does, you can fix it. Even though it is unlikely that you will mess around with a cross compiler yourself, there are very skilled people out there who may do so and share their improvements, without the need to sell you yet another version of the tool.

- cost: even though you may be working within a big company that can easily buy licenses, cost may be a show-stopper for small businesses, which eventually get stuck with a product and its 'updates'.

- freedom: the freedom to *use* the code the way you want (provided you do not violate the license agreement).

Auto-generated 'make sequences' are very difficult to manage and certainly not easy to share between tools. The problem is not typing a file name 'without errors'; the problem is knowing what you are doing with the 'make sequence', so that if you want to add or modify something you can do it easily.

I compile my code with one command:

[me@work]$ make

How hard is that to remember? And before releasing my version I compile with the following:

[me@work]$ make 1>/dev/null

which helps me find out whether there are any warnings I missed along the way.

[...]
[...]

A Makefile may work with any structure of directories and files you care to invent, while an IDE will force you to use its own choice. And what happens if in the next version they decide to move the 'library' directory (what a silly one) into a folder called 'work' (just to remind you that you are there to work) inside a directory called 'release'? Would your previously defined 'make sequence' still work?

Certainly they will be kind enough to let you 'import' the previous 'project' with a special button in the top left corner, which launches a 'wizard' that will eventually say:

"Congratulations! you successfully imported the project ABC. Beware that you cannot import this project back!"

Personally, I don't find that quite amusing.

The 'essence of a programming environment' sometimes is what you want to have under your control, rather than somebody else's.

Solving the 'problem' - as you call it - is usually the smallest part of the 'process' of writing software. The infrastructure of your software comes from the overall 'process', and the less solid it is, the more you will suffer when your software grows a little beyond 'hello world'.

The 'process' of software development doesn't differ very much from the 'process' of building a bridge or a car. Very seldom will your 'problem' hit you, unless you are doing cutting-edge research, where nevertheless having control over the 'process' will certainly help your productivity.

Well, somebody once wrote: "East is East and West is West, and never the twain shall meet". It is clear to me that what you consider pros I see as cons, and vice versa.

I hope they will enjoy the class and I hope they will appreciate the freedom you have when working on a GNU/Linux system.

Reply to
alb

Are you saying that you don't have to change the makefile if the source files get moved? That hasn't been my experience unless ALL the files are in the same directory tree. Most of the IDEs that I've used allow one to move a whole project directory tree without problems.
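
For comparison, plain make does offer directory variables and a source search path for this; a minimal sketch (the paths here are invented):

# move a tree, change one line
INCDIR = ../common/include
LIBDIR = ../common/lib
VPATH  = src ../common/src     # where make searches for prerequisite sources

CFLAGS  += -I$(INCDIR)
LDFLAGS += -L$(LIBDIR)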

A good IDE also allows you to define variables for various library and header file directories.

That's certainly true if your goal is to build a programming environment, rather than a standalone application. However, every minute you spend on controlling the environment is a minute you don't spend writing and documenting your application.

You're writing a different kind of software than I am. I'm working on embedded applications for the Cortex-M3 and MSP430. Each project has between 10 and 15 source files, and the final object code is from 20 to 60 Kbytes. The most time-consuming part of each project is writing the code to set up and manage the peripheral devices (SD cards, USB ports, timers, A/D converters, etc.). These projects also involve my designing the circuits, laying out the PCB, and assembling and testing the board. When all those things are done, I have little time left over for controlling my programming environment.

I can see that if your goal is to generate code more efficiently, FOSS software may be a help. However, if your goal is to generate more efficient code, I don't see it being much help.

I do agree that FOSS software and tool chains are a good idea and good for the software developer in general. If nothing else, they provide some competition and benchmarks for the commercial tool vendors.

It's ironic that the appreciation of that freedom starts with a list of tools the students CANNOT use. ;-) Linux has a lot of good features, but making it easy to directly control peripheral devices is not one of them.

Mark Borgerson

Reply to
Mark Borgerson

Permanence. By using gcc, I know that I will always, always, always be able to get my exact toolchain back if I try hard enough. I won't have to try to contact a defunct company's license server to reinstall on a machine with a new MAC address or hard disk ID, or a still-existing company who will no longer provide licenses for "ancient" tools.

It might take me going all the way back to rebuilding gcc and newlib from source, and that might take me days, but it's not impossible.
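
The rebuild itself is the classic configure/make recipe; a rough sketch (version numbers and install prefix are examples only, and a real cross build wants a binutils pass first):

$ tar xf gcc-4.7.1.tar.bz2
$ mkdir build && cd build
$ ../gcc-4.7.1/configure --target=arm-none-eabi \
      --prefix=/opt/arm-tools --enable-languages=c --with-newlib
$ make && make install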

--
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order.  See above to fix.
Reply to
Rob Gaddi

Why not?

C-x b name-of-file.c RET

Or, actually, F10 (my shortcut for C-x b) namfic RET, using completion.

Automotive infotainment, from low level (boot loader) through the whole stack (drivers, file system, codecs), to high level (HMI adaptation).

The emacs process has a memory footprint of about 50-100 megabytes, containing all those files, and their undo data for the last half year.

About the same size as an Eclipse with no files open.

Stefan

Reply to
Stefan Reuther

Wow! You can keep track of 1000 file names in your head? I have trouble remembering the last 50 files I've used.

That's a much larger system than any I've worked on. Even the flight control system I worked on had only about 50-60 files---but we used pretty standard drivers for everything and depended on the OS (linux) to provide those, so we didn't have to recompile them.

Does the emacs process hold all the file structures, or does the OS hold them while emacs just keeps file handles?

The IAR IDE with no files open uses 47MB. Each file opened adds about 4K, so 1000 files would add only 4MB. I don't know how the editor would handle having that many tabs in the top bar, though. ;-)

With the C-Spy debugger running and several debugger windows open, EW-ARM goes up to 48.5MB.

Mark Borgerson

Reply to
Mark Borgerson

Confirmed. I had to build the g21k cross-compiler, based on gcc-2.3.3, from scratch with my gcc-4.4.1. I needed to change a few obsolete things, but after some struggle I did manage to compile the compiler and the binutils (linker, assembler...).

Certainly YMMV, but as a counter-example, when I needed VisualDSP running I had to look for an old computer in order to install Windows 98 and get it working with the EZ-ICE on the ISA bus. After finding the right hardware combination and installing network drivers from a 3.5-inch disk, I said to hell with VisualDSP and started my journey with g21k.

I still miss the possibility of hooking up the emulator, but I decided I can build my own debugging tools into the code itself, which will certainly pay off when I have no possibility at all of hooking anything up, since the hardware will be in space.

It took me a little more than 'days', but I now feel much more satisfied than I once was.

Reply to
alb

I agree. The high cost of application development using FOSS comes from focusing on the tools instead of the application.

Changes made by FOSS developers rarely go through the rigorous design and testing that commercial tools go through, and as soon as applications become large enough, change side effects start to take their toll on debugging and implementation time.

FOSS impacts commercial tools in a couple of ways. It has forced the cost of commercial tools up, primarily by taking away the easy sales. The support aspect of commercial tools has changed and become more formal, with fewer releases and more attention being paid to international standards.

In the automotive area, for example, not only are language standards being tested, but standardized regression tests used by tool vendors are also emerging.

Most of the FOSS tools are using 20+ year old technology whose code generation is weak.

w..

Reply to
Walter Banks

The FOSS community has a lot of ad hoc standards but doesn't participate in standards groups and generally doesn't support formal standards. Commercial tools do, and that makes them much more flexible to use in unusual combinations of code generation toolsets, simulators, emulators, and debug devices.

This is especially true of language implementation in FOSS tools and conformance to IEC/ISO standards. I have been shocked that, of all the work that has been done on tool sets, very little has been done in the FOSS community on conformance testing.

w..

Reply to
Walter Banks
