8051 on-chip debugging

This is an extension of an existing product. The issue is to keep the development investment to a minimum. That, and the fact that he doesn't trust programmers' time estimates. ;-)

I have enough problems dealing with others' hardware. Software? Forgetaboutit.

Reply to
krw

I know a company that bought the Raisonance system (4 seats). According to them there were so many bugs and problems that they went out and bought the Keil system to save the project. They said there was no comparison between the two systems.

Also, Raisonance does not support anything like the same number of 8051 parts.
--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

I don't suppose you are a Senior Engineer at Raisonance, as the LinkedIn entry for Bruno Richard says?

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

And I don't suppose you have any connection with, or interest in, Keil or Hitex either?

We all know that Keil C for the 8051 is an excellent product, but you do sound like an evangelist yourself sometimes. Slagging off a company as you did in your last post is not what is expected from a professional engineer...

Regards,

Chris

Reply to
ChrisQ

Do you have a reference for that? I must admit that it's difficult to find performance ratings and comparisons for embedded processors in general, never mind compilers. In the past I have had to gather info from many sources to build an MCU performance table, but I hadn't thought much about compiler comparison.

Unbelievable - like they thought that no one would notice? As you say, probably not much difference in performance these days. The gcc team do nothing but work on gcc, and they get funding from many sources, so I don't really see how any independent commercial toolchain vendor could bring the same resources to bear. As you imply, the differences may be more in the (math?) libraries, but I have no direct experience of that.

With regard to the ongoing toolchain build saga, I have been writing the header files for the M3. I know a set is included from ST, and you have all the CMSIS library stuff as well, but it's not to the house coding standard here, and writing the header files and peripheral register definitions yourself gives you a good insight into the more subtle capabilities of the device, a lot of which you might miss if you just use the provided files. It's a bit time consuming, but it's something I always do, and a worthwhile investment that you only need to make once...
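
As a purely illustrative sketch of what I mean, here is the struct-overlay style of hand-written register definition in C. The peripheral layout, names and base address below are invented for the example, not taken from any real ST reference manual:

#include <stdint.h>

/* Hand-written peripheral definitions, struct-overlay style.
 * All names and addresses are illustrative, not a real device map. */
typedef volatile struct {
    uint32_t CR;     /* control register */
    uint32_t SR;     /* status register */
    uint32_t CNT;    /* current counter value */
    uint32_t PSC;    /* prescaler */
    uint32_t ARR;    /* auto-reload value */
} timer_regs_t;

#define TIM2_BASE   0x40000000u                /* assumed base address */
#define TIM2        ((timer_regs_t *)TIM2_BASE)

/* Bit definitions accumulate naturally as you read the manual. */
#define TIM_CR_EN   (1u << 0)                  /* counter enable */
#define TIM_SR_UIF  (1u << 0)                  /* update event flag */

Transcribing each field by hand is slow, but every register you write out is one you have actually read about in the reference manual, which is rather the point.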

Regards,

Chris

Reply to
ChrisQ

No interest in Hitex, other than that I used to work for them over six years ago.

I can supply many compilers myself (over half a dozen brands, including GCC and Keil).

I worked for many years as an engineer using Keil compilers. I have used them in depth on real high-integrity development, and personally know many others who are pushing the 8051 family to its limits.

We have also validated compilers for safety critical use.

However, recently I have had to give support to someone using the Raisonance 8051 compilers who switched to Keil mid-project because of the problems. I expect Bruno will be familiar with them, though I don't think this is the place to detail all the problems, or to name the client.

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

Some time ago I compared GCC with ARM ADS 1.2, and found that the ARM compiler generated about 10% smaller code, which was significant enough to make it worth using (I had a 4KB code limit for that project). This was using Thumb mode on an ARM7.

In the meantime, both the ARM and GCC compilers have improved, and I haven't done any more comparisons.

As for the other properties, I preferred GCC, due to its much better extensions, better assembler syntax, and more powerful linker scripts.

Having to deal with the license server was also a royal pain when traveling with a laptop, and trying to get some work done without a good network connection.

Reply to
Arlet Ottens

The licences of many commercial compilers do not permit the publishing of any benchmarks. I think this is particularly true of the American ones, as the US has a culture of advertising in which direct (and usually negative) comparison to competitors is permitted. In Europe that is not common, and in many places it is prohibited. This may explain how the restriction came about.

However... most commercial compiler companies usually manage to get hold of all their competitors' compilers and run various benchmarks. The results are obviously confidential.

From what I have seen, apart from the usual Dhrystone and Whetstone, most run a lot of "real world" benchmarks, compiling large applications etc.

The standard benchmarks are of little use unless you are building benchmarks. BTW, from personal experience, compiler companies do not specifically optimise their compilers to be good at the Dhrystone/Whetstone type benchmarks. There is little point.

Many will have applications, used with permission, from lead clients they have done tech support for. So there is a lot of real-world application benchmarking going on. It just does not get out into the public domain.

They tend to do "real world" benchmarks because anything else is artificial and of little real use. The benchmarking is for internal use, for the compiler development teams to see how their compiler stands up in reality to others. No one has customers writing Dhrystone applications :-)

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

I don't have any references, but I believe it is quite common to have restrictions on publishing benchmarks, comparisons or reviews of tools without explicit permission - most commercial EULAs will include something to that effect. It is understandable - it is very hard to do a decent objective comparison and benchmark, so it is very easy to put tools in a bad light even if you don't intend to. Commonly used benchmark code is usually totally meaningless to embedded systems (who is going to use their embedded system to calculate Dhrystones?) and comes with meaningless restrictions (disallowing inlining, etc.). To be useful, you really need to test a compiler on your own code - and most suppliers can give you a demo or evaluation edition.

Still, it would be nice to see some clear independent comparative benchmarking for toolsets. As it is, most of what we see is either totally biased in favour of a sponsoring company, or anecdotal claims that are typically long out of date.

When it comes to a C-friendly architecture like ARM, you are right - gcc and "big name" commercial companies will produce code of similar quality. Different compilers will be slightly better at different code, and there will be variations in the balances between size and speed. Short sequences of code will typically generate almost identical code, but there will be bigger differences with larger blocks. But overall, the quality will be similar (especially since gcc got full inter-module optimisation support).

It is a different matter for compiler-unfriendly architectures, like the 8051 (for which there is no gcc port). Smaller processors benefit from more dedicated compilers, while gcc aims to support a wide range of targets.

It is also worth noting that although gcc is supported by many companies, much of that support is aimed at non-embedded targets. For example, Intel, AMD and Red Hat all provide money and developers, but their main interest is gcc on x86 platforms. gcc on ARM benefits to some extent from this work - many optimisations and features are platform independent. That is why gcc has more support for modern C and C++ standards than most commercial embedded toolchains - embedded gcc targets get it "for free" from the x86 world. But while ARM, CodeSourcery, and many other companies directly support arm-gcc, I don't expect there are as many people working on arm-gcc as there are, for example, on Keil's ARM compiler.

Reply to
David Brown

As ChrisH says, it's in the license conditions of the commercial compilers. The big players anyway.

We used to have a guy on here who was extremely knowledgeable about the ARM compiler libraries and the CM3; I think he wrote a lot of them. In any case, he regularly trashed the gcc maths library implementations, and I do tend to believe him, much as I like gcc.

That's exactly what I do too, but it seems like an uphill task in this case - there's an awful lot of cruft in there. I've still not decided which way to go; I think I will likely use the core_cm3 files but ignore all the ST ones.

What I usually find with these things is that there is, say, a UART library with a few dozen functions to set/reset/query every individual configuration bit, status flag and operating mode - and it still turns out to be just a lame polling driver rather than interrupt driven.

Whereas in my own code it ends up being a couple of config register writes and off you go. I don't *care* about IrDA mode or whatever. Yes, the compiler will remove a lot of the unused stuff, but it's still there to confuse me when I go bug hunting.

It's very nice to have it all available as a reference though.
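
To illustrate the "couple of config register writes" style, here is a minimal sketch; all register names, addresses and bit positions are invented, not from any real part:

#include <stdint.h>

/* Minimal polled UART bring-up. Every name, address and bit below
 * is invented for the example; a real part needs its own datasheet
 * values. */
#define UART_SR    (*(volatile uint32_t *)0x40004000u)  /* status */
#define UART_DR    (*(volatile uint32_t *)0x40004004u)  /* data */
#define UART_BRR   (*(volatile uint32_t *)0x40004008u)  /* baud rate */
#define UART_CR    (*(volatile uint32_t *)0x4000400Cu)  /* control */

#define UART_CR_EN    (1u << 0)   /* enable UART, 8N1 default */
#define UART_CR_TXEN  (1u << 3)   /* enable transmitter */
#define UART_SR_TXE   (1u << 7)   /* transmit register empty */

static void uart_init(uint32_t pclk_hz, uint32_t baud)
{
    UART_BRR = pclk_hz / (16u * baud);     /* write 1: baud divisor */
    UART_CR  = UART_CR_EN | UART_CR_TXEN;  /* write 2: enable, tx on */
}

static void uart_putc(char c)
{
    while (!(UART_SR & UART_SR_TXE))
        ;                                  /* poll for space */
    UART_DR = (uint32_t)(uint8_t)c;
}

Two writes and a polling loop - nothing to strip out, and nothing to wade through when bug hunting.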

--

John Devereux
Reply to
John Devereux

I have not seen this for compilers as I only use gcc, but it's common practice for database companies and it is usually enforced as part of the T&Cs for the product, IIRC.

It would not surprise me at all if the commercial compiler companies were doing the same thing.

There are some things I have not seen mentioned yet.

First, an increasing number of people are using something other than Windows these days. The last time I checked, many of the commercial compilers were Windows-only.

I just checked the Keil website, and it appears that the ARM evaluation product is still Windows-only.

Before someone points out that Linux desktops are a small percentage of overall desktops: you would be correct, but it doesn't really matter, because I see a far higher percentage of technically aware people choosing a non-Windows platform than the general figures would suggest.

If you have a choice between running a toolchain on your preferred operating system, or running one on the vendor's preferred (different) OS, which one are you going to choose?

Don't confuse being _forced_ to use an operating system with _wanting_ to use an operating system. Also, don't forget that, at least in the commercial business programming world, the people comfortable with open source are holding increasingly senior management positions. I would be surprised if the same was not happening in the commercial embedded world as well.

There seems to be the same type of dismissive denial coming from the commercial compiler companies as once came from Microsoft, before Microsoft finally woke up to the reality of a developing open source world.

BTW, I also really like being able to use the same tools for embedded development and for native development.

I like to do the same when it's practical. I don't know about you, but I also extend this to things like linker scripts, makefiles and startup code, even looking at why one vendor might use certain linker/startup options when another does not. I too find it gives me a deeper understanding of what is going on.

Sometimes, however, looking at the vendor code makes you realise things you have never really considered. For example, the Atmel AT91 software library does not have a valid vector table for the SAM7S until _after_ the remap has taken place. I initially thought that was just wrong, but then I realised that there is little point in defining a vector table if you haven't yet set up a stack pointer for the mode in question, or completed other initialisation.
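
As a sketch of that ordering (the remap register address is from my memory of the SAM7S manual, so treat it as an assumption to be checked):

#include <stdint.h>

/* AT91SAM7S-style remap, as a C fragment. Out of reset, flash is
 * mapped at address 0, so whatever sits in the first words of the
 * image acts as the vector table. The 0xFFFFFF00 address for the
 * MC remap control register is an assumption - check the datasheet. */
#define AT91_MC_RCR  (*(volatile uint32_t *)0xFFFFFF00u)

void remap_when_ready(void)
{
    /* By this point the startup code is assumed to have already:
     *  1. set a stack pointer for every ARM mode that takes exceptions,
     *  2. placed a valid vector table (and handlers) in SRAM.
     * Only now does making that table live at address 0 achieve anything. */
    AT91_MC_RCR = 1u;   /* toggle remap: SRAM now appears at address 0 */
}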

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

In message , David Brown writes

Not that common... however, there are several main players who do prohibit the publishing of benchmarks, so there is little point in the others publishing their own. Benchmarks are only of any use when comparing things.

Also, anyone will scream blue murder that the figures and/or testing are fixed if they come out at the bottom, or not at the top.

However, all the benchmarks I have seen (these are internal, as they usually include compilers whose licences prohibit publishing benchmarks) normally use not just the standard benchmarks but also a number of large reference programs, often from customers.

This is very true. I always tell people to do this.

Never going to happen.

Well, I have several sets of tests from 12 and 6 months ago. None of which I can publish or name, of course, as they are (a) confidential and (b) contain benchmarks for compilers you can't benchmark...

It is interesting to see the trends from one set to another.

Not according to the information I have.

This is true. The smaller the MCU the more specific the compiler has to be.

Yes, it is a generic system, not really suited to embedded systems.

That is not the reason. C99 was not wanted or needed by the embedded community - hence no mainstream embedded compiler supports C99, 12 years on. BTW, having a C99 switch does not mean a compiler actually supports C99!

As mentioned elsewhere, the ISO C committee are moving to make some C99 features "optional".

It is not the quantity.

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

In message , John Devereux writes

Not all of them... it tends to be a North American thing.

I thought there were various different sets of libraries for gcc?

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

In message , Simon Clubley writes

Obviously you have not looked recently, then:

formatting link

I don't see that but I only tend to talk to about 30 different companies a week.

The one that is most appropriate?

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

Well, I am not at all an expert, but yes, in principle. Here is my understanding:

1) There are things that appear as "inline" code, like basic integer arithmetic and conversions between one integer type and another. These actually make up the vast majority of most projects' embedded code, and are what I mean when I say there is "little difference in compiler output".

2) Then there are functions that cannot easily be expressed inline, like floating point versions of basic arithmetic operations (on an integer-only processor). I think these are in libgcc, part of the FSF gcc distribution. I believe there are a couple of implementations that in principle you can choose from when building gcc yourself, but the distribution will normally set this for you automatically according to the selected target.

3) Then there are things like trig functions; these require explicit linking with a math library, for example the libm component of newlib.

I think some of the commercial gcc vendors provide their own implementations of 3). Don't know about 2).
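
To make the three layers concrete, a small sketch in C (assuming an integer-only ARM target built against newlib; exactly which library calls are emitted depends on the gcc configuration):

#include <math.h>
#include <stdint.h>

uint32_t layer1(uint32_t a, uint16_t b)
{
    /* 1) Pure inline code: integer arithmetic and integer
     *    conversions. No library is involved at all. */
    return a + (uint32_t)b * 3u;
}

float layer2(float a, float b)
{
    /* 2) No FPU, so the compiler emits a call into the libgcc
     *    soft-float support (e.g. __aeabi_fmul on ARM EABI). */
    return a * b;
}

double layer3(double x)
{
    /* 3) Resolved by the math library (libm, e.g. from newlib);
     *    requires linking with -lm. */
    return sin(x);
}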

--

John Devereux
Reply to
John Devereux

Thanks for the pointer; it's nice to see that some commercial people are supporting Linux as a development environment.

I do note that the product seems to be built on top of gcc and Eclipse, instead of being built from the ground up with its own proprietary compiler and development environment.

BTW, I looked at the Keil website because it's the one you always seem to be mentioning, and a quick look didn't reveal any non-Windows support at all.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

I am sure there are /some/ vendors who are honest enough to admit that they don't compete on code speed or size, but on other factors (support, price, etc.) :-)

I believe that if you are going to publish something, you should do your best to make it fair and impartial. Obviously this is very hard to do in comparative benchmarking, so most people should refrain from publishing such information - especially if they are in a position of authority.

But if you /do/ want to publish benchmarks, you should start off with some clear objectives - define the code you want to run and the target, which should ideally be a freely available simulator. (Free, so that anyone can duplicate your results if they have the compiler.) Your code should be freely available. Vendors should be contacted and asked if they have suggestions or comments. Multiple results should be published, such as optimising for size or speed, using "standard" compiler flags and using "specialised" compiler flags (perhaps suggested by vendors), and maybe also extra results for code variations that have the same resulting output but smaller or faster target code. There must be enough information and code that anyone can repeat the tests, and vendors must have the chance to make comments.

Obviously for internal benchmarks used by vendors, the rules are totally different - as is the purpose of the benchmarks.

Maybe not, but it would still be nice to see!

One big hurdle here is who is going to pay for it all? Doing a proper job would be a lot of work, even for just one target, and it must be kept up-to-date.

While I understand exactly where you are coming from, and why you cannot give any more information, any comments you make about compiler performance are about as useful as a new claim for cold fusion. Unless you can give concrete names and numbers that others can reproduce, it's just marketing waffle - not science.

I base my claims on my understanding of compilers, my fairly extensive use of gcc on many architectures, and my usage and testing of various commercial tools against gcc on several architectures. But like you I cannot give details, and so my claims are anecdotal (they are not marketing waffle, since I'm not selling anything).

But like you, I recommend people try out different tools before deciding. It is also important to look at other aspects of the tools - object code size and speed is only one point, and it is not often the most important issue when picking a tool.

gcc is a generic compiler system - that's true. It supports dozens of hosts, many dozens of target processors, and several languages (C, C++, Ada, Objective-C, Java, Go, etc.). That all has advantages and disadvantages.

But it is /not/ true that it is "not really suited to embedded systems". It was not originally /designed/ for embedded targets, but it works extremely well in that area. Much like the C language itself, really.

It is important to make some distinctions here - there are C99 features that few embedded developers have any use for (and some that no one has much use for). But there are others that /are/ useful. This is why very few (if any) toolchains can claim full support for C99, but most support a fair number of the C99 changes. The same applies to the C++ standards, and the work-in-progress C1X standard. gcc has always been a forerunner here, with early support for at least the most useful new language features. Some commercial toolchains are also good at supporting new features - others are pitiful.

Here are a few C99 features that /are/ useful to embedded programmers (of course, none are /needed/, strictly speaking):

  • inline functions
  • // comments
  • designated initializers and compound literals
  • mixed declarations and code

Other features such as improved wide chars, non-ASCII identifiers, and variable-length arrays are of less use.
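
As a quick sketch of those features together (all the names below are invented for illustration):

#include <stdint.h>

typedef struct {
    uint32_t baud;       // // comments (C99)
    uint8_t  data_bits;
    uint8_t  stop_bits;
} uart_cfg_t;

// C99 inline function
static inline uint32_t div_round(uint32_t n, uint32_t d)
{
    return (n + d / 2u) / d;
}

uint32_t uart_divisor(uint32_t clock_hz)
{
    uart_cfg_t cfg;

    // Compound literal with designated initializers:
    cfg = (uart_cfg_t){ .baud = 115200u, .data_bits = 8, .stop_bits = 1 };

    // A declaration after a statement: mixed declarations and code.
    uint32_t divisor = div_round(clock_hz, 16u * cfg.baud);
    return divisor;
}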

While it is certainly true that not all developers are equally good, you are on /very/ shaky ground if you want to imply that the programmers working for Keil and other commercial toolchain developers are somehow "better" than those who work on gcc. You have no possible way to justify such a claim - even if you have evidence to back it up, which I sincerely doubt, there is no way you could publish it. Let's just say that you personally think that certain unnamed commercial toolchain developers do a better job of compiler development than the gcc team - and leave it at that.

Reply to
David Brown

The large 16- and 32-bit systems of yesteryear are the microcontrollers of today. C's assumptions of 16- and 32-bit ints, and of efficient access via pointers and a stack, crippled it on 8-bit CPUs, but it works extremely well on, say, an ARM. Furthermore, new parts are usually explicitly designed with C in mind. So C actually becomes a *better* fit as time goes by.

And so, by extension, does gcc, even though it too was created for "large" (non-embedded) systems.

[...]
--

John Devereux
Reply to
John Devereux

There are. I always use the C library which comes with MSPGCC (MSP430) for ARM projects; it is very small. If space constraints are really tight, I switch to a small implementation of printf/sprintf.
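
For flavour, here is the shape of such a cut-down printf - a sketch only, supporting just %u, %x and %s, with a hypothetical uart_putc() assumed to exist elsewhere:

#include <stdarg.h>
#include <stdint.h>

extern void uart_putc(char c);   /* assumed byte-output routine */

static void put_uint(uint32_t v, uint32_t base)
{
    char buf[10];                /* 32-bit decimal needs at most 10 digits */
    int i = 0;
    do {
        uint32_t d = v % base;
        buf[i++] = (char)((d < 10u) ? '0' + d : 'a' + d - 10u);
        v /= base;
    } while (v != 0u);
    while (i--)
        uart_putc(buf[i]);       /* digits were stored least significant first */
}

void tiny_printf(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    for (; *fmt != '\0'; fmt++) {
        if (*fmt != '%') {
            uart_putc(*fmt);
            continue;
        }
        fmt++;
        if (*fmt == '\0')
            break;               /* lone '%' at end of format string */
        switch (*fmt) {
        case 'u': put_uint(va_arg(ap, uint32_t), 10u); break;
        case 'x': put_uint(va_arg(ap, uint32_t), 16u); break;
        case 's': {
            const char *s = va_arg(ap, const char *);
            while (*s)
                uart_putc(*s++);
            break;
        }
        default:  uart_putc(*fmt); break;   /* "%%" prints '%' */
        }
    }
    va_end(ap);
}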

--
Failure does not prove something is impossible, failure simply
indicates you are not using the right tools...
nico@nctdevpuntnl (punt=.)
--------------------------------------------------------------
Reply to
Nico Coesel

It even says so:

QUESTION

I need a Linux version of the Keil compiler and other tools. Do you support this platform?

ANSWER

Currently, the Keil tools don't support any UNIX platforms and we have no plans to support Linux in the near or even medium distant future.

formatting link

It seems strange that they wouldn't even port the command line tools to Linux, so they could run under Eclipse or in a bare 'make' environment. The amount of work should be trivial.

Reply to
Arlet Ottens
