8051 on-chip debugging

I don't suppose you are a Senior Engineer at Raisonance, as the LinkedIn entry for Bruno Richard says?

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

And I don't suppose you have any connection or interest in Keil or Hitex either?

We all know that Keil C for the 8051 is an excellent product, but you do sound like an evangelist yourself sometimes. Slagging off a company as you did in your last post is not what is expected from a professional engineer...

Regards,

Chris

Reply to
ChrisQ

Do you have a reference for that? Must admit that it's difficult to find performance ratings and comparisons for embedded processors in general, never mind compilers. In the past I have had to gather info from many sources to build an MCU performance table, but hadn't thought much about compiler comparisons.

Unbelievable, like they thought that no one would notice? As you say, probably not much difference in performance these days. The gcc team spend all their time on gcc and get funding from many sources, so I don't really see how any independent commercial toolchain vendor could bring the same resources to bear. As you imply, the differences may be more in the (maths?) libraries, but I have no direct experience of that.

With regard to the ongoing toolchain build saga, I have been writing the header files for the M3. I know it's all included from ST, and you have all the CMSIS (?) library stuff as well, but it's not to the house coding standard here, and writing the header files and peripheral register definitions gives you a good insight into the more subtle capabilities of the device, a lot of which you might miss if you just use the provided files. It's a bit time consuming, but it's something I always do, and a worthwhile investment that you only need to make once...
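Something along these lines, for illustration (the register layout and base address here are invented, not from any datasheet):

#include <stdint.h>

/* Hypothetical house-style definition of a UART-type peripheral.
   Offsets and base address are illustrative only. */
typedef struct
{
    volatile uint32_t SR;    /* 0x00: status register    */
    volatile uint32_t DR;    /* 0x04: data register      */
    volatile uint32_t BRR;   /* 0x08: baud rate register */
    volatile uint32_t CR1;   /* 0x0C: control register 1 */
} uart_regs_t;

#define UART1   ((uart_regs_t *)0x40013800u)   /* assumed base address */

Writing these out by hand is exactly where you notice the odd status bit or mode you would otherwise never read about.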

Regards,

Chris

Reply to
ChrisQ

No interest in Hitex other than I used to work for them over 6 years ago.

I can supply many compilers myself (over half a dozen brands including GCC and Keil)

I worked for many years as an engineer using Keil compilers. I have used them in depth on real high-integrity development and personally know many others who are pushing the 8051 family to its limits.

We have also validated compilers for safety critical use.

However, I have recently had to give support to someone using the Raisonance 8051 compilers who switched to Keil mid-project because of the problems. I expect Bruno will be familiar with them, though I don't think this is the place to detail all the problems or name the client.

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

Some time ago, I compared GCC with ARM ADS 1.2, and found that the ARM compiler generated about 10% smaller code, which was significant enough to use it (I had a 4KB code limit for that project). This was using Thumb mode for ARM7.

In the meantime, both the ARM and GCC compilers have improved, and I haven't done any more comparisons.

As far as the other properties, I preferred the GCC compiler, due to its much better extensions, better assembler syntax, and more powerful linker scripts.

Having to deal with the license server was also a royal pain when traveling with a laptop, and trying to get some work done without a good network connection.

Reply to
Arlet Ottens

Many commercial compiler licences do not permit the publishing of any benchmarks. I think it is particularly true of the American ones, as the US has a culture of advertising where direct (and usually negative) comparisons to competitors are permitted. In Europe this is not common, and in many places it is prohibited. This may explain how it has come about.

However... most commercial compiler companies usually manage to get hold of all their competitors compilers and run various benchmarks. The results are obviously confidential.

From what I have seen, apart from the usual Dhrystone and Whetstone, most run a lot of "real world" benchmarks compiling large applications etc.

The standard benchmarks are of little use unless you are building benchmarks. BTW, from personal experience, compiler companies do not specifically optimise compilers to be good at the Dhrystone/Whetstone type benchmarks. There is little point.

Many will have applications from some lead clients, with permission, that they have done tech support for. So there is a lot of real-world application benchmarking going on. It just does not get out into the public domain.

They tend to do "real world" benchmarks because anything else is artificial and of little real use. This is because the benchmarking is for internal use, for the compiler development teams to see how their compiler stands up in reality to others. No one has customers writing Dhrystone applications :-)

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

I don't have any references, but I believe it is quite common to have restrictions on publishing benchmarks, comparisons or reviews of tools without explicit permission - most commercial EULAs will include something to that effect. It is understandable - it is very hard to do a decent objective comparison and benchmark, so it is very easy to put tools in a bad light even if you don't intend to. Commonly used benchmark code is usually totally meaningless to embedded systems (who is going to use their embedded system to calculate Dhrystones?) and comes with meaningless restrictions (disallowing inlining, etc.). To be useful, you really need to test a compiler on your own code - and most suppliers can give you a demo or evaluation edition.

Still, it would be nice to see some clear independent comparative benchmarking for toolsets. As it is, most of what we see is either totally biased in favour of a sponsoring company, or anecdotal claims that are typically long out of date.

When it comes to a C-friendly architecture like ARM, you are right - gcc and "big name" commercial companies will produce code of similar quality. Different compilers will be slightly better at different code, and there will be variations in the balances between size and speed. Short sequences of code will typically generate almost identical code, but there will be bigger differences with larger blocks. But overall, the quality will be similar (especially since gcc got full inter-module optimisation support).

It is a different matter for compiler-unfriendly architectures, like the 8051 (for which there is no gcc port). Smaller processors benefit from more dedicated compilers, while gcc aims to support a wide range of targets.

It is also worth noting that although gcc is supported by many companies, much of that effort is targeted at non-embedded platforms. For example, Intel, AMD and Red Hat all provide money and developers, but their main interest is in gcc on x86 platforms. gcc on ARM will benefit to some extent from this work - many optimisations and features are platform independent. That is why gcc has more support for modern C and C++ standards than most commercial embedded toolchains - embedded gcc targets get it "for free" from the x86 world. But while ARM, CodeSourcery, and many other companies directly support arm-gcc, I don't expect there are as many people working on arm-gcc as there are, for example, on Keil's ARM tools.

Reply to
David Brown

As ChrisH says, it's in the license conditions of the commercial compilers. The big players anyway.

We used to have a guy on here who was extremely knowledgeable about the ARM compiler libraries and the CM3; I think he wrote a lot of them. In any case he regularly trashed the gcc maths library implementations, and I do tend to believe him, much as I like gcc.

That's exactly what I do too but it seems like an uphill task in this case. There's an awful lot of cruft in there. Still not decided which way to go. I think I will likely use the core_cm3 files but ignore all the stm ones.

What I usually find with these things is that there is a UART library, say, with a few dozen functions to set/reset/query every individual configuration bit, status flag and operating mode. And it still turns out to be just a lame polling driver rather than interrupt driven.

Whereas in my own code it ends up being a couple of config register writes and off you go. I don't *care* about IrDA mode or whatever. Yes, the compiler will remove a lot of the unused stuff, but it's still there to confuse me when I go bug hunting.
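For comparison, a minimal sketch of that style, with made-up register addresses and bit positions rather than any real part's map:

#include <stdint.h>

/* Illustrative register definitions - the real addresses and bit meanings
   come from the device's reference manual. */
#define UART_BRR  (*(volatile uint32_t *)0x40013808u)   /* baud rate register */
#define UART_CR1  (*(volatile uint32_t *)0x4001380Cu)   /* control register   */

static void uart_init_minimal(void)
{
    UART_BRR = 72000000u / 115200u;                  /* baud divisor, assuming a 72 MHz clock  */
    UART_CR1 = (1u << 13) | (1u << 3) | (1u << 2);   /* enable the UART, transmitter, receiver */
}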

It's very nice to have it all available as a reference though.

--

John Devereux
Reply to
John Devereux

In message , David Brown writes

Not that common... However there are several main players who do prohibit the publishing of benchmarks. So there is little point in the others publishing their own. Benchmarks are only of any use when comparing things.

Also, anyone will scream blue murder that the figures and/or testing are fixed if they come out at the bottom, or not at the top.

However all the benchmarks I have seen (these are internal as they usually include compilers where you can't publish benchmarks) are normally using not just the standard benchmarks but also a number of large reference programs. Often from customers.

This is very true. I always tell people to do this.

Never going to happen.

Well I have several sets of tests from 12 and 6 months ago. None of which I can publish or name, of course, as they are (a) confidential and (b) contain benchmarks for compilers you can't benchmark....

It is interesting to see the trends from one set to another.

Not according to the information I have.

This is true. The smaller the MCU the more specific the compiler has to be.

Yes it is a generic system not really suited to embedded systems

That is not the reason. C99 was not wanted or needed by the embedded community. Hence no mainstream embedded compiler supports C99, 12 years on. BTW having a C99 switch does not mean it supports C99!

As mentioned elsewhere the ISO C committee are moving to make some C99 features "optional"

It is not the quantity.

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

In message , John Devereux writes

Not all of them.... tends to be a N. American thing.

I thought there were various different sets of libraries for gcc?

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

Well I am not at all an expert but yes, in principle. Here is my understanding:

1) There are things that appear as "inline" code, like basic integer arithmetic and conversions between one integer type and another. These actually make up the vast majority of many projects' embedded code, and are what I mean when I say there is "little difference in compiler output".

2) Then there are functions that cannot easily be expressed inline, like floating point versions of basic arithmetic operations (on an integer processor). I think these are in libgcc, part of the FSF gcc distribution. I believe there are a couple of implementations that in principle you can choose from when building gcc yourself, but the distribution will normally set this for you automatically according to the selected target.

3) Then there are things like trig functions; these require explicit linking with a maths library, for example the libm component of newlib.

I think some of the commercial gcc vendors provide their own implementations of 3). Don't know about 2).
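A small sketch of the three categories, assuming an integer-only ARM target; the helper names in the comments are the usual libgcc/EABI ones, but the exact symbols depend on how the toolchain was built:

#include <math.h>

int   int_scale(int x)           { return x * 3 + 1; }   /* (1) plain integer code, generated inline      */
float soft_mul(float a, float b) { return a * b; }        /* (2) calls a libgcc helper, e.g. __aeabi_fmul
                                                                 or __mulsf3 on a soft-float target        */
float trig_sin(float x)          { return sinf(x); }      /* (3) needs a maths library, e.g. newlib's libm */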

--

John Devereux
Reply to
John Devereux

I am sure there are /some/ vendors who are honest enough to admit that they don't compete on code speed or size, but on other factors (support, price, etc.) :-)

I believe that if you are going to publish something, you should do your best to make it fair and impartial. Obviously this is very hard to do in comparative benchmarking, so most people should refrain from publishing such information - especially if they are in a position of authority.

But if you /do/ want to publish benchmarks, you should start off with some clear objectives - define the code you want to run and the target, which should ideally be a freely available simulator. (Free, so that anyone can duplicate your results if they have the compiler.) Your code should be freely available. Vendors should be contacted and asked if they have suggestions or comments. Multiple results should be published, such as optimising for size or speed, using "standard" compiler flags and using "specialised" compiler flags (perhaps suggested by vendors), and maybe also extra results for code variations that have the same resulting output but smaller or faster target code. There must be enough information and code that anyone can repeat the tests, and vendors must have the chance to make comments.

Obviously for internal benchmarks used by vendors, the rules are totally different - as is the purpose of the benchmarks.

Maybe not, but it would still be nice to see!

One big hurdle here is who is going to pay for it all? Doing a proper job would be a lot of work, even for just one target, and it must be kept up-to-date.

While I understand exactly where you are coming from, and why you cannot give any more information, any comments you make about compiler performance are about as useful as a new claim for cold fusion. Unless you can give concrete names and numbers that others can reproduce, it's just marketing waffle - not science.

I base my claims on my understanding of compilers, my fairly extensive use of gcc on many architectures, and my usage and testing of various commercial tools against gcc on several architectures. But like you I cannot give details, and so my claims are anecdotal (they are not marketing waffle, since I'm not selling anything).

But like you, I recommend people try out different tools before deciding. It is also important to look at other aspects of the tools - object code size and speed is only one point, and it is not often the most important issue when picking a tool.

gcc is a generic compiler system - that's true. It supports dozens of hosts, many dozens of target processors, and several languages (C, C++, Ada, Objective-C, Java, Go, etc.). That all has advantages and disadvantages.

But it is /not/ true that it is "not really suited to embedded systems". It was not originally /designed/ for embedded targets, but it works extremely well in that area. Much like the C language itself, really.

It is important to make some distinctions here - there are C99 features that few embedded developers have any use for (and some that no one has much use for). But there are others that /are/ useful. This is why very few (if any) toolchains can claim full support for C99, but most support a fair number of the C99 changes. The same applies to the C++ standards, and the work-in-progress C1X standard. gcc has always been a forerunner here, with early support for at least the most useful new language features. Some commercial toolchains are also good at supporting new features - others are pitiful.

Here are a few C99 features that /are/ useful to embedded programmers (of course, none are /needed/, strictly speaking):

  • inline functions
  • // comments
  • designated initializers and compound literals
  • mixed declarations and code

Other features such as improved wide chars, non-ASCII identifiers, and variable-length arrays are of less use.
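A small made-up example of the useful ones above, in an embedded-flavoured setting (the timer_cfg type and its fields are invented for illustration):

#include <stdint.h>

typedef struct
{
    uint32_t prescaler;
    uint32_t period;
    uint8_t  irq_enable;
} timer_cfg;

// C99 line comment
static inline uint32_t ticks_per_ms(uint32_t clk_hz)    // inline function
{
    return clk_hz / 1000u;
}

void timer_demo(uint32_t clk_hz)
{
    timer_cfg cfg = { .prescaler = 71u, .period = 999u, .irq_enable = 1u };  // designated initializers

    cfg.period += 1u;                     // a statement...
    uint32_t ms = ticks_per_ms(clk_hz);   // ...then a declaration: mixed declarations and code

    cfg = (timer_cfg){ .prescaler = ms, .period = cfg.period, .irq_enable = 0u };  // compound literal
    (void)cfg;
}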

While it is certainly true that all developers are not equally good, you are on /very/ shaky ground if you want to imply that the programmers working for Keil and other commercial toolchain developers are somehow "better" than those who work on the development of gcc. You have no possible way to justify such a claim - even if you have evidence to back it up, which I sincerely doubt, there is no way you can publish any such evidence. Let's just say that you personally think that certain unnamed commercial toolchain developers do a better job at compiler development than gcc - and leave it at that.

Reply to
David Brown

The large 16 and 32 bit systems of yesteryear are the microcontrollers of today. The assumption of 16 and 32 bit ints, and of efficient access via pointers and the stack, crippled C on 8 bit CPUs. But it works extremely well on an ARM, say. Furthermore, new parts are usually explicitly designed with C in mind. So C actually becomes a *better* fit as time goes by.

And so, by extension, does gcc, even though it too was created for "large" (non-embedded) systems.

[...]
--

John Devereux
Reply to
John Devereux

There are. I always use the C library which comes with MSPGCC (MSP430) for ARM projects. It is very small. If space constraints are really tight I switch to a small implementation of printf/sprintf.
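The sort of tiny routine that stands in for printf when space really matters might look like this sketch (uart_putc() is a hypothetical output hook, not anything from MSPGCC):

#include <stdint.h>

extern void uart_putc(char c);   /* assumed to exist elsewhere in the project */

static void print_uint(uint32_t v)
{
    char buf[10];                /* enough digits for 2^32 - 1 */
    int  i = 0;

    do {
        buf[i++] = (char)('0' + (v % 10u));
        v /= 10u;
    } while (v != 0u);

    while (i > 0)
        uart_putc(buf[--i]);     /* emit most significant digit first */
}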

--
Failure does not prove something is impossible, failure simply
indicates you are not using the right tools...
nico@nctdevpuntnl (punt=.)
--------------------------------------------------------------
Reply to
Nico Coesel

Trouble is, most of the world doesn't work that way.... :-( Also it is a moving target. Compilers evolve all the time.

Besides many choose (initial buy) price over anything else anyway.

No one trusts anyone to do that. They are also only going to show off favourable benchmarks.

Because the benchmarks I get to see on occasion are for internal use by the engineers, they are very fair and impartial, down to noting exactly which versions and settings were used for each compiler.

The second you publish those, someone will complain that the settings should be changed on their compiler for a particular test etc. (GCC people are some of the worst for this, so no one bothers.)

Sorry, I have to ask marketing/legal/corporate before I answer that.... :-)

Well, Keil tended to package most of the standard benchmarks (Dhrystone, Whetstone, sieve) with their compiler, and they all fitted into the eval version. However the meaningful "real world" benchmarks are large amounts of real code and not something you can give away.

OK... So you are going to give up a few weeks of your life to do it? Free of charge? Doing this is not a simple task.

Yes

Dream on.... maybe one day.... Perhaps a job for you when you have retired? If I am still sane I will give you a hand.

Absolutely. This is why there is no test suite worthy of the name for GCC. It takes an enormous amount of disciplined and diligent work.

Agreed. I know what I have seen, but the minute I name names (or numbers) it is the last time I get any information on anything.

Agreed

That last comment is disingenuous.

Absolutely. Always have done. There is no "best" compiler in a practical sense. A lot depends where you are coming from and where you are going to.

Agreed. Support is another. Also what other tools it will work with and what functionality you need. For example, it looks like the FDA (the US medical authority), bless them, have made up 30 years and are now likely to want not only static analysis but MC/DC code coverage on the target.

So the tools you will need for that are very different to those for a £4.99 mass-produced air freshener.

Agreed.

I would disagree on that. The technology it uses is very old and not really suited to many MCUs.

Agreed.

They are getting there.

I don't keep much of an eye on C++ at the moment (too busy)

Tell me about it... Causes us a lot of fun in MISRA with "effectively bool"

The problem with // comments is not the // but how the other end of the line was handled. However, many if not most C90 compilers had them before the standard did.
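One illustration of the line-ending issue (an invented example): line splicing is handled before comments are recognised, so a trailing backslash quietly extends a // comment onto the next line:

int flags = 0;   // clear all the flags \
flags |= 1;      // <- this whole line is part of the comment above and is never compiled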

Yes.

BTW I was arguing for the last decade that the ISO C panel should only add in and regularise things compilers were actually adding, on the grounds that if a feature really was required then compiler makers (commercial or otherwise) would add it, as there was a real market.

Hmmmm, not sure there. I know a lot of people doing embedded systems with screens and multiple non A-Z alphabets.... Chinese, Arabic, Cyrillic etc., all on the same device, and programming with non-European keyboards.

Well I am never a fan of variable memory allocation, variable number of function parameters and variable length arrays in any embedded system.

"Better"? No. There are some good people doing Open Source, but then again there are some appalling ones. Everyone plays. The commercial (closed shop) development teams can keep to a standard and have a much more controlled process.

That is true. However, it is like playing bridge where GCC is the dummy hand. That is open. So the commercial people can see what is happening in the GCC world, but not the other way around.

I would say most do. The reason is simply the practicality of running a large project. The commercial teams have a very rigorous system that must be used and they are in full control of it. This in itself makes a big difference.

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

No, because they are for internal use the benchmarks you see are /not/ fair or impartial. They are used by companies to find weak points or potential improvements in their own tools. A vendor should not care whether their competitors produce faster or slower code - except that if the competitor produces faster code for a sample, then they know their own compiler has scope for improvement.

Clearly the internal comparative benchmarks will have full information about the tools used, the settings, etc. But there is no interest in being fair or impartial - they are designed to show specific issues, not any sort of general objective grades. In contrast to marketing benchmarks, internal benchmarks will pretty much ignore cases when their own compiler is the best - but that is bias in itself.

The second you publish them, the vendor's own marketing department will throttle their engineers for releasing the data.

Ah, you are replying before reading my whole post :-)

I'd enjoy doing it - but I know I'd never get the time.

And by the time I retire, the commercial tool vendors will be out of business and we'll all be using gcc (or perhaps llvm). We can dig up this thread in 30 years time, to see if my prediction was correct...

No, the last comment was correct. I don't mean that my opinions here are worth more than yours or anyone else's - just that I have no commercial motivation.

I've heard that claim many times, with nothing behind it to suggest what is "old", or what "new" technology is available as an alternative, and certainly with no indication that there is actually anything wrong with the "old" technology.

I'm sure there is plenty of "old" stuff in gcc - but there is plenty of "new" stuff too. And I would be very surprised if the situation was any different with the established commercial players.

I see MISRA as a handicap to good embedded programming. 50% of its rules are good, 50% are bad, and 50% are ugly.

If your project requires MISRA, you use MISRA - and you use tools that support it and enforce it. If your project does not require MISRA, you are better without it.

I agree that the standards should only cover useful features that are also practical for tool vendors to implement. I haven't noticed it in C standards, but I think there are a few things in C++ standards that are actively removing or changing old standards features (one example being the deprecation of "throw" exception specifications).

Yes, but how often do you want to use non-ASCII characters in the program's identifiers?

Anyway, I did say "of less use", not of "no use" :-)

I've seen commercial development, and worked with programmers that are professionally employed (though not in the compiler development business). Most programmers are below average, and some are absolutely hopeless. As far as I can see, this applies to commercial closed source development just as much as to open source development - I don't imagine compiler development is any different.

And gcc development is very much a controlled process. Open source development does not mean anarchy, just because it is a different development model than you are used to.

Of course, some parts of gcc internals are so obscure that they are unintelligible to outsiders...

Would this be the same reason that Internet Explorer has such a better security reputation than Firefox, or that Windows is the preferred platform for supercomputers? /Some/ commercial development teams have rigorous systems, top-quality project management, tight testing regimes, etc. So do some open source development teams. Most software development is a bit more ad hoc, and for many groups it seems that good management just means they have a better idea what their bugs are, rather than fewer bugs.

You should also remember that there are hundreds of commercial embedded toolchains out there, for many dozens of currently realistic processors. There /are/ some vendors that are top of the range in terms of their development processes, their quality control, and the functionality of their tools. But you seem to think that /all/ commercial vendors are in that category, which is simply not true.

Reply to
David Brown

They run the same tests on all of them. Then they look at the results. The benchmarks are fair and impartial; it is pointless doing them otherwise.

That is why they run the benchmarks.

Really? Then you are talking to different development teams than I am.

There aren't any... that is where we came in. A lot of compiler licences do not permit the publishing of benchmarks.

Then ALL benchmarks are pointless.

The Engineers don't release them. Why would they?

Me neither but it would be good to have a go.

OK. I'll buy.

Neither do I in this discussion.

The problem is the commercial compilers are not going to give any hints to the GCC people....

It depends which targets you are talking about.

:-) It is guidance not a religion.

Maybe. What would you use in its place?

Yes.... If a tool vendor cannot implement a "cool" idea there is no point in having it.

Ask the Chinese, Russians etc

It depends where on the planet you are sitting.

Apart from APL I thought all computer languages were written in ASCII. I wonder what will happen when the Chinese invent a computer language.....

That is true. :-)

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

It can be time consuming, but it's just one of those things that, imo, must be done. Perhaps good for the soul of the machine, whatever. Just hacking what's there is rarely good enough and you are reminded of the fact every time you look at it.

The fact that peripheral definitions are in the header files has no effect on the compiled output unless you actually use them. As you can't really tell what will be needed in the future, or who will have to take over and maintain the code, it's probably best to include everything, tedious though it may be. To me, creating the header files is part of the overall process that makes you slow down a bit and think about what you are doing, rather than diving in and designing the system with the editor :-).

The eventual aim here is to have generic peripheral libraries that will suit any MCU, but it's no easy task. To get anywhere near the goal, you can use only a generic subset of the available functionality, like ports, timers, UARTs etc. Though it's possible to have generic drivers for port, timer and UART functions, there's always the register level programming that is difficult to abstract, so you always need some sort of driver / hardware abstraction layer at the lowest level. One thing that can be useful is to make things like device init data driven, e.g. an array of structures containing a register address or offset and the associated value can simplify things, as in the sketch below. The code to drive it is trivial as well. Just edit the data for a new peripheral.
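A minimal sketch of that data-driven approach, with placeholder addresses and values rather than a real register map:

#include <stddef.h>
#include <stdint.h>

typedef struct
{
    volatile uint32_t *reg;     /* register address       */
    uint32_t           value;   /* value to write on init */
} init_entry_t;

/* Placeholder table - real entries come from the device's register map. */
static const init_entry_t uart_init_table[] =
{
    { (volatile uint32_t *)0x40004400u, 0x0000200Cu },   /* e.g. a control register  */
    { (volatile uint32_t *)0x40004408u, 0x00000683u },   /* e.g. a baud rate divisor */
};

static void apply_init_table(const init_entry_t *tbl, size_t n)
{
    for (size_t i = 0u; i < n; i++)
        *tbl[i].reg = tbl[i].value;    /* the trivial driver: just write each entry */
}

/* e.g. apply_init_table(uart_init_table, sizeof uart_init_table / sizeof uart_init_table[0]); */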

The CMSIS library looks interesting, but, as you say, it's a reference, and I doubt if any of it will get used as is...

Regards,

Chris

Reply to
ChrisQ

Chris H wrote:
>> ... with the "old" technology.

Sorry to butt in here, but I wouldn't have thought they would need it. FWICS, gcc and other open source projects have some of the brightest minds in the business working on the code, and include academic contributions from all over the world.

I would be more curious about how much of the open source effort goes into commercial products without due attribution and without conformance to the open source licence rules. There have been cases in the past of commercial vendors having their knuckles rapped over this, but I suspect that it's only the tip of the iceberg...

Regards,

Chris

Reply to
ChrisQ

I have used the "generic" approach, it all sounds great in theory but I am not sure it added much for me in the end. One thing that has worked well so far is a set of generic "port I/O" macros that abstracts single pin I/O operations. So I can write some bit-banged chip driver once for multiple MCU families. And I have various hardware-independent libraries for CRCs, bit manipulation, graphics, fonts and so forth that are invaluable. But "hardware-independent hardware drivers" - not so much. It is all so interrelated, and on a modern microcontroller there are so many options 95% of which I will never use. It is not worth spending time writing functions for every possible feature "just in case".

--

John Devereux
Reply to
John Devereux
