8051 on-chip debugging

Trouble is, most of the world doesn't work that way.... :-( Also, it is a moving target: compilers evolve all the time.

Besides, many choose (initial purchase) price over anything else anyway.

No one trusts anyone to do that. They are also only going to show off favourable benchmarks.

Because the benchmarks I get to see on occasion are for internal use by the engineers, they are very fair and impartial, down to noting exactly which versions and settings were used for each compiler.

The second you publish those, someone will complain that the settings should be changed on their compiler for a particular test, etc. (GCC people are some of the worst for this, so no one bothers.)

Sorry, I have to ask marketing/legal/corporate before I answer that.... :-)

Well, Keil tended to package most of the standard benchmarks (dhrystone, whetstone, sieve) with their compiler, and they all fitted into the eval version. However, the meaningful "real world" benchmarks are large amounts of real code and not something you can give away.
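(For anyone who has not met them, the sieve is typically a small kernel along these lines - a sketch in the spirit of the BYTE magazine original, not the exact code any vendor ships:)

/* Minimal sieve-of-Eratosthenes benchmark kernel (illustrative only). */
#include <stdio.h>

#define SIZE 8190

static char flags[SIZE + 1];

int main(void)
{
    int i, k, count = 0;

    for (i = 2; i <= SIZE; i++)
        flags[i] = 1;

    for (i = 2; i <= SIZE; i++) {
        if (flags[i]) {                        /* i is prime */
            for (k = i + i; k <= SIZE; k += i)
                flags[k] = 0;                  /* strike out multiples */
            count++;
        }
    }

    printf("%d primes\n", count);
    return 0;
}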

OK... So you are going to give up a few weeks of your life to do it? Free of charge? Doing this is not a simple task.

Yes

Dream on.... maybe one day.... Perhaps a job for you when you have retired? If I am still sane I will give you a hand.

Absolutely. This is why there is no test suite worthy of the name for GCC. It takes an enormous amount of disciplined and diligent work.

Agreed. I know what I have seen, but the minute I name names (or numbers) it is the last time I get any information on anything.

Agreed

That last comment is disingenuous.

Absolutely. Always have done. There is no "best" compiler in a practical sense. A lot depends where you are coming from and where you are going to.

Agreed. Support is another. Also what other tools it will work with and what functionality you need. For example, it looks like the FDA (the US medical authority), bless them, have caught up 30 years and are now likely to want not only static analysis but MC/DC code coverage on the target.

So the tools you will need for that are very different from those for a £4.99 mass-produced air freshener.

Agreed.

I would disagree on that. The technology it uses is very old and not really suited to many MCUs.

Agreed.

They are getting there.

I don't keep much of an eye on C++ at the moment (too busy)

Tell me about it... It causes us a lot of fun in MISRA with "effectively bool".

The problem with // comments is not the // itself but how the other end of the line is handled. However, many if not most C90 compilers had them before the standard did.
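A small illustration of the line-end issue (my own example, assuming a compiler that does backslash-newline splicing before comment removal, as the standard requires):

#include <stdio.h>

int main(void)
{
    int x = 1;

    /* The trailing backslash below splices the next source line into
       the // comment, so the assignment silently disappears. */

    // see C:\project\build\
    x = 2;

    printf("%d\n", x);   /* prints 1, not 2 */
    return 0;
}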

Yes.

BTW, I was arguing for the last decade that the ISO C panel should only add in and regularise things compilers were actually implementing, on the grounds that if a feature really was required then compiler makers (commercial or otherwise) would add it, as there was a real market.

Hmmmm, not sure there. I know a lot of people doing embedded systems with screens and multiple non-Latin alphabets.... Chinese, Arabic, Cyrillic, etc., all on the same device, and programming with non-European keyboards.

Well, I am never a fan of variable memory allocation, variable numbers of function parameters, or variable-length arrays in any embedded system.

Better? No. There are some good people doing Open Source, but then again there are some appalling ones. Everyone plays. The commercial (closed-shop) development teams can keep to a standard and have a much more controlled process.

That is true. However, it is like playing bridge where GCC is the dummy hand: it is open. So the commercial people can see what is happening in the GCC world, but not the other way around.

I would say most do. The reason is simply the practicality of running a large project. The commercial teams have a very rigorous system that must be used and they are in full control of it. This in itself makes a big difference.

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

No, because they are for internal use the benchmarks you see are /not/ fair or impartial. They are used by companies to find weak points or potential improvements in their own tools. A vendor should not care whether their competitors produce faster or slower code - except that if the competitor produces faster code for a sample, then they know their own compiler has scope for improvement.

Clearly the internal comparative benchmarks will have full information about the tools used, the settings, etc. But there is no interest in being fair or impartial - they are designed to show specific issues, not any sort of general objective grades. In contrast to marketing benchmarks, internal benchmarks will pretty much ignore cases when their own compiler is the best - but that is bias in itself.

The second you publish them, the vendor's own marketing department will throttle their engineers for releasing the data.

Ah, you are replying before reading my whole post :-)

I'd enjoy doing it - but I know I'd never get the time.

And by the time I retire, the commercial tool vendors will be out of business and we'll all be using gcc (or perhaps llvm). We can dig up this thread in 30 years' time, to see if my prediction was correct...

No, the last comment was correct. I don't mean that my opinions here are worth more than yours or anyone else's - just that I have no commercial motivation.

I've heard that claim many times, with nothing behind it to suggest what is "old", or what "new" technology is available as an alternative, and certainly with no indication that there is actually anything wrong with the "old" technology.

I'm sure there is plenty of "old" stuff in gcc - but there is plenty of "new" stuff too. And I would be very surprised if the situation was any different with the established commercial players.

I see MISRA as a handicap to good embedded programming. 50% of its rules are good, 50% are bad, and 50% are ugly.

If your project requires MISRA, you use MISRA - and you use tools that support it and enforce it. If your project does not require MISRA, you are better without it.

I agree that the standards should only cover useful features that are also practical for tool vendors to implement. I haven't noticed it in C standards, but I think there are a few things in C++ standards that are actively removing or changing old standards features (one example being the deprecation of "throw" exception specifications).

Yes, but how often do you want to use non-ASCII characters in the program's identifiers?

Anyway, I did say "of less use", not "of no use" :-)

I've seen commercial development, and worked with programmers that are professionally employed (though not in the compiler development business). Most programmers are below average, and some are absolutely hopeless. As far as I can see, this applies to commercial closed source development just as much as to open source development - I don't imagine compiler development is any different.

And gcc development is very much a controlled process. Open source development does not mean anarchy, just because it is a different development model than you are used to.

Of course, some parts of gcc internals are so obscure that they are unintelligible to outsiders...

Would this be the same reason that Internet Explorer has such a better security reputation than Firefox, or that Windows is the preferred platform for supercomputers? /Some/ commercial development teams have rigorous systems, top-quality project management, tight testing regimes, etc. So do some open source development teams. Most software development is a bit more ad hoc, and for many groups it seems that good management just means they have a better idea what their bugs are, rather than fewer bugs.

You should also remember that there are hundreds of commercial embedded toolchains out there, for many dozens of currently realistic processors. There /are/ some vendors that are top of the range in terms of their development processes, their quality control, and the functionality of their tools. But you seem to think that /all/ commercial vendors are in that category, which is simply not true.

Reply to
David Brown

They run the same tests on all of them. Then they look at the results. The benchmarks are fair and impartial; it is pointless doing them otherwise.

That is why they run the benchmarks.

Really? Then you are talking to different development teams than I am.

There aren't any... that is where we came in. A lot of compilers do not permit the publishing of benchmarks.

Then ALL benchmarks are pointless.

The Engineers don't release them. Why would they?

Me neither, but it would be good to have a go.

OK. I'll buy.

Neither do I in this discussion.

The problem is the commercial compilers are not going to give any hints to the GCC people....

It depends which targets you are talking about.

:-) It is guidance, not a religion.

Maybe. What would you use in its place?

Yes.... If a tool vendor cannot implement a "cool" idea there is no point in having it.

Ask the Chinese, Russians, etc.

It depends where on the planet you are sitting.

Apart from APL, I thought all computer languages were written in ASCII. I wonder what will happen when the Chinese invent a computer language.....

That is true. :-)

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

Simon Clubley wrote:

I do it anyway and often in my own time. It gives a head start in terms of productivity on a project from day one. It's just part of the learning process for a new machine. I was going to say it's good for the soul, but that might be a bit too extreme :-). Put in the early legwork to save time and effort later.

Haven't used the AT91, but the CM3 loads the initial stack pointer and PC from the vector table base on reset, much like the 68000 did and, imnsho, is the way it should be done. One of the things I never liked about some of the early ARM devices was the flaky interrupt handling. The integrated interrupt controller and classical vector-table approach make it a much more attractive and cleaner device in all sorts of ways. It starts to look more and more 68000-ish every day: two stack pointers, supervisor and user modes, etc., only much, much faster.
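For what it's worth, the table looks something like this in gcc syntax (a sketch only; the section name and the _estack linker symbol are assumptions that depend on your linker script):

/* Minimal Cortex-M3 vector table. On reset the hardware loads SP from
   word 0 and PC from word 1 of this table - no startup assembler needed. */

extern unsigned long _estack;   /* top of stack, defined in the linker script */

void Reset_Handler(void);       /* C entry point of the startup code */

__attribute__((section(".isr_vector"), used))
void (* const vector_table[])(void) = {
    (void (*)(void))&_estack,   /* 0: initial stack pointer */
    Reset_Handler,              /* 1: initial program counter */
    /* ... exception and interrupt handlers follow ... */
};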

You always need to look at the vendor code to confirm what you *think* you know about the machine, but the examples are rarely usable for anything serious. More than likely intentionally simple to aid understanding...

Regards,

Chris

Reply to
ChrisQ

It can be time consuming, but it's just one of those things that, imo, must be done. Perhaps good for the soul of the machine, whatever. Just hacking what's there is rarely good enough and you are reminded of the fact every time you look at it.

The fact that peripheral definitions are in the header files has no effect on the compiler unless you actually use them. As you can't really tell what will be needed in the future, or who will have to take over and maintain the code, it's probably best to include everything, tedious though it may be. To me, creating the header files is part of the overall process that makes you slow down a bit and think about what you are doing, rather than diving in and designing the system with the editor :-).

The eventual aim here is to have generic peripheral libraries that will suit any MCU, but it's no easy task. To get anywhere near the goal, you can use only a generic subset of the available functionality: ports, timers, uarts and the like. Though it's possible to have generic drivers for port, timer and uart functions, there's always the register-level programming that is difficult to abstract, so you always need some sort of driver / hardware abstraction layer at the lowest level. One thing that can be useful is to make things like device init data driven, e.g. an array of structures containing a register address or offset and the associated value can simplify things. The code to drive it is trivial as well; just edit the data for a new peripheral.
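Something like this, for instance (a sketch; the register addresses are invented for illustration):

#include <stdint.h>

/* One table entry: a register address and the value to write to it. */
struct reg_init {
    volatile uint32_t *reg;
    uint32_t           val;
};

/* The trivial driver: walk the table and program each register. */
static void apply_init(const struct reg_init *tbl, unsigned n)
{
    unsigned i;
    for (i = 0; i < n; i++)
        *tbl[i].reg = tbl[i].val;
}

/* Example table for a hypothetical uart - for a new peripheral you
   edit the data, not the code. */
static const struct reg_init uart_init[] = {
    { (volatile uint32_t *)0x40001000, 0x83 },   /* line control (invented) */
    { (volatile uint32_t *)0x40001004, 0x1A },   /* baud divisor (invented) */
    { (volatile uint32_t *)0x40001008, 0x01 },   /* enable (invented)       */
};

/* Usage: apply_init(uart_init, sizeof uart_init / sizeof uart_init[0]); */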

The cmsis library looks interesting, but as you say, as a reference - I doubt if any of it will get used as is...

Regards,

Chris

Reply to
ChrisQ

Chris H wrote:
>> ... with the "old" technology.

Sorry to butt in here, but I wouldn't have thought they would need it. FWICS, gcc and other open source projects have some of the brightest minds in the business working on the code, and include academic contributions from all over the world.

I would be more curious about how much of the open source effort goes into commercial products without due attribution and without conformance to the open source licence rules. There have been cases in the past of commercial vendors having their knuckles rapped over this, but I suspect that it's only the tip of the iceberg...

Regards,

Chris

Reply to
ChrisQ

I have used the "generic" approach; it all sounds great in theory, but I am not sure it added much for me in the end. One thing that has worked well so far is a set of generic "port I/O" macros that abstract single-pin I/O operations, so I can write a bit-banged chip driver once for multiple MCU families. And I have various hardware-independent libraries for CRCs, bit manipulation, graphics, fonts and so forth that are invaluable. But "hardware-independent hardware drivers" - not so much. It is all so interrelated, and on a modern microcontroller there are so many options, 95% of which I will never use. It is not worth spending time writing functions for every possible feature "just in case".
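To give the flavour of the port I/O macros (my own sketch; names and addresses invented, and each MCU family gets its own version of the mapping layer):

/* Mapping layer for one hypothetical MCU family: */
#define GPIO_OUT  (*(volatile unsigned long *)0x40020000)
#define GPIO_DIR  (*(volatile unsigned long *)0x40020004)

#define PIN_DIR_OUT(bit)  (GPIO_DIR |=  (1UL << (bit)))
#define PIN_HIGH(bit)     (GPIO_OUT |=  (1UL << (bit)))
#define PIN_LOW(bit)      (GPIO_OUT &= ~(1UL << (bit)))

/* Driver code that never touches the hardware directly, so it ports
   by swapping the mapping layer for another family's version: */
#define SCK_PIN   2
#define MOSI_PIN  3

static void spi_send_bit(int b)
{
    if (b)
        PIN_HIGH(MOSI_PIN);
    else
        PIN_LOW(MOSI_PIN);
    PIN_HIGH(SCK_PIN);            /* clock the bit out */
    PIN_LOW(SCK_PIN);
}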

--

John Devereux
Reply to
John Devereux

I'm under the same impression. A commercial tool always has a limited budget, and therefore the quality of the solution - in number of bugs and in innovation - is limited to what is economically viable with the workforce at hand. Open source has far fewer problems with these limits. If a project is successful, better programmers will step in. If not, development will halt. Kinda like evolution.

OpenOCD is a nice example. I needed faster loading for MIPS platforms, so I optimized the MIPS support by approx 30%. Someone else came along and optimized it even further.

--
Failure does not prove something is impossible, failure simply
indicates you are not using the right tools...
nico@nctdevpuntnl (punt=.)
--------------------------------------------------------------
Reply to
Nico Coesel

I'd rather do that as I go along. Sometimes you're lucky. I like the way the header files are organized for NXP's LPC2000 series, except for some inconsistent naming conventions between different family members. However, for the LPC1000 (Cortex) series they used structs, which I don't like (too much can go wrong), and it killed existing code. I ended up converting most of the LPC2000 headers to the LPC1000.
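To illustrate the two styles (a sketch from memory - check the real headers; the addresses here are illustrative):

/* LPC2000-style headers: one macro per register. */
#define U0THR  (*(volatile unsigned long *)0xE000C000)
#define U0LSR  (*(volatile unsigned long *)0xE000C014)

/* LPC1000/CMSIS-style headers: a struct overlaid on the peripheral. */
typedef struct {
    volatile unsigned long THR;       /* offset 0x00 */
    unsigned long RESERVED[4];
    volatile unsigned long LSR;       /* offset 0x14 */
} UART_TypeDef;

#define UART0  ((UART_TypeDef *)0x4000C000)

/* The same write, two spellings:  U0THR = c;  vs.  UART0->THR = c;
   With the struct form a wrong member offset silently hits a different
   register, which is the "too much can go wrong" part. */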

The cmsis library is not bad as it is. It's one of the very few times I actually kept code which came from NXP's website. Keil wrote a lot of generic drivers for the LPC2000 series, but it's all a bunch of incomplete, overcomplicated and flaky crap.

--
Failure does not prove something is impossible, failure simply
indicates you are not using the right tools...
nico@nctdevpuntnl (punt=.)
--------------------------------------------------------------
Reply to
Nico Coesel

The support functions in libgcc are very easy to replace. The compiler-provided versions are weakly linked, so a strongly-linked replacement function will simply override the default one. The only real problem is getting to grips with the GCC instruction pattern names so you know which you want to replace.
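For example, something like this (a sketch; __mulsi3 is the real libgcc name for the 32-bit multiply helper, called on targets without a hardware multiply instruction):

/* A strong definition of __mulsi3 overrides libgcc's weak default
   wherever the compiler emits a call to it. */
long __mulsi3(long a, long b)
{
    /* Plain shift-and-add, just to show the hook - a replacement you
       actually cared about would be hand-tuned assembly. */
    unsigned long ua = (unsigned long)a;
    unsigned long ub = (unsigned long)b;
    unsigned long r  = 0;

    while (ub) {
        if (ub & 1)
            r += ua;
        ua <<= 1;
        ub >>= 1;
    }
    return (long)r;
}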

-a

Reply to
Anders.Montonen

Cool, I didn't know that. So if I found a particular function was too slow, I could just put my own version in my code and it would get used instead.

--

John Devereux
Reply to
John Devereux

@Chris:

I'm not slagging off Keil. I said: "The Raisonance toolkit has the same level of performance/features, but has excellent support and is less expensive!" Keil is a very respectable company; I don't want to bash them. What I meant is that there are some limitations in their model.

Bruno

Reply to
Bruno Richard

In message , Bruno Richard writes

That I don't believe. The Keil system supports virtually all the 8051s out there. Its performance as a compiler I have not seen beaten anywhere. Its simulator is generally accepted as the best in the business.

Yes it costs less.

What limitations?

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

Exactly. It's also an easy way to take advantage of extra features in customized chips (e.g. division peripherals for CPUs that normally use software division).
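Along the lines of the __mulsi3 example above, a hardware divider could be hooked in like this (everything here - names, addresses, the synchronous behaviour - is invented for illustration):

/* Hypothetical memory-mapped hardware divider registers. */
#define DIV_NUM   (*(volatile unsigned long *)0x40010000)
#define DIV_DEN   (*(volatile unsigned long *)0x40010004)
#define DIV_QUOT  (*(volatile unsigned long *)0x40010008)

/* Strong definition overrides libgcc's software __udivsi3, so every
   32-bit unsigned division in the program now uses the peripheral. */
unsigned long __udivsi3(unsigned long num, unsigned long den)
{
    DIV_NUM = num;
    DIV_DEN = den;
    return DIV_QUOT;    /* assumes the divider completes synchronously */
}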

-a

Reply to
Anders.Montonen

In message , Albert van der Horst writes

From being involved in independent verification of compilers. Yes. We were discussing compilers.

Then your definitions are wrong :-)

--
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\
\/\/\/\/\ Chris Hills  Staffs  England     /\/\/\/\/
\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Reply to
Chris H

You're suggesting in the background that a commercial project is better at rigorous control. A dogma.

This is simply not true by definition. Have you ever looked into Debian? That may be the largest project in the world, barring military ones. I've never worked in any commercial environment with control as tight and effective as Debian's. Bureaucratic burdens, yes. Effective quality, no.

Groetjes Albert

--

--
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst
Reply to
Albert van der Horst

Perhaps you could provide evidence to explain why, otherwise, it's just another opinion :-)...

Regards,

Chris

Reply to
ChrisQ

I first got turned on to Linux in the mid-1990s. I was looking around at various distros, and finally settled on Slackware, because I liked the name - hey, I'm a Slacker, it's only appropriate, right? ;-P

I've been 100% satisfied ever since.

Cheers! Rich

Reply to
Rich Grise

Dear all,

thanks so far for contributing to this fruitful discussion. I finally bought the MDE 8051 Trainer Board populated with the Maxim DS89C4xx 8051 derivative. It can be programmed via HyperTerminal by means of a hex file. Since this is my first encounter with microcontrollers, let me ask you for advice on how to generate a hex file for this device. Is there any freeware or open source tool for this purpose, or how should I proceed? Thank you.

"Michael Kellett" schrieb im Newsbeitrag news: snipped-for-privacy@bt.com...

Reply to
Schueler

I don't want to start another battle here about terminal emulators. Everyone seems to have their favourite - but there is a solid consensus that HyperTerminal is the worst option. I'd recommend you get Tera Term Pro - it's free, and works well. If you want to know what's wrong with HyperTerminal, or what alternatives there are, /please/ search the comp.arch.embedded archives before posting anything on the subject. And if anyone else wants to recommend a different program, and really feels it is worth the time and effort to go through the subject /again/, please start a new thread.

As for how to generate a hex file, you get that from your compiler. The top-class compiler for the 8051 is Keil, with a matching price. There are a number of mid-level commercial tools at varying prices and functionality (in particular, they vary as to which 8051 devices they support directly). Then there is the open source SDCC. My understanding (without having used any of these tools) is that SDCC generates good, sensible code, but is not nearly as clever as Keil at re-using and merging memory. But then, if you were interested in writing efficient code that runs quickly, you wouldn't be using an 8051 in the first place.
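If you try the SDCC route, a minimal session looks something like this (a sketch only - check the SDCC manual for the options your setup needs; the DS89C4xx serial loader just wants the resulting Intel hex file):

/* blink.c - minimal SDCC program for an 8051-class part.

   Build (SDCC targets the 8051 by default):
       sdcc blink.c                 - produces blink.ihx
       packihx blink.ihx > blink.hex
   Then send blink.hex to the board via the serial loader. */

#include <8051.h>      /* standard 8051 SFR definitions shipped with SDCC */

void main(void)
{
    while (1) {
        P1 ^= 0x01;    /* toggle P1.0 */
        /* add a delay loop here for a visible blink */
    }
}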

Be prepared for another battle here, by the way. There will be people who reject the idea of using anything other than Keil as heresy and false economy, and others that recommend different tools. After all, if a tool is good enough, then it is good enough - you don't need the best, you only need something that will do the job required.

Reply to
David Brown
