What micros do you actually hate to work with?

You are entitled to your personal beliefs, be my guest.

This is one kind of compliment I have received for my work over the years. Another - related - one is the hanging jaw when people like you realize that I am right. I have received that more than once as well.

Dimiter

------------------------------------------------------
Dimiter Popoff
Transgalactic Instruments


------------------------------------------------------

larwe wrote:

Reply to
Didi

This isn't about me. It's just that I can only speak from my experience. So I use my experiences to tell my story. It's up to you to interpret it. But they are _real_ experiences, honest ones. That's their power and their weakness, too.

I think the need for assembly has changed a great deal. In fact, the entire landscape has changed, and not just the hardware and software tools. You now have programmers who chose to become programmers simply because they did a "toss up" between becoming an accountant or a programmer and programming came up heads, or because it seemed that there might be a little bit less stress in the job, etc. That wasn't the case when I was learning it. People going into it were there for reasons that reached deeply into their souls and who they were and, for most perhaps, there really weren't many viable options. It wasn't some kind of abstract job choice. The tools available, the IDEs, the wizards, the availability of things like VB (where you can just drop down a pre-made widget, add some small bit of easy code behind it, and never even have to worry about data integrity because all of the single-threading of your code is prehandled for you) have really opened up the profession to people who could not ever have considered it seriously beforehand.

This has meant that those who are deep as well as broad in this field are fewer and farther between. The field is open to so many more now, and that's because of all the successes of those who've explored new ideas and paved the way beforehand. But I think of embedded programmers not as the base of that pyramid you speak of, but as closer to the top of it. We need to understand hardware, numerical methods, mathematics, signal processing, physics, chemistry, sensors and transducers, closed-loop controls, and dynamical systems, as well as the usual spate of programming skills.

Maybe I'm wrong about that.

In any case, I don't know where I've said anything that your comment takes issue with. Maybe we agree on the points I made. But I can't tell because you didn't address yourself to any of them.

It's a fact that I've actually had one chance in my professional life to reproduce two exact systems, one in assembly and one in C. It's a fact that in doing so, I took about the same time. It's a fact that the compiled result was quite different. It's a fact that I was about the same person both times as the development points were only a year apart from each other and I was long skilled in both assembly and C both times. It's a fact that this means nothing about what others may do, but it does at least drive a small wedge in some of the arguments I've seen floating about here. Whether or not my single experience says much in general is another issue.

But at least I can actually speak from personal experience about an actual case study. I haven't seen any of that here, to date. No one seems to have been through my experience. So at least I have something to contribute here. I'm also willing to offer and participate in some specific coding challenges so that others can see what an experienced assembly coder can do, even when forced and hog-tied into C's paradigms of functions and parameters, etc. Even with all those boundaries imposed, restricting the ability of an assembly programmer to make meaningful choices, it's still the case that in some small cases compilers simply cannot perform the kinds of topological reformings that an assembler programmer can achieve. I think Walter will remember well one such example we discussed.

However, I will say that I deeply respect all of Walter's comments and listen to him, intently. He knows a lot of things I don't. But I have a lot of application experience, too, and some modest toe-dipping into developing compilers and DAG and basic-block optimizers. So I have some perspective. But just one person's.

So why the sour comment? I'd have thought any rational person would embrace the discussion of a real-world example case. There seem to be so few of them, you know.

Jon

Reply to
Jonathan Kirwan

2K to 12K? What went wrong?

Where did the compiler go wrong? I have heard the Microchip compiler is not the tightest, but six times bigger? The PIC16 reports in words, the PIC18 in bytes, but that still puts you at three times bigger.

My 8051 experience, while I was learning C, was maybe 20% more code. The code still fit into the original space. With that kind of hit I would have given up on C long ago.

Reply to
Neil

Yes portability between different CPU architectures is a good argument for assembler ;-)

Reply to
Ulf Samuelsson

C predates the x86. Remember, HLLs were made by ASM programmers who did not like the limits of ASM. Remember, their memory was more limited than that of some low-end single-chip micros today.

Reply to
Neil

Perhaps true in an absolute sense (if early internal development of C counts), but the 8086 and the first edition of the K&R C book were both introduced to the public at about the same time (1978). There is much in the 8086 architecture that is more Pascal-flavoured than C-flavoured, such as the full-strength subroutine call instructions, and segmented memory architecture. It isn't terribly likely that C was an influence (or at least not a strong one) on the design of the 8086. But I'm surmising: I wasn't there.

That's not strictly true, either, unless you consider effort required to produce code to be a limit. Fortran was mostly developed so that a wider pool of people could program the early computers. Ultimately HLLs produce the code using the same alphabet of instructions as assembly language, with perhaps a restricted vocabulary, or use of idiom. So the functionality limit is the other way around. [Re: idiom: you can't write a multiple-entry-point subroutine in C (or most HLLs), but this is/was quite common in assembly language.]
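
For illustration, the nearest standard-C approximation of that multiple-entry-point idiom is a single function with an entry selector whose switch cases deliberately fall through into the shared tail code. This is only a hedged sketch with invented names; it shows the shape of the idiom, not an equivalent of the assembly technique (which simply exposes two labels into one body of code):

```c
/* Two "entry points" into one body of code.  In assembly these would
   be two labels; in C we select the entry with a parameter and rely
   on deliberate case fall-through to reach the shared tail. */
enum entry { FROM_START, SKIP_INIT };

int counter_core(enum entry e, int x)
{
    switch (e) {
    case FROM_START:
        x = 0;          /* extra work done only by the primary entry */
        /* fall through */
    case SKIP_INIT:
        x += 1;         /* shared tail code, reached from both entries */
        break;
    }
    return x;
}
```

Calling `counter_core(FROM_START, anything)` runs the full body, while `counter_core(SKIP_INIT, x)` skips the initialization step, much as a second assembly entry label would.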

Sure. Most of them had disks and other on- or near-line storage, though, and people were quite prepared to wait for multi-phase processing to happen, mostly through overlays or batch programs.

--
Andrew
Reply to
Andrew Reilly

Yes. Partly because I was able to use special idioms for a state machine in assembly that depended on exact, fixed-sized short bits of code. In C, of course, they weren't able to be controlled in that way and secondly the mechanism I used simply wasn't within the capability of the compiler using switch() statements. I could provide more detail, but that was one of the larger parts of the problem. There actually are many others, though. It was interesting to look them over.
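
The computed-jump flavor of such a state machine can at least be gestured at in portable C with a table of handler functions. The sketch below is purely illustrative (the states and handlers are invented, not from the original project), and it still cannot do what the assembly version did: pin each handler into an exact, fixed-size slot of code:

```c
/* A portable analogue of a computed-jump state machine: index an
   array of handler pointers by the current state.  On a PIC16 the
   assembly version instead adds the state to PCL, jumping into
   fixed-size code slots -- a layout C gives you no control over. */
typedef int state_t;

static state_t st_idle(void) { return 1; }   /* idle    -> running */
static state_t st_run(void)  { return 2; }   /* running -> done    */
static state_t st_done(void) { return 2; }   /* done stays done    */

static state_t (*const handlers[])(void) = { st_idle, st_run, st_done };

state_t step(state_t s)
{
    return handlers[s]();   /* one indirect jump, no cascaded compares */
}
```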

I think C is great, actually. And in a lot of cases, it does darned good. And I have no qualms about using it. I just provided the only case I had, where I actually had an experience. It's likely that if I had other experiences like this on a variety of different micros, that some of them (probably most of them) wouldn't be so bad. This PIC case had some areas in my assembly code that no C compiler could consider approaching well, because (1) the PIC16 is a nasty bastard as far as C compilers go and (2) because I was able to imagine how to take advantage of some of its weird paging mechanisms (actually, a few of the ideas were already pointed out in Microchip examples I'd read) in ways that no C compiler could productively use without a lot of effort by the writer(s).

Also keep in mind that I came from a time, almost 35 years ago, when I was involved in writing an entire timesharing operating system for 32 users, providing 10k space for each user, complete floating point including transcendentals using Cheby-methods, a bevy of commands (save, run, etc.) and including a form of both pseudo-compilation and interpretation for BASIC and general assembly support. This on 16k with user response times to commands typically about as fast as you could enter them.

This means one learns whole new ways of thinking and imagining about how an application is put together and I'm just about dead certain that it would have been impossible to use a C compiler on that system -- partly because C didn't exist then, but even if Walter himself were to do one for this CPU and if some of the better C coders got together on this I'm pretty sure it couldn't have been done anywhere close to the limitations we faced there.

But if you are interested, I'd be happy to post that very short C routine for folks to try their compilers on and then explain why no C compiler around can possibly turn the routine around as a human can to achieve much better results by hand. It makes a clear case. However, as I'm sure others will point out, it won't represent the typical usage across an entire application -- the expression of some things is pretty much the same whether coded in assembly or C on most micros. I'd use switch() case as a good example of this kind of thing where C is pretty darned good, except that the PIC16 actually does provide an assembly-only mechanism for it that C cannot easily reach, so it would be true for many, but not all processors. I'm not sure if there are any cases true always, where C and assembly would always be neck and neck. But I believe that broadly speaking it is true for many practical things.

Like a lot of things, C compilers provide a floor of code generation below which programmers cannot sink. And it's a pretty decent, high level floor. You can pretty much rely on good results, most of the time, using commercial C compiler tools and GNU. And as Walter points out, there are some things that a compiler can do for you (it has perfect memory with near-instant, flawless recall and can do the same thing over and over and over with just the same fidelity the last time as the first time) that even the better assembly programmers struggle to achieve. No question.

One thing that is difficult in writing assembly is expression optimization. I may have coded some expression and associated subexpressions in assembly, found common factors and extracted them, etc., and then suddenly find that I have to recode them in some significant way because there's been a change that, in C, would be considered "no big deal." A C compiler will eat expressions and easily walk around finding subexpressions, promoting them out of loops, tracking down useful strength reductions and so on, like a bulldozer that never stops. Nice. And that's only one minor thing -- there are many, many other things that C compilers sporting good global analysis (stuff beyond basic blocks and DAGs) can do that are hard for us humans to reach.
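
As a rough illustration of what that bulldozer does, here are the "obvious" and the hand-transformed versions of the same loop in C. A decent compiler derives the second from the first automatically; in assembly you'd have to write (and re-write, after every change) the second form yourself. Function names are mine, purely for illustration:

```c
/* Naive form: the loop-invariant product k*4 and the multiply i*(k*4)
   are written out each pass, as a programmer naturally expresses it. */
int sum_scaled(const int *a, int n, int k)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += a[i] + i * (k * 4);   /* recomputed every iteration */
    return total;
}

/* Hand-optimized form: the invariant is hoisted out of the loop, and
   the multiply is strength-reduced into a running addition. */
int sum_scaled_opt(const int *a, int n, int k)
{
    int total = 0;
    int step = k * 4;                  /* invariant hoisted */
    int scaled = 0;
    for (int i = 0; i < n; i++) {
        total += a[i] + scaled;        /* multiply replaced by... */
        scaled += step;                /* ...a running addition   */
    }
    return total;
}
```

Both functions compute the same result; the point is that the compiler keeps them equivalent for you no matter how often the expression changes.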

There are lots of excellent reasons for compilers. I mean, heck, if there weren't I don't suppose compilers would ever have been written at all. In fact, quite the opposite is true -- there were a lot of driving reasons for developing languages and compilers for them and even when memory and CPU time was orders of magnitude scarcer than it is today, they still were very useful tools. That fact is made plain and manifest just by all the different non-assembly languages in the world and the number of truly great applications written in them. It couldn't have been done without them. I am deeply indebted to them and to those who helped bring them to the rest of us.

On the other hand, I can look at an algorithm and see ways to invert, turn inside out, or otherwise find nifty new topologies that a C compiler would never see. And I don't mean comparing apples and oranges, where I chose some better topology for assembly just to make a point. I'm talking about cases where the obvious and correct way to write the code in C has a non-obvious but equivalent assembly coding that is much better when you look at it and where you would have a hard time (read: impossible) expressing that new form in C even after you see it.

C compilers have a great deal of power, but to gain that power they also set up paradigms they must live within. And commercial C compiler writers spend a lot of their time doing things _other_ than writing crafty C compiler code generators -- they have to deal with fonts, colors, print previews, docking toolbars, wizards, and gosh knows what else. That stuff sells, but it takes the hog's share of their coding efforts these days. There are techniques documented in detail in old compiler books I have on my shelves going back into the 1970's that still are only found in rare compiler efforts, if at all. Most compiler writers do the same kind of time-vs-value balancing act that all of us do, and the result is that C compilers today do very little that wasn't already mainstream stuff in compilers existing 25 years ago. Despite the fact that there have been a lot of nifty Ph.D. theses since then with all manner of good ideas yet to be implemented. People don't demand them. But they do demand better IDEs. So guess where the time goes.

So C compilers provide a very good floor. And that's enough for most folks. It's enough for me, too, most of the time these days. I use assembly coding mixed with C a lot. (Rarely just C only, except for trivial programs, as there are usually at least a few places where assembly is warranted.) But I also do use assembly only once in a while where the application demands it for some reason or another.

I write on this simply because I think embedded programmers, some of the better folks out there in the programming world, shouldn't shun assembly completely. It's an important side-arm of sorts. And I wouldn't be without good experience and training using it. Recognizing that there is a place and time for assembly, though, takes nothing away from those using C. However, some in this discussion appear to imagine that in order to build up C, they have to undermine assembly and climb on top of its dead body to gain extra height and importance. And that is really going too far, I think, and worse it is beneath them. C is great enough and doesn't need to denigrate other approaches in order to raise its own stature. Too bad I have to see folks who feel that need, off and on, here.

Jon

Reply to
Jonathan Kirwan

It would be interesting to know what the results would be *now*. After all, compilers for brand new chips usually don't produce optimal code. (Decisions made based on outdated information are IMO a big problem with technical opinions. For instance, a lot of the prejudice against C++ is still based on benchmarks from first-generation compilers from the 1980s. Because it costs time, effort and money, the results are rarely revisited.)

-a

Reply to
ammonton

First off, I don't consider 4 years all that much time. At least in terms of C compilers. Most of them, where I've bothered looking in some detail, do little more than compilers did 25 years back. Some of the time spent in newer versions of the compilers will go into code generation, I'll grant. But not as much as you might imagine. If it were otherwise, I'd see a lot more in terms of optimization techniques than I actually do see. The years would accumulate. But they don't, not that much. I haven't checked the absolute latest of the Microchip C compiler for PIC16's, for example, but it probably still uses static compiler temporaries if I had to guess.

There were (and are) still some good reasons to avoid C++ in _some_ embedded projects. And the semantics of source code (and the necessarily generated code) isn't nearly as obvious as you may think it is. However, I also do pretty much subscribe to Stroustrup's points about holding closely to "zero or low execution time cost" semantics and a focus much more on compile-time features and less on run-time ones. So I agree, to a degree. I've given some specific examples about C++ semantics vs C here in this group a while back. You can look them up.

...

I'd like to know of other exact equivalent application case studies from the experiences of others, though. It's rare to get a chance for such comparisons and I've gotten only one in my life. But maybe there are some others. And their results will probably be quite different from my own and interesting in their own rights.

Jon

Reply to
Jonathan Kirwan

"Jonathan Kirwan" wrote in message news: snipped-for-privacy@4ax.com...

Well, I too am limited by my own experience. Perhaps I've seen too much ASM written by others, and those were not very impressive experiences. In a similar way, you would probably not be impressed by the makeovers/beefups in C that I did for those projects. Imagine what you would have thought if you were handed those projects in the first place. In my opinion it is a lot easier (again, *generally* speaking) to produce acceptable results(1) in C than it is in ASM. This difference may well fade away as you move more to the top of the pyramid. Which is where you are.

If I had to advise a newbie, I would point them to a C compiler first. If they work on a target with less than 8K, I would add the advice not to use printf. I would always advise not to use floats. I would also advise not to use wider variables than really necessary. Following such simple rules, chances are they will produce a very reasonable result. Perhaps they find themselves in a corner at a few points in their project, running out of steam. Okay, there's the 1% that experts like yourself have fewer problems with.

--
Thanks, Frank.
(remove 'q' and '.invalid' when replying by email)

(1) Acceptable results:
    + Good performance in speed
    + Good performance in functionality
    + Good performance in reliability
    + Code delivered in a reasonable amount of time
    + etcetera
Reply to
Frank Bemelman

Don't you lose accuracy when you do it this way? All these bits that get dropped off the right when you do the shifts could affect the last bit or two of the answer through carries.
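
For what it's worth, the usual fix in fixed-point code is to round rather than truncate: add half of the discarded range before the shift so the lost bits do carry into the result. A minimal sketch in C, assuming a Q8 format chosen purely for illustration:

```c
#include <stdint.h>

/* Truncating form: the low 8 bits of the product are simply dropped,
   so the result always rounds toward zero. */
uint16_t q8_mul_trunc(uint16_t a, uint16_t b)
{
    return (uint16_t)(((uint32_t)a * b) >> 8);
}

/* Rounding form: adding 128 (half of 2^8) before the shift rounds
   the result to nearest instead of always downward. */
uint16_t q8_mul_round(uint16_t a, uint16_t b)
{
    uint32_t p = (uint32_t)a * b;
    return (uint16_t)((p + 128) >> 8);
}
```

For example, multiplying 1/256 by 255/256 truncates to zero, while the rounding version keeps the last bit.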

Reply to
David Brown

Perhaps I'm an atypical assembler programmer, but I write code like that when I need small and fast assembly, and I don't think either the use of "swap" or the xor trick for register swapping is unusual. Sometimes I think it's a shame that C compilers for small micros are so much better than they were ten years ago - writing optimised assembly is such fun! What I *do* think is rather neat, is that the compiler uses such tricks.
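
For readers who haven't met them, the two tricks mentioned look like this in portable C (on many small micros each is a single instruction or a three-instruction sequence with no scratch register):

```c
/* xor swap: exchange two values without a temporary.  The aliasing
   guard matters -- xor-swapping a variable with itself zeroes it. */
void xor_swap(unsigned *a, unsigned *b)
{
    if (a != b) {
        *a ^= *b;
        *b ^= *a;   /* *b now holds the original *a */
        *a ^= *b;   /* *a now holds the original *b */
    }
}

/* The effect of a "swap" instruction (exchange nibbles) in plain C. */
unsigned char swap_nibbles(unsigned char v)
{
    return (unsigned char)((v << 4) | (v >> 4));
}
```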

Reply to
David Brown

If you read my other comments, I think you'll see I agree with this. C forms a very good floor in code generation, besides its many other advantages. I use assembly less, despite being probably still a bit more proficient today, because it is fairly easy to secure good compiler tools today on a wide variety of excellent micro options and because it is vastly easier today to also secure decent C coders and for other business reasons, too.

Well, that changes the subject some, doesn't it? Certainly if you retreat to this circumstance, I'd agree. Most colleges I'm aware of don't start their computer courses with assembly, but reserve that for the 2nd year courses. I think that makes sense.

Hehe. I've seen cases where the compiler siphons all that in, whether or not you use it. Some of the reasoning for this is the granularity of the linker and the library files. Microsoft had a seriously bad habit earlier on with some of their DOS compilers, in fact, to provide such lousy library granularity (plus weird static variable linkages that you had to keep, like it or not), that there were various text instructions running around on how to extract and re-assimilate their libraries to improve this problem.

Floats present a variety of problems. Code size being one I'd agree with. But it's not the only reason to avoid their use.

My recommendation is that embedded programmers be at least reasonably familiar with incorporating assembly as a part of a project and being able to write at least some useful code in assembly, need it or not. I also recommend, periodically, taking a look at the assembly code generated by various C code snips. Also take a look at the map files, once in a while. Be able to understand and play with the linker control files, too. Know the semantics of various types of object code segments and the way the linker can manage them. Know how to get around the tools, in other words. You won't need this often. But when you do, it will save some real time when you need it.

I agree that C is an excellent tool with usually very reasonable results.

Jon

Reply to
Jonathan Kirwan

There is another reason, beyond the others I've mentioned, why C compilers may not advance that much, at least in the embedded realm. Compiler vendors are ever trying to broaden their markets, or keep pace with new micro markets as they change, and not lose ground. So this means they are porting, adapting, etc. But not going around applying some hitherto unimplemented and esoteric technique. They may have gotten into the business because they like the idea of compiler code generation, but if they survive long in the business they must have learned to submerge and otherwise quell those urges.

They also have to do all the things related to good support, too. And that is nothing to sneeze at.

For companies that do NOT have to consume their efforts in those directions and/or who can afford the high cost of a large development staff, you _will_ see compiler code generation advances. For example, Microsoft did implement partial template specialization.

An example of a company that had a good team (though a bit overzealous in salting religious blurbs in their manuals) was MetaWare. This was a company started by Dr. DeRemer, of some LALR parsing fame dating back to around 1970, and his "side kick," Dr. Pennello. I didn't get a lot of time talking with Frank, but Tom was really wonderful and very quick and bright with compiler implementation details. They also included some really nice features (nested functions in C, coroutine support in C, and other non-standard stuff few consumers understand enough to care about) in their compiler tools. Very technical and capable, focused on compiler technology and features like few are, and... well... they are, of course, gone now.

One does what one must to survive. And for C compilers in the embedded space, that doesn't mean pumping most of your time into code generation the market isn't forcefully demanding, or into useful but odd-lot features.

A story I remember a VP at Intel telling me illustrates this point. Intel decided, at one time in the 1970's, that they could do these digital watches better and cheaper than anyone else could. After all, they were doing memory and had this IC thing down pretty well. I'm not sure whether Intel purchased Microma (my memory is shy on this point) or started it, but either way they owned it for a time. They did build watches and sold them. And they lost their shirts. Not because they weren't the better manufacturer or that they couldn't make them at a good price. It was because Intel's top management hadn't yet realized that the watch marketplace is a jewelry marketplace. People buy watches for a lot of reasons, but having the very best engineered digital product is way down on the list. At that time, digital was new, so people did buy just to have that "digital gizmo look" and they bought for other reasons. But it was a jewelry business and Intel had no experience in it to inform them about how to compete well, to market their product to the public and establish a name, to deal with the distribution chain properly, to understand the ebb and flow during the year, etc.

I am very much impressed with embedded compiler vendors and their efforts, almost each and every one. Don't get me wrong about that. They just have to focus on business. And the business isn't high quality code generation.

Jon

Reply to
Jonathan Kirwan

It is high quality code generation, in fact. I meant and should have written, "isn't bleeding edge code generation."

Jon

Reply to
Jonathan Kirwan

I'm with Jon on pretty much everything he wrote. C is the best choice for most applications, even though you can often get significantly smaller and faster code in assembly.

I find that the type of code where the choice of assembly or C makes the most difference is for small but complete applications. Sometimes particular routines can be re-written in assembly and are a lot faster (size seldom matters in such cases, as they are only a small part of the whole), but modern compilers are getting better at producing tighter code. The real difference is when you can leave behind the baggage of the C programming model, with its fixed calling conventions, fixed register allocations, and one-track procedural programming paradigm. In assembly, you are free to think as you want. You can write your routines with multiple entry or exit points, or use coroutines. You can dedicate registers to particular functions (I've written code where there is a timer interrupt every 40 clock cycles - you don't want to have to mess with context switches then).

I can fully believe the same application taking 2k in assembly and 12k in C, especially on an architecture like the PIC, which has a lot of scope for assembly tricks. I've had similar experiences myself, and had to use faster crystals and significantly more current for the C version of the code (slower code means less time asleep). Had Jon said he went from 20k assembly to 120k C, I'd be a lot more sceptical.
Reply to
David Brown

"Jonathan Kirwan" wrote in message news: snipped-for-privacy@4ax.com...

But even newbies learn quickly, once they find out that using floats slows things down. And let them scratch their head for a couple of minutes, wondering why 'if(fa==fb)' doesn't give a result as expected.
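
That float-equality surprise is easy to reproduce: decimal constants like 0.1 have no exact binary representation, so arithmetic that is exact on paper misses by an ulp or two in practice. The standard workaround is a tolerance comparison, sketched here with an arbitrarily chosen epsilon (there is no universally correct value):

```c
#include <math.h>

/* Compare two doubles within a caller-chosen absolute tolerance.
   A relative tolerance is often better for large magnitudes; this
   is the minimal version for illustration. */
int nearly_equal(double a, double b, double tol)
{
    return fabs(a - b) <= tol;
}
```

The classic demonstration: `0.1 + 0.2 == 0.3` is false in IEEE double arithmetic, while `nearly_equal(0.1 + 0.2, 0.3, 1e-9)` is true.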

Being stubborn helps too. Although that is more about getting it done rather than saving time ;)

I have great respect for the folks who build them.

--
Thanks, Frank.
(remove 'q' and '.invalid' when replying by email)
Reply to
Frank Bemelman

Are you sure they are not just yawning?

Cheers TW

Reply to
Ted

It is a very confused market. I see a lot of the performance of silicon manufacturers' compilers or low-cost compilers extrapolated as the performance of all compilers. Compiler development is a labor-intensive, detailed activity. We spend most of our time on the design and test of specific code generation algorithms.

w..

Reply to
Walter Banks

There is more than that, since we do not know if 0.457358479 is actually 0.45735847900000000000... or 0.457358479xxxxxx.

After all, many common decimal values, such as 0.1000..., cannot be exactly represented in a binary floating point format.
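
This is easy to see by printing the stored value with more digits than the decimal literal supplies: float and double each hold a different binary neighbour of 0.1, and neither is exact. A small illustration (helper names are mine):

```c
#include <stdio.h>

/* Return the value actually stored for decimal 0.1 in each format. */
double stored_as_float(void)  { return 0.1f; }  /* float's neighbour  */
double stored_as_double(void) { return 0.1;  }  /* double's neighbour */

/* Print with enough digits to expose the binary approximations. */
void show_stored(void)
{
    printf("float  0.1 -> %.20f\n", stored_as_float());
    printf("double 0.1 -> %.20f\n", stored_as_double());
}
```

Running `show_stored` shows the float neighbour sitting slightly above 0.1 and the double neighbour much closer to (but still not equal to) it.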

Paul

Reply to
Paul Keinanen
