What micros do you actually hate to work with?

All too true. I get a little depressed every time I see complaints about the complexity of make, demanding that an IDE hide all the build details. And every compiler comes with its own incompatible IDE; at least I can ignore those and use my standard editor and makefiles. But when I think of the effort wasted on the IDE that could have gone into the compiler/linker instead...

Yeah, and I used to walk 10 miles to and from school every day in shoulder-high snow ... :)

Robert

Reply to
Robert Adsett

Actually you do only half of what the statement you quote does: it writes a 0x3A0D, and the line also specifies the size of the write and the register post-increment by 2. Your C translation will not work.
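Just to spell it out, a minimal C sketch of what doing all of that would take (the function and pointer names are only for illustration):

    #include <stdint.h>

    uint16_t *emit(uint16_t *p)      /* p must point to a 16-bit type */
    {
        *p++ = 0x3A0D;               /* 16-bit write, then p advances by 2 bytes */
        return p;                    /* hand back the post-incremented pointer   */
    }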

But I never said VPA takes fewer characters to type than C to write a program. It is just a much less restricting language, and it is sufficiently compact to be convenient. This makes it more efficient. Granted, programming is for people who have a linguistic talent. HLLs make it easier for those without it, no doubt about that.

Dimiter

------------------------------------------------------ Dimiter Popoff Transgalactic Instruments

formatting link

------------------------------------------------------

Wilco Dijkstra wrote:

Reply to
Didi

Hello David,

Yes, but I have no choice because the target device doesn't offer a HW multiplier and each of these routines must finish within 30-40 clock cycles. Increasing coefficient granularity is one of the trade-offs in such cases. That is why structures such as wave digital filters are used on those projects. There you can get away with coarser coefficients because they will only cause a slight penalty in the stop band, and this can be muffled by a pre-filter (and then running the steep filter after decimation).

--
Regards, Joerg

http://www.analogconsultants.com
Reply to
Joerg

Hello Ulf,

:-)))

However, it seems like the use of intrinsics can put a serious crimp into C portability. Then again, we HW guys are used to that. Portability between CAD programs is de facto non-existent, despite all that EDIF "effort" by that industry. The only thing that's really portable there is the end result, the Gerber files.

--
Regards, Joerg

http://www.analogconsultants.com
Reply to
Joerg

Actually what you said (see above) is that C was more difficult to understand, "a hieroglyph based language" as opposed to "an alphabet based" one. His example was a pretty effective counter to that claim, IMO. Your correction actually amplifies his point.

No language is clear to those not versed in it.

Robert

Reply to
Robert Adsett

The following is a comment from our core compiler tools sources. Constant multiplies on processors that do not have a mul instruction use Horner polynomials. This is done on some of the PIC and many of the 6502-based processors used in high-volume consumer products.

{ Inline multiply implemented with Horner's polynomial decomposition. This requires no temporary space and only uses shifts and adds }
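To make that concrete, here is a minimal sketch (mine for this post, not lifted from our sources) of what such a decomposition expands to for one assumed constant, 45 (binary 101101). Walking the constant's bits from the top, each bit costs one shift plus, if the bit is set, one add, and nothing beyond the accumulator is needed:

    /* Sketch: x * 45 via Horner decomposition of 101101b - shifts and adds only */
    unsigned mul45(unsigned x)
    {
        unsigned acc = x;          /* leading 1 bit of the constant        */
        acc <<= 1;                 /* bit 0: shift only                    */
        acc = (acc << 1) + x;      /* bit 1: shift, then add x             */
        acc = (acc << 1) + x;      /* bit 1                                */
        acc <<= 1;                 /* bit 0                                */
        acc = (acc << 1) + x;      /* bit 1: acc is now 45 * x             */
        return acc;
    }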

We divide with inverted multiplies on processors where it saves cycles.
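As a small illustration of the inverted-multiply idea (again a sketch, assuming 8-bit unsigned operands, not the actual generated code): a divide by 10 can be replaced by a multiply with a scaled reciprocal, 205/2048 which is close enough to 1/10 to be exact for every 8-bit input, plus a shift:

    /* Sketch: unsigned 8-bit divide by 10 as multiply by 205, shift right 11 */
    unsigned char div10(unsigned char x)
    {
        return (unsigned char)(((unsigned int)x * 205u) >> 11);  /* exact for 0..255 */
    }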

w..

Reply to
Walter Banks

I never said that.

Another way of putting it is to say it is less restrictive. A somewhat metaphoric way to put it is to say that in C you have to put together some readily available pictures, while in assembler you can write text. If the text-based language is good enough (some are, others are not), this makes the writer more efficient. Another metaphoric explanation is to say that using an alphabet takes more literacy than copying pictures, so fewer people would be good at it, but those who are are more efficient. And to go one step further, not all literate people are writers, and even fewer of those are successful...

Indeed. And I agree that it takes less effort to become versed in C or Pascal or Basic than in assembler. I just claim that someone who is good enough at a good enough assembler/assembler-like language (my particular example was my VPA, see

formatting link
, I put some examples there explicitly for that thread) is way more efficient than someone using C for projects taking >= 2 weeks to program. For shorter ones, C might be more efficient, e.g. Jon mentioned that C is a lot more efficient for expressions. And then again, if you have to really optimise an expression, counting every cycle (I had to do that not so long ago), using every architecture-specific detail that can be used, C has again no chance. Even in VPA you may well have to use architecture-specific lines...

Dimiter

------------------------------------------------------ Dimiter Popoff Transgalactic Instruments

formatting link

------------------------------------------------------

Robert Adsett wrote:

Reply to
Didi

Bad choice of words on my part... I meant: intrinsics look and behave like normal C functions, and you can replace them with real functions. __nop is a great example indeed, and many compilers support it.

It is unlikely intrinsics will become part of the official C standard any time soon - just like many other de facto compiler features...

My definition of an intrinsic is that it can be replaced by strictly conforming C code, i.e. it behaves like any other function. Of course you could make non-portable intrinsics, but those are very difficult to implement in a highly optimizing compiler (like inline assembler...).

Intrinsics are portable like this:

inline void nop()
{
#if USE_NOP_INTRINSIC
    __nop();
#else
    __asm("nop");   // or something similar
#endif
}

If the compiler supports that datatype, absolutely. Not all compilers support long long, for example (and it is not standard in C++), so for portable code you sometimes have to emulate it.
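A minimal sketch of the kind of emulation I mean, assuming unsigned long is 32 bits (the type and function names are just illustrative):

    typedef struct { unsigned long lo, hi; } u64;   /* emulated 64-bit value */

    u64 u64_add(u64 a, u64 b)
    {
        u64 r;
        r.lo = a.lo + b.lo;
        r.hi = a.hi + b.hi + (r.lo < a.lo);   /* carry out of the low word */
        return r;
    }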

Wilco

Reply to
Wilco Dijkstra

How about shift right by 7 then? If you have optimal code sequences for every possible special case, apply them consistently, and maybe even use instruction selection macros, then you are atypical indeed - almost a human compiler! Only very experienced assembler programmers write smart code like that consistently.

It definitely is. It is also a good way to spend a lot of time on very few instructions... I once spent a whole day staring at 3 instructions before I got them down to 2 (this was the inner loop of a division routine so it mattered).

Absolutely.

Wilco

Reply to
Wilco Dijkstra

If it is not about compiler efficiency then what is the point according to you? Assume we have a perfect compiler, why would assembler beat C at all?

Wilco

Reply to
Wilco Dijkstra

"Wilco Dijkstra" schreef in bericht news:1VvXg.23181$ snipped-for-privacy@newsfe2-gui.ntli.net...

Isn't it annoying that the screen saver kicks in after an hour of staring ;)

--
Thanks, Frank.
(remove 'q' and '.invalid' when replying by email)
Reply to
Frank Bemelman

In what way is assembler less restrictive than C? One of C's best features is that it imposes few restrictions (some would argue too few).

What exactly is the picture a metaphor of? Statements? Functions?

Here you go wrong. Pictures are easier to manipulate than text and are thus more efficient (why do you think most of us use a GUI?).

Another metaphoric

So basically you're saying that assembly programmers are by definition smarter than C programmers?

Right, so anyone working on big projects should drop all their C/C++ code immediately and rewrite everything in assembler as it is so much more efficient?

I guess you just lost your last bit of credibility...

Wilco

Reply to
Wilco Dijkstra

Computer users do use a GUI. Programmers use text to write code.

The day may (will?) come when human programmers are unnecessary. Until then, programmers will likely use text. Of course, the way the trend is going, someone setting up a PowerPoint presentation may well be called a programmer by that day (or do we have to wait that long...).

So when did I say that? Please reread the sentence in its entirety. There is nothing about "anyone" in it.

Dimiter

------------------------------------------------------ Dimiter Popoff Transgalactic Instruments

formatting link

------------------------------------------------------

Wilco Dijkstra wrote:

Reply to
Didi

Yes, fairly trivial. To make it more of a challenge I would reduce buffer storage by storing 4 samples in each entry and maybe do the square root calculation incrementally as samples go in and out.
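Something along these lines, as a rough sketch (assuming the task is a running RMS over an N-sample window; the names and window size are illustrative, and the packing of 4 samples per entry is left out): only the sum of squares is updated per sample, and the square root is deferred until an output is actually needed.

    #include <math.h>
    #include <stdint.h>

    #define N 64                       /* window length, illustrative   */

    static int16_t  buf[N];            /* the last N samples            */
    static uint64_t sum_sq;            /* running sum of squares        */
    static unsigned idx;               /* slot of the oldest sample     */

    double rms_update(int16_t sample)
    {
        int16_t old = buf[idx];
        sum_sq += (uint64_t)((int32_t)sample * sample);   /* add new sample^2     */
        sum_sq -= (uint64_t)((int32_t)old * old);         /* drop oldest sample^2 */
        buf[idx] = sample;
        idx = (idx + 1) % N;
        return sqrt((double)sum_sq / N);
    }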

Yes, as usual much of the work in programming is in the design, not the implementation. Once you have got a good algorithm you can implement it efficiently in any language. Of course when you need to change it, it starts to matter how it was written.

Wilco

Reply to
Wilco Dijkstra

Maybe you should ask Jon for the C PIC code, to try on your compilers ?

Note that in that case he was also jumping uC families, and as opcode reach increases, so will your code size. IIRC it was also the first project on that compiler, and some time was spent getting to know the tools.

-jg

Reply to
Jim Granville

Some questions I'd have would be: Does his compiler permit me to specify that certain routines reside on the same 256-entry page? (I assume with appropriate linker controls this should be possible.) Can his switch() capability deal with fixed-allocation-size cases on 2^n address boundaries? (Because, for example, I used the ADDWF PCL,F instruction to branch to them.) Does his compiler support the use of RETLW in such branched calls? That's just from old memory. There were other things that I'd need to review code to drag out. I did a lot of things with the paged system and status bit access and so on for this application.

Jon

Reply to
Jonathan Kirwan

Thanks for your contributions. I won't be able to respond to everything, but I agree mostly with what you have said. I am keen to see an example that shows the 6x code size difference you experienced!

In my last job I worked on a compiler for 10 years, and in that time its performance improved by 50% and its codesize by 30%. I would call that a good improvement rate, especially considering the compiler was pretty good to start with.

For compilers just being retargeted to a new architecture I would expect the improvement rate to be much faster for the first few years as compiler engineers get up to speed. Of course, as you say, it depends a lot on how much time is invested in it and how compiler-friendly the architecture is.

It is true that the underlying fundamentals of compilers have hardly changed in the last 25 years, but that doesn't mean they haven't improved a lot. Nowadays compilers can do much more advanced program analysis due to having more resources.

Modern compilers also work much harder to use the complete instruction set well - all the registers and other features. In the past compilers used to have a rigid call frame, for example; today's compilers have long stopped using frame pointers and related unnecessary baggage.

The first compiler I used was the ROMP Pascal compiler in the early '90s, and the code it produced was absolutely terrifying. Compilers have come a long way since then...

Wilco

Reply to
Wilco Dijkstra

Jon and I have had several offline discussions on this, including code examples exchanged. Jon (and others) represent a part of the industry that is essential to developing compilers: the group of people who completely understand an instruction set and use it.

Part of the exercise of asm in C came from those discussions. It is clear that the relationship between asm and C starts with demonstrating that C doesn't have to cost code size or cycles; then the real advantages of a HLL can be emphasized - basically the accounting and database advantages, things computers are good at and humans are poorer at.

w..

Reply to
Walter Banks

You said "someone who is good enough at assembler". That includes a large proportion of programmers and most people working on very large and complex projects. For example the code in a mobile phone amounts to 5 million lines of C and C++ (plus 500 lines of assembler).

Anyway, the ridiculous bit is that writing assembler could ever be more efficient than writing C *on large projects*.

Wilco

Reply to
Wilco Dijkstra

The Microchip PIC can be described as a challenge. Program flow analysis determines which switch implementation to use. Memory management on many PICs is expensive; as ROM fills up it can account for close to 25% of the execution cycles, and the same is true for RAM allocation. The MPC compiler takes advantage of RAM address aliases in code generation.

ADDWF PCL,F is an in-page computed jump. For deterministic computed jumps this code works and is used by the MPC compiler; for jumps that can cross a page boundary, alternate sequences are used.
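As a small illustration only (the values below are made up, and this is C source rather than generated code): a dense switch that returns constants is the kind of source that can be lowered to an ADDWF PCL,F jump into a run of RETLW instructions when the cases land in one page:

    /* Illustrative only: dense cases returning byte constants */
    unsigned char segment_pattern(unsigned char digit)
    {
        switch (digit) {          /* candidate for ADDWF PCL,F + RETLW table */
        case 0:  return 0x3F;
        case 1:  return 0x06;
        case 2:  return 0x5B;
        case 3:  return 0x4F;
        default: return 0x00;
        }
    }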

Literal constants on a Microchip PIC most often use RETLW; depending on the part, we also support direct ROM access to literals on parts that allow it. Our compiler supports 7-bit packed string literals in ROM on some of the 14-bit parts. The trade-off: RETLW is usually faster, while the other approaches usually have more compact ROM requirements.
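A rough sketch of the packing idea only (this is not our implementation, just an illustration): since ASCII needs only 7 bits, 8 characters can be stored in 7 ROM bytes by concatenating the 7-bit codes:

    #include <stdint.h>

    /* Pack 8 ASCII characters (7 significant bits each) into 7 bytes. */
    void pack7(const char src[8], uint8_t dst[7])
    {
        uint16_t acc = 0;                 /* bit accumulator      */
        int bits = 0, out = 0;

        for (int i = 0; i < 8; i++) {
            acc |= (uint16_t)(src[i] & 0x7F) << bits;
            bits += 7;
            if (bits >= 8) {              /* a full byte is ready */
                dst[out++] = (uint8_t)acc;
                acc >>= 8;
                bits -= 8;
            }
        }
    }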

w..

-- snipped-for-privacy@bytecraft.com Byte Craft Limited

formatting link

Reply to
Walter Banks
