Disobeying jet engines - why?

Hi Didi,

"Didi" wrote in message news: snipped-for-privacy@d70g2000hsb.googlegroups.com... "So how quickly did you do it the first time? (copying, if you just found it and took it, does not count)."

I found the algorithm's description in "pseudo-code" and just "translated" that to C. Translating it into assembly wouldn't have been that much harder, granted, but it still would have taken longer and been more prone to error, IMO.

I am all for people sitting around and spending the time to write highly-optimized code in assembly when the job calls for it (and your code at

formatting link
looks pretty nice...). It's just getting back to what John mentioned about priorities -- in many cases you don't need every last cycle saved, just as you don't need a "real" RTOS or even an OS in many cases, and in my particular case here CRCs are computed infrequently enough that -- other than the conversion between array indexing and pointers I did for fun -- it wasn't worth the extra time and effort to write the routine in assembly.

While I'd grant that working in something like C occasionally takes more time when you need to get at the hardware, in any reasonably complex system there are great time savings to be had overall. I'd even bet that John would start programming those 68k CPUs of his in PowerBASIC if it were available to run on them, for instance!

" Can you produce the binary (or, better, the native machine code as I did) for your code?"

Sure... this is for an AVR CPU with the IAR compiler. Notice that what really slows this code down is that we're forcing a little 8-bit CPU to calculate 32-bit CRCs, so there's a lot of register thrashing:

 98 ULONG CalcCRC(ULONG rc,const UCHAR* buf,UINT len)
        CalcCRC:
 99 {
        00000000 ........   CALL    ?PROLOGUE8_L09
        00000004             REQUIRE ?Register_R4_is_cg_reg
        00000004             REQUIRE ?Register_R5_is_cg_reg
        00000004             REQUIRE ?Register_R6_is_cg_reg
        00000004             REQUIRE ?Register_R7_is_cg_reg
        00000004 0108        MOVW    R1:R0, R17:R16
        00000006 0119        MOVW    R3:R2, R19:R18
        00000008 01DA        MOVW    R27:R26, R21:R20
        0000000A C01D        RJMP    ??CalcCRC_0
100     while (len > 0)
101     {
102         UCHAR nb = *buf;
        ??CalcCRC_1:
        0000000C 912C        LD      R18, X
103         UCHAR i = (rc & 0xff);
        0000000E 0180        MOVW    R17:R16, R1:R0
104         i ^= nb;
        00000010 2702        EOR     R16, R18
105         rc = rc >> 8;
        00000012 2C01        MOV     R0, R1
        00000014 2C12        MOV     R1, R2
        00000016 2C23        MOV     R2, R3
        00000018 2433        CLR     R3
106         rc ^= CRCTable [i];
        0000001A ....        LDI     R30, LOW(CRCTable)
        0000001C ....        LDI     R31, HIGH(CRCTable)
        0000001E ....        LDI     R19, (CRCTable) >> 16
        00000020 E010        LDI     R17, 0
        00000022 0F00        LSL     R16
        00000024 1F11        ROL     R17
        00000026 0F00        LSL     R16
        00000028 1F11        ROL     R17
        0000002A 0FE0        ADD     R30, R16
        0000002C 1FF1        ADC     R31, R17
        0000002E BF3B        OUT     0x3B, R19
        00000030 9047        ELPM    R4, Z+
        00000032 9057        ELPM    R5, Z+
        00000034 9067        ELPM    R6, Z+
        00000036 9076        ELPM    R7, Z
        00000038 2404        EOR     R0, R4
        0000003A 2415        EOR     R1, R5
        0000003C 2426        EOR     R2, R6
        0000003E 2437        EOR     R3, R7
107         buf++;
        00000040 9611        ADIW    R27:R26, 1
108         len--;
        00000042 5061        SUBI    R22, 1
        00000044 4070        SBCI    R23, 0
109     }
        ??CalcCRC_0:
        00000046 2F06        MOV     R16, R22
        00000048 2B07        OR      R16, R23
        0000004A F701        BRNE    ??CalcCRC_1
110
111     return rc;
        0000004C 0180        MOVW    R17:R16, R1:R0
        0000004E 0191        MOVW    R19:R18, R3:R2
        00000050 E0E8        LDI     R30, 8
        00000052 ........    JMP     ?EPILOGUE_B8_L09
112 }

"I leave it to the rest of the group to say which piece of code is easier to read and understand - and thus keep under control."

I'd vote for neither one, since for something like a CRC, understanding how a CRC really works (that you're more or less computing the remainder of one verrryyyy long division using almost-but-not-quite modulo-2 arithmetic, but, oh, wait... sometimes people do things bit-reversed and use various starting and ending XORs for various purposes... and then to do things efficiently you can start using table lookups to process multiple bits at once...) is a lot harder than understanding either piece of code.
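For what it's worth, here is roughly what that table-driven approach looks like in plain C -- a minimal sketch in the usual RFC 1662 / CRC-32 style (reflected polynomial 0xEDB88320); the names are purely illustrative, not taken from either of our routines:

    #include <stdint.h>

    static uint32_t crc_table[256];

    /* Build the 256-entry lookup table, one bit at a time. */
    static void init_crc_table(void)
    {
        for (uint32_t n = 0; n < 256; n++) {
            uint32_t c = n;
            for (int k = 0; k < 8; k++)
                c = (c & 1) ? 0xEDB88320UL ^ (c >> 1) : (c >> 1);
            crc_table[n] = c;
        }
    }

    /* Same shape as the CalcCRC loop above: one table lookup per input byte. */
    static uint32_t calc_crc(uint32_t rc, const uint8_t *buf, unsigned len)
    {
        while (len-- > 0)
            rc = (rc >> 8) ^ crc_table[(rc ^ *buf++) & 0xffu];
        return rc;
    }

Start it with rc = 0xFFFFFFFF and complement the result, and you get the familiar PPP/Ethernet-style CRC-32.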

---Joel

Reply to
Joel Koltner

So, we are talking about the same thing. I said I didn't want to IMPLEMENT it, because it's already done. OK, you have a math ASM library, cool. A nice syntax for it means you already program your device in an HLL. :-)

I'm only asking: why should we reinvent the wheel?

-- Levente

Reply to
Levente

[ BTW, the source is at
formatting link
this is the assembler listing with native opcodes. ]

Actually, code optimization is not higher on my list than it is for a C programmer. I did this within an hour or so, about as long as it took me to read the description of the algorithm in RFC 1662, and this is the first time I've looked back at it since (last modified 08.18.2004... :-).

You may be able to match this in C, but you cannot do it any faster, and you don't need to - this is fast enough, as fast as one moves from one thing to another.

What is key about using VPA (which originated as 68k assembly and has evolved since) is the fact that you choose the language level: on one line you copy a register to another, and on the next line you "do" something with an object pointed to by a5... Variability of language level is inherent to any language, human languages included. High-level programming languages put a floor on that level which is too high - which is why programmers are slower using them. It is similar to menu-driven versus command-line systems: the former are great for first-time users, while the latter are a much faster way of getting work done for those who do it all the time.

Dimiter

------------------------------------------------------
Dimiter Popoff               Transgalactic Instruments

formatting link

------------------------------------------------------

formatting link

Joel Koltner wrote:

Reply to
Didi

I have to laugh about the bad software shops. I was part of an I.T. unit of six, and we all had our own niches: mine was databases and security, and we had an apps developer, a project manager, and a web designer, as well as a systems guy, a role I filled part of the time.

Anyhow it was great having project management because we were getting hammered by requests for new software, new this, new that.

And when the new administration came in, guess what they scrapped first. Yup, project management. Everything went to hell after that.

Reply to
T


It's funny, C# was supposed to take care of memory management but it too drops the ball in that regard. I too prefer C to C++.

Reply to
T

The Windows asynchronous event approach is a recipe for chaos. VMS did it right. But for embedded systems, the OS will cause no trouble if there is no OS.

But this is sci.electronics.design. We program engineering apps to use ourselves, and embedded stuff inside our products. In both cases, the simplest tools generally work best, and using the hottest/latest "CS" tricks will generally (over 50% of the time, statistically) lead to disaster. The typical CS graduate will be absolutely useless in programming either of these needs.

The best programmers I know were *not* CS majors, and some never studied programming formally at all. Chemists, for some reason, seem to become good programmers.

John

Reply to
John Larkin
[snip]

My experience with my kids matches your expectation.

In addition I seem to note a correlation between language (as in foreign) skills and the ability to program.

...Jim Thompson

--
| James E.Thompson, P.E.                          |    mens    |
| Analog Innovations, Inc.                        |     et     |
| Analog/Mixed-Signal ASIC's and Discrete Systems |   manus    |
| Phoenix, Arizona            Voice:(480)460-2350 |            |
| E-mail Address at Website     Fax:(480)460-2142 | Brass Rat  |
| formatting link                                 |    1962    |

America: Land of the Free, Because of the Brave

Reply to
Jim Thompson

In message , John Larkin writes

If you never do anything more complex than a toy program of a few hundred lines, then you might get away with it. Even then you may have response-time problems if the code is all sequential and non-interruptible. Cooperative multitasking models are a step up from that.

Incidentally, for a lot of small embedded stuff I would use a state machine with a watchdog timer, but there comes a point on the complexity curve where that simple model no longer performs adequately.
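For anyone who hasn't written in that style, a bare-bones version looks something like the following sketch - the states and the hardware stubs are invented purely for illustration, not taken from any real project:

    /* Superloop state machine with a watchdog: every pass is short and
       non-blocking, and the watchdog is only fed while the loop keeps turning. */
    typedef enum { ST_IDLE, ST_MEASURE, ST_REPORT } state_t;

    /* Stand-ins for real hardware access. */
    static int  sample_ready(void)  { return 1; }
    static void take_sample(void)   { }
    static void send_report(void)   { }
    static void kick_watchdog(void) { }   /* real code writes the WDT register */

    int main(void)
    {
        state_t state = ST_IDLE;

        for (;;) {
            switch (state) {
            case ST_IDLE:
                if (sample_ready())
                    state = ST_MEASURE;
                break;
            case ST_MEASURE:
                take_sample();
                state = ST_REPORT;
                break;
            case ST_REPORT:
                send_report();
                state = ST_IDLE;
                break;
            }
            kick_watchdog();
        }
    }

It holds up only as long as every pass through the loop stays short; once one activity can block or the timing requirements multiply, the model starts to creak, which is the complexity-curve point mentioned above.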

Your assertion is about the same as me claiming that there is nothing to electronics you just need a soldering iron and a multimeter.

Very probably.

Regards,

--
Martin Brown
Reply to
Martin Brown

In message , Joel Koltner writes

This really isn't a particularly good example, because on a lot of CPUs there is an almost 1-to-1 correspondence between each line of C here and an assembler instruction (or two). It is only a few lines longer in x86 ASM, although somewhat more cryptic and decidedly non-portable.

A much more challenging test is to create a routine to transpose a large NxN matrix as quickly as possible. When the algorithm matters, using an HLL is a big help. Or try to write a really strong chess program (hint: they are now almost all coded in HLLs with just tiny parts in assembler).
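To make the transpose point concrete, the sort of algorithmic restructuring that is easy to write in C but tedious in assembler is cache blocking. A rough in-place sketch (the tile size B is just an illustrative tuning knob, not from any particular implementation):

    #include <stddef.h>

    #define B 32   /* tile edge; tune so two tiles fit in cache */

    void transpose(double *a, size_t n)
    {
        /* Walk the upper triangle in BxB tiles and swap each tile with its
           mirror image, so both sides of the swap stay cache-resident. */
        for (size_t ii = 0; ii < n; ii += B)
            for (size_t jj = ii; jj < n; jj += B)
                for (size_t i = ii; i < ii + B && i < n; i++) {
                    size_t j0 = (ii == jj) ? i + 1 : jj;   /* skip the diagonal */
                    for (size_t j = j0; j < jj + B && j < n; j++) {
                        double t     = a[i * n + j];
                        a[i * n + j] = a[j * n + i];
                        a[j * n + i] = t;
                    }
                }
    }

The naive double loop touches one of the two elements in each swap with stride n and thrashes the cache for large n; the blocked version does the same swaps in an order that keeps the working set small, and that reorganisation is exactly the sort of thing an HLL lets you experiment with in minutes.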

Similarly using the right language for the job in hand can make a huge difference to the amount of effort needed to complete it.

I challenge the "ASM will do everything you need" crowd to write a QUINE (a program which, when executed, outputs its own source code) in their favourite assembler language. I chose to do it in one line of LISP.

((lambda (x) (list x (list 'quote x))) '(lambda (x) (list x (list 'quote x))))

It is among the shortest, and LISP is a very old language, dating back to the 60's.

The errata lists are surprisingly short considering how much clever stuff like register colouring and speculative execution is going on in the background to keep the thing fully utilised at every clock cycle.

I remember when Cyrix commissioned a full formal proof for the design of an 8087-compatible chip; their resulting validation suite found around a couple of dozen previously unknown defects in the Intel chip.

Regards,

--
Martin Brown
Reply to
Martin Brown

In message , Didi writes

This is getting ever more true. Keeping the ALU pipeline stall rules satisfied is a challenging problem for the guys doing optimising compilers for a range of target CPUs (even ones in the same nominal family).

If you are developing algorithms the last thing you need is to be hampered by the opaqueness of the representation. You might as well argue that real men should just use hex and memorise all the opcodes too.

The reason that FORTRAN was so spectacularly successful was that it made complex scientific computation accessible to anyone who could read and write algebra rather than just for a handful of white coated acolytes of the big iron god.

I find it odd that here I am jumping to the defence of C, when I am usually on the other side of the argument. But there can be no denying that C is way more productive for software development than assembler.

And that strongly typed languages can prevent a fair proportion of the human error that inevitably creeps into software development. The more faults you can find by static analysis the better for all concerned.

Yeah right. I am reminded of the Klingon programmer page:

formatting link

OK. Let's see you write a Quine in the assembly language of your choice.

I chose my language carefully. It took me one line and a minute's typing.

Regards,

--
Martin Brown
Reply to
Martin Brown

In message , Jan Panteltje writes

And what if it is someone else's code? That is frequently the case on large projects and almost always the case when contracting.

What a dinosaur. You might at least use the ASSERT, VERIFY and TRACE macros so thoughtfully provided in modern implementations.

So you come back to a piece of kit that isn't working and have to binary-chop to find the failing line of code, each time carefully reproducing whatever situation caused it to fail - or alternatively waiting hours or days for the failure to happen again by chance.

The relatively simple post-mortem debug tools we had in the mid-80's would log the failing address, failure code, stack top, and registers. From that and the link map you could go to the exact failing line of code and, knowing how it failed, usually work out why.

This method allowed the last remaining bugs in the shipped systems to be practically eliminated as the number deployed increased and failures became rarer and rarer. You can even get MS compilers to do this trick now.
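The record itself doesn't have to be elaborate. A stripped-down sketch of the idea - the field names and sizes here are invented for illustration, and real code would put the structure in RAM that survives a reset so it can be read out post mortem:

    #include <stdint.h>
    #include <string.h>

    #define N_REGS               16
    #define STACK_SNAPSHOT_WORDS  8

    struct crash_record {
        uint32_t fault_pc;                          /* failing address */
        uint32_t fault_code;                        /* why it failed */
        uint32_t regs[N_REGS];                      /* register file at the fault */
        uint32_t stack_top[STACK_SNAPSHOT_WORDS];   /* top of the stack */
    };

    static struct crash_record last_crash;          /* ideally in noinit RAM */

    /* Called from the fault/exception handler with a snapshot of the CPU state. */
    void record_fault(uint32_t pc, uint32_t code,
                      const uint32_t *regs, const uint32_t *sp)
    {
        last_crash.fault_pc   = pc;
        last_crash.fault_code = code;
        memcpy(last_crash.regs, regs, sizeof last_crash.regs);
        memcpy(last_crash.stack_top, sp, sizeof last_crash.stack_top);
    }

With the link map in hand, fault_pc takes you straight to the failing line, which is exactly the workflow described above.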

I am inclined to agree that there is too much mindless watching of the screen in a runtime debugger with the brain in neutral these days. But that is not the fault of the debugging tools; it is the fault of the users.

Regards,

--
Martin Brown
Reply to
Martin Brown

Indeed. But I was talking about programming, not about using someone else's libraries.

No denying of what? You argue without saying what you are arguing about. Which assembler? x86? Of course C is a better language. PIC? Anything is better (as far as I have seen, that is; I only looked at the PICs a long time ago). What I use is something you do not know and have never had access to. It looks much of the time like 68k assembler, but it has grown further, and I take advantage of a library built over the years which no compiler matches. In places it looks higher level than C...

That's the key issue behind C - typing effort. Back then keyboards were far from today's (today's good ones, that is) and 2400 bps connections were common. That makes the choice of the C syntax a valid one for its time, but not a good one overall.

Well, you enter an argument without understanding what the other side is saying. I am the last person any of these would apply to.

So did I. It took me 0 lines of programming (0 seconds); I just went to the directory and opened the source file I wanted.

Again, there is a difference between programming and using a computer. The example you chose is not one of programming.

Let's try it again, have a look at this:

formatting link

How long would it take a typical programmer to do that? I did not put DPS on it (although I could have); I only linked in some hex/dec etc. tiny things I needed. The rest is done in VPA, pretty much after John Larkin's recipe (no preemptive scheduler) - I was curious. I even took that a little further: I used no IRQs at all (!). OK, the deep FIFOs the 5200 has helped a lot with that. And no, there is no waiting for I/O and stalling everything, as someone in the thread suggested about John's method; if one does not program a waiting loop into it, it does not go into a wait loop... :-). I coded this within two months of having the board (my first time with this processor) up and running with my debugger and under control with my JTAG tools for boundary scan (which itself took me another month or two, I am not sure now).

Dimiter

------------------------------------------------------
Dimiter Popoff               Transgalactic Instruments

formatting link

------------------------------------------------------

formatting link

Mart> In message

Reply to
Didi

On Jan 31, 1:43 am, Martin Brown wrote: [...]

Here you go in 8051:

        ORG   0
        ............ Stuff deleted to save space
        MOV   DPTR,#0

LOOP:   LCALL DisasmDptr
        MOV   A,DPL
        ORL   A,DPH
        JNZ   LOOP
        MOV   PCON,#NAP_TIME

Other than what is shown, all the code comes from the library of stuff I have already written.

Reply to
MooseFET

If you are going to call a routine which you are not also showing, you might just as well do a few file calls to print the source code from the disk.

Reply to
Richard Henry

There are two important measurements in software engineering:

  1. How long does it take for your routine to run?
  2. How long will it take for you to write the routine?
Reply to
Richard Henry

There's nothing wrong with interrupts. It would be silly to do a serious realtime system without them.

Like Windows 3?

I am talking about embedded firmware inside electronic products, not web browsers running under Unix. I've written three preemptive multitasking kernels that worked well in thousands of systems; I just think that sort of glitz is seldom needed in hard embedded apps.

Is that what you believe?

I design maybe half a dozen electronic gadgets a year, most with embedded CPUs doing complex realtime stuff, and people buy them. Bugs are very, very rare; none reported in the last year.

Show us some embedded products you've done, and tell us about the OS's and architecture.

John

Reply to
John Larkin

Where do you draw the line, John? I believe you've mentioned you're using Ethernet-to-serial converter modules in your products, and I wouldn't be surprised if there's more code in that single module than there is in the main CPU running the box. These days even something as "hard" as those Cat-5 cable testers someone asked about yesterday can have a 320x240 color LCD that people expect to "behave" like a desktop PC in terms of usability (i.e., they want a GUI!). Another example is cell phones -- even "low end" cell phones still have a full-color LCD, USB connectivity, and (almost always) a web browser built in... it's not surprising that they all run Windows Mobile, the Palm OS, Symbian, or some other OS that has multi-tasking support (some cooperative, some pre-emptive), fancy file systems, semaphores, memory management, etc.

Those Ethernet converter modules are probably more likely to have bugs than your code is. :-)

Reply to
Joel Koltner

10 LIST

Cheers! Rich

Reply to
Rich Grise

On Thu, 31 Jan 2008 07:40:26 -0800, John Larkin wrote: ...

I've worked on a handful of embedded systems - an 8051 interface from scratch, an 8048 FIFO buffer, Z-80 and 80186 embedded controllers, and a 68HC11 battery charger/tester - and not one had anything even remotely resembling an operating system.

Cheers! Rich

Reply to
Rich Grise

Then run Linux. But if you're doing a tach with an LCD, don't.

I avoid products like you describe, mainly because they become so software-intensive that they take forever to finish, cost hundreds of kilobucks at a minimum to engineer, and will likely need serious maintenance. Personally, I prefer to pump out a lot more high-performance but less software-intensive products and let the market decide which to buy. If only a fraction of your products are successful (and nobody does much better), survival means getting a lot of products out in the most pragmatic manner.

The decision to use an Xport for Ethernet, rather than use a single CPU and include a TCP/IP stack and an RTOS, is a way to get a product out, in spite of programmer enthusiasm for playing with stacks. The customer just wants it to work.

The Xports seem solid; no problems and only minor quirks. They sure are easy to use. I wish they were a little faster (12 millisecond timeouts on outgoing strings) but that's livable.

John

Reply to
John Larkin
