another bizarre architecture

In article , Rich Grise wrote: [....]

Many years ago, a friend of mine made a nice little joke program. You could feed uncommented code into it and it would produce code with very nice comments. It would look at two instructions and look up phrases based on them and the random number generator. You knew you were in trouble when they seemed to be making sense.

--
kensmith@rahul.net   forging knowledge
Reply to
Ken Smith

The PDP-11 was stunningly beautiful in its cleanliness and symmetry. The preferred radix was octal, and the instruction set and addressing modes fit perfectly into octal digits. I can still assemble a bit from memory...

123722 = compare byte (CMPB), source absolute address, destination register 2, autoincrement

Its instruction set was the basis for C. The 68K has more registers and is a 32-bit machine, but it is less orthogonal and not something you can easily assemble from memory. Only its MOVE instruction has the source/destination symmetry that nearly all PDP-11 opcodes had.
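To make the "fits perfectly into octal digits" point concrete, here is a quick toy decoder (mine, for illustration, not from the post above; the field layout is the standard PDP-11 double-operand format) that splits such a word along its octal digit boundaries:

    #include <stdio.h>

    /* Decode a PDP-11 double-operand instruction word.
       Bits 15-12: opcode (bit 15 is the byte flag);
       bits 11-6: source mode/register; bits 5-0: dest mode/register.
       Each mode/register pair is exactly two octal digits. */
    int main(void)
    {
        unsigned word = 0123722;                 /* CMPB @#addr,(R2)+ */
        unsigned opcode   = (word >> 12) & 017;  /* 12 octal = CMPB */
        unsigned src_mode = (word >> 9)  & 07;   /* 3: autoincrement deferred */
        unsigned src_reg  = (word >> 6)  & 07;   /* 7: PC, so @# absolute */
        unsigned dst_mode = (word >> 3)  & 07;   /* 2: autoincrement */
        unsigned dst_reg  =  word        & 07;   /* register 2 */

        printf("op=%02o src=%o%o dst=%o%o\n",
               opcode, src_mode, src_reg, dst_mode, dst_reg);
        return 0;                                /* prints: op=12 src=37 dst=22 */
    }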

John

Reply to
John Larkin

It's a move machine, otherwise known as a Transport Triggered Architecture (TTA). The MAXQ is only the second commercially available move machine in existence. The first was New England Digital's ABLE:

formatting link

The first time I became aware of move machines was from the "Ultimate RISC" page:

formatting link

(there is a corresponding page for "Minimal CISC" at

formatting link)

The Delft University of Technology in the Netherlands took the concept to high-performance computing and ended up with a parameterized core: you define how many ALUs, FPUs, MMUs, scratch registers, transfer channels, etc., and the CPU, along with the appropriate C compiler back-end, is generated for you:

formatting link

TU Delft's move machine is very heavily parallel at the instruction level. Alas, ILP currently appears to be a dead end.

Maxim markets the MAXQ as a very-low-digital-noise, low-power chip (though with most cores these days running below 3 V, the noise advantage matters less). They have been slow to implement significant power-saving techniques, relying instead on the intrinsic properties of the architecture.

I myself have designed and implemented (in simulators/emulators) several TTA designs. The architecture is especially attractive for the designer, since the instruction decoder is simply a demultiplexer. And of course, any left-over register address can be hooked up to a peripheral.
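As a toy illustration of that point (register names and trigger layout are invented here, not MAXQ's or TU Delft's), the whole "decoder" of a move machine can be modelled as a demultiplexer on the destination address, with a functional unit fired by a write to one particular address:

    #include <stdio.h>

    /* Toy transport-triggered machine: every instruction is just
       "move src -> dst".  Addresses R0-R3 are general registers;
       ALU_A holds the first ALU operand, and writing ALU_TRIG
       performs A + value, latching the result at ALU_RES. */
    enum { R0, R1, R2, R3, ALU_A, ALU_TRIG, ALU_RES, NREGS };

    static int regs[NREGS];

    static void bus_write(int dst, int value)   /* the "demultiplexer" */
    {
        if (dst == ALU_TRIG)
            regs[ALU_RES] = regs[ALU_A] + value; /* triggered add */
        else
            regs[dst] = value;
    }

    static void move(int src, int dst) { bus_write(dst, regs[src]); }

    int main(void)
    {
        regs[R0] = 2; regs[R1] = 3;
        move(R0, ALU_A);     /* transport operand A */
        move(R1, ALU_TRIG);  /* transport B; this triggers the add */
        move(ALU_RES, R2);   /* fetch result */
        printf("R2 = %d\n", regs[R2]);   /* prints: R2 = 5 */
        return 0;
    }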

Reply to
slebetman

The numbers I've seen are not all that low-power: they have one of the worst Icc/frequency curves among the claimed low-power uCs.

To be fair, that does not seem to be the core's fault; it looks like the chip designers were a bit slack in the memory-access design area.

Might have fallen into the old trap of believing their own propaganda and, as you suggest, thinking that if you have a low-power core, that's all you need for a low-power device.

That explains the strange opcode documentation. Sounds like this could also port to an FPGA reasonably well (if anyone would bother, as the compilers will be very tightly locked to one implementation...).

-jg

Reply to
Jim Granville


There are two methodologies to consider:

  1. Write a lot of code fast. Once you get a clean compile, start testing it on the hardware and look for bugs. Keep fixing bugs until it's time to ship. Intend to find the rest of the bugs later, which usually means when enough customers complain.

  2. Write and comment the code carefully. Read through it carefully to look for bugs, interactions, and optimizations. Fix or entirely rewrite anything that doesn't look right. Figure on more review time than coding time. NOW fire it up on the hardware and test it.

Method 2, done right, makes it close to impossible to ship a product with bugs, because most of the bugs are found before you even run the code. Nobody can walk up and say "we have to ship it now, we'll finish debugging later."

Method 2 is faster, too.

John

Reply to
John Larkin

The most critical rule I have defined for myself is "never write at once more than you can hold in your head". After debugging a piece which is < your brain capacity, move on to the next piece :-). Obviously your "think before coding", writing comments, etc. also apply.

Accepting that there may be bugs you won't catch, maybe for years, is part of life above a given code size/complexity, as I am sure you know. I have had bugs show up for the first time several years after the code was written and "debugged"... No human creation is perfect, I guess, code included :-). But I completely agree with your point that embedded code of about 5-10k lines can and must be made bug-free; I have done it myself more than once.

Dimiter

------------------------------------------------------
Dimiter Popoff               Transgalactic Instruments

formatting link

------------------------------------------------------


Reply to
Didi

The corollary to that is "keep your functions short; one screenful is usually enough".

Please do not strip attributions for material you quote. Please do delete quoted material that is not germane to your answer.

--
 
 
 
 "A man who is right every time is not likely to do very much."
                           -- Francis Crick, co-discoverer of DNA
 "There is nothing more amazing than stupidity in action."
                                             -- Thomas Matthews
Reply to
CBFalconer

Interesting. I was taught the screenful rule of thumb back in the days when a CRT was a separate entity from the computer. But I have been working lately with Forth and am becoming a firm believer in "the smaller the routine, the better". This certainly works in Forth, and works well.

I used a Sudoku solver as my second significant program in Forth, to learn more about the language. I spent a fair amount of time dealing with stack problems (underflow, items not removed on exiting a word, and just general screwing up). Of course I knew the guidelines of keeping the routines small and testing as you go, rather than writing a bunch of code and only then starting to test. I learned by "proof of the pudding" that these rules make Forth coding and debugging much easier. But I really learned the lesson when I tried porting this to C.

I needed a project for the Luminary Micro design contest, and I decided the Sudoku solver would be an interesting one to port from Forth to C. I found the C code fairly easy to debug, in part because it had largely been debugged in Forth and I was matching it closely. But then I had to write new code for the new user interface. This was fairly painful in comparison, because it was so hard to test the routines until the code was complete to the top. Also, the routines seemed to be a lot larger (10 to 25 lines vs. 1 to 8 lines in Forth), although I have been told I just need to use more discipline.

If C had an easy way to interactively test routines as they were written, I would say C was the clear winner (no stack to fuss with). But given the interactivity of Forth, I am pretty convinced that small routines that can be visually verified, plus an interactive test environment, are the way to go. In either language, I would recommend a 10-line max, since that makes debugging easier regardless.
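(A rough sketch of a poor man's version of that interactivity in C: a throwaway assert() harness per routine, so each small function gets exercised the moment it is written. The row_has helper here is purely illustrative, not from the actual contest entry.)

    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical solver helper under test: does candidate digit d
       already appear in the nine cells of row r? */
    static int row_has(const int grid[9][9], int r, int d)
    {
        for (int c = 0; c < 9; c++)
            if (grid[r][c] == d)
                return 1;
        return 0;
    }

    int main(void)
    {
        int g[9][9] = { { 5, 3, 0, 0, 7, 0, 0, 0, 0 } };  /* rest zero */
        assert( row_has(g, 0, 5));   /* 5 is in row 0 */
        assert(!row_has(g, 0, 9));   /* 9 is not */
        printf("row_has: ok\n");
        return 0;
    }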
Reply to
rickman

... snip ...

These all derive from the functioning of the brain. It has been experimentally determined that the human brain cannot accurately keep track of more than roughly six or seven entities at once. This is known as the 'rule of seven'.

--
 
 
 
 "A man who is right every time is not likely to do very much."
                           -- Francis Crick, co-discoverer of DNA
 "There is nothing more amazing than stupidity in action."
                                             -- Thomas Matthews
Reply to
CBFalconer

I'm not familiar with all c implementations, but Microsoft developed a QuickC program (I still use their v2.0 of that), and it allows debugging similar to QuickBASIC's, which is okay. Also, there are several c interpreters out there, like Lua. These, I believe, let you test routines a little more conveniently than a separate compile/link/run cycle does.

(I don't know much about Lua, except that I've been interested to try it out at some point. As I read it, Lua is an extension language, has no notion of "main," only works embedded in a host program; the host can invoke functions to execute a piece of Lua code, can write and read Lua variables, and can register C functions to be called by Lua code. The free Lua distribution also includes a complete Lua interpreter as a separate program you can use to test routines, I think.)

Jon

Reply to
Jonathan Kirwan

Hmm. Never mind about Lua. It's not c. Just c-like. But there do seem to be other c interpreters out there that are c and not c-like, given some of the google results I've just skimmed.

Jon

Reply to
Jonathan Kirwan

John Larkin wrote:


Method 2 is an ideal to strive for, but it is not necessarily possible - it depends on the project. In some cases, you know what the program is supposed to do, you know how to do it, and you can specify, design, code and even debug the software before you have the hardware. There's no doubt that leads to the best software - the most reliable, and the most maintainable. If you are making a system where you have the time, expertise (the customer's expertise - I am taking the developer's expertise for granted here :-), and budget to support this, then that is great.

But in many cases, the customer does not know what they want until you and they have gone through several rounds of prototyping, viewing, and re-writing. As a developer, you might need a lot of trial and error getting software support for your hardware to work properly. Sometimes you can do reasonable prototyping of the software in a quick and dirty way (like a simulation on a PC) to establish what you need, just like breadboarding to test your electronics ideas, but not always. A "development" project is, as the name suggests, something that changes with time.

Now, I am not suggesting that Method 1 is a good thing - just that your two methods are black and white, while reality is often somewhat grey. What do you do when a customer is asking for a control system for a new machine he is designing, but is not really sure how it should work? Maybe the mechanics are not finished - maybe they can't be finished until the software is also in place. You go through a lot of cycles of rough specification, rough design, rough coding, rough testing with the customer, and repeat as needed.

Theoretically, you could then take the finished system, see what it does, write a specification based on that, and re-do the software from scratch to that specification using Method 2 above. If the machine in question is a jet engine, then that's a very good idea - if it is an automatic rose picker, then it's unlikely that the budget will stretch.

I think a factor that makes us appear to have different opinions here is the question of who is the customer. For most of my projects, we make electronics and software for a manufacturer who builds it into their system and sells it on to end users. You, I believe, specify and design your own products which you then sell to end users. From our point of view, you are your own customers. It is up to the customer (i.e., the person who knows what the product should do) to give good specifications. As a producer of high-end technical products, you might be able to give such good specifications - for many developers, their customers are their company's marketing droids or external customers, and they don't have the required experience. As a developer, you can do the best you can with the material you have - but don't promise perfection!

Programming from specifications is like walking on water - it's easy when it's frozen.

Reply to
David Brown

That's interesting to know; I had only a "feeling" based on my own experience. Like most others, I suppose, I have become used to writing just labels with stub subroutines (functions, whatever) which return a "to be written" error status, so I can begin debugging as early as practical.

I would say my limit is between 100 and 1000 lines at once (VPA or 68k or, well, 6800/09/11 assembly, etc.), depending on how much of the effort is coding and how much is algorithm development. The latter can actually make the 100 look unreachable sometimes. The piece of code I had been preparing in my head for weeks (without having the hardware to run it on; I had to decide whether to build the hardware based on that code's feasibility, and it was as marginal as it gets) was the real-time loop of a TI DSP, where I had to make sure things would work within 10 cycles... they did.

More basically, I would say that if I don't have a binary to run after the first day of coding, things don't look good :-). (I have not thrown away many days' typing because of that, but not so few, either.)
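(As a minimal sketch of that stub approach -- the names and status codes here are invented for illustration: unfinished routines return a "to be written" status, so the surrounding code can be run and debugged from day one.)

    #include <stdio.h>

    #define E_OK    0
    #define E_TBW  -1   /* "to be written" -- routine is still a stub */

    /* Stub: compiles and links now, gets a real body later. */
    static int read_sensor(int *value)
    {
        *value = 0;      /* placeholder result */
        return E_TBW;    /* caller can see this path isn't done yet */
    }

    int main(void)
    {
        int v;
        if (read_sensor(&v) == E_TBW)
            printf("read_sensor: to be written\n");
        return 0;
    }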

Dimiter

------------------------------------------------------
Dimiter Popoff               Transgalactic Instruments

formatting link

------------------------------------------------------

Reply to
Didi

Indeed, Lua is not C. It is basically an interpreted scripting language (although you can have bytecode compilation, like Python or Java), implemented as a library to be linked into your C program. It's easy to make your own modules that expose your C functions and data to Lua. The library is quite small (especially with the patches to use integers instead of floats for all numbers) - if you have a 32-bit micro, you should have plenty of power even with just on-chip memories. Connect the Lua interpreter to a serial-port interface, and you've got an extensible, programmable command-line interface into the heart of your program. I've done this a couple of times - I should get in the habit of doing it more.
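The embedding boilerplate is small. A minimal host looks roughly like this with the standard Lua 5.1 C API (the led() function and the script are invented for illustration):

    #include <stdio.h>
    #include "lua.h"
    #include "lualib.h"
    #include "lauxlib.h"

    /* A C function exposed to Lua scripts. */
    static int l_led(lua_State *L)
    {
        int on = lua_toboolean(L, 1);              /* first argument */
        printf("LED %s\n", on ? "on" : "off");     /* stand-in for real I/O */
        return 0;                                  /* no results pushed */
    }

    int main(void)
    {
        lua_State *L = luaL_newstate();
        luaL_openlibs(L);                  /* standard libraries */
        lua_register(L, "led", l_led);     /* expose led() to scripts */
        luaL_dostring(L, "led(true); led(false)");
        lua_close(L);
        return 0;
    }

Feed it lines from a serial port instead of a fixed string, and you have the command-line interface described above.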

Reply to
David Brown

"David Brown" schrieb im Newsbeitrag news:45cae1cb$0$24618$ snipped-for-privacy@news.wineasy.se...

If you want to do it in Forth with a smaller footprint than Lua's, have a look at FICL.

formatting link

Reply to
Andreas Kochenburger

... snip ...

Many moons ago I had to develop some software to handle an existing mechanical and chemical monstrosity. I rolled up a system and recorded a file with about 15 minutes of events, each one time-stamped. Then I went away and developed the software.

It replaced the various interrupt routines with a 'read next event' function, which then simulated the occurrence of the appropriate interrupt. I developed the whole system based on this nicely repeatable input. Then I added the interrupt routines, which were all simple and used the internal time-stamping mechanism. Everything worked on first roll-out. Everything was developed top-down.

Note that I could easily bugger the input file (actually a copy) to simulate various faults.
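A skeleton of that arrangement might look like this in C (the types, names, and event format are my reconstruction from the description, not the original code):

    #include <stdio.h>

    /* One recorded event: a time stamp plus which "interrupt" fired. */
    struct event { unsigned long t; int irq; };

    static void isr_uart(void)  { /* same handler as the real build */ }
    static void isr_timer(void) { /* ditto */ }

    /* During development this replaces the interrupt hardware:
       read the next recorded event and invoke the matching handler. */
    static int read_next_event(FILE *log)
    {
        struct event ev;
        if (fread(&ev, sizeof ev, 1, log) != 1)
            return 0;                 /* end of recording */
        switch (ev.irq) {
        case 0: isr_uart();  break;
        case 1: isr_timer(); break;
        }
        return 1;
    }

    int main(void)
    {
        FILE *log = fopen("events.rec", "rb");  /* or a doctored copy */
        if (!log) return 1;
        while (read_next_event(log))
            ;                         /* run against repeatable input */
        fclose(log);
        return 0;
    }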

--
 
 
 
 "A man who is right every time is not likely to do very much."
                           -- Francis Crick, co-discoverer of DNA
 "There is nothing more amazing than stupidity in action."
                                             -- Thomas Matthews
Reply to
CBFalconer

I recall a thread somewhere that discussed the origin of this "rule". That conversation traced it to a paper by one of the participants in the thread, who said his original paper had to do with perception, not thinking. He claimed that the human senses can easily distinguish approximately seven levels of volume, tone, color, and so on. That is not to say that we can only see seven colors, but if you were to attach colors to objects or menus or the like, using many more than seven makes them hard to distinguish.

I'm not disagreeing that seven is a good number for things you can hold in your head short-term; I am just saying that I have not seen a source for this claim. I'm pretty sure the magic number for me is a bit less, perhaps five. That may be why I have trouble with the stack when I program in Forth and have to make extensive use of stack notation.

Reply to
rickman


I would love to use Forth, but right now there are no Forth vendors that support the ARM Cortex-M3 chips. I took a look at using FICL on an embedded target, and it would take a fair amount of work to port it to such chips. Like most open-source Forths, FICL is set up to run on a PC host. It has been a few years since I looked at FICL, so if I am mistaken, please let me know. But I don't think FICL is really intended for embedded use unless you are embedding a PC-like device.

Reply to
rickman

That is very optimistic. Most people can't keep track of even a single thing :)

VLV

Reply to
Vladimir Vassilevsky

...

Whatever happened to InstantC? I have a copy for DOS available to anyone who wants it. Just ask.

Jerry

--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
Reply to
Jerry Avins
