TI MSP430

Your comments are (as far as I can tell) factually correct, but the reaction that springs to mind is "so what?". As an embedded programmer, I really do not care how a particular embedded micro compares to an old minicomputer cpu (other than for historic interest, of course - in which case it definitely is interesting). And as for possibly misleading marketing from TI - it's not exactly new or unusual!

What is much more relevant is whether the register set and addressing modes of the msp430 really are appropriate for their target applications, or whether they would have been better off with the PDP-11 arrangement. I'm far from convinced - certainly, the example you gave (PC-relative CALL) is obscure indeed, and I think the benefit of more registers well outweighs this missing feature.

One thing that is definitely missed, however, is all four addressing modes as the destination for two-operand instructions. At the very least, there should have been a hack in the MOV instruction to allow @Rn and @Rn+ modes in the destination.
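
To see the cost, a simple word-copy loop today needs an extra instruction just to step the destination pointer. Something like this (only a rough sketch, and the register choice is arbitrary):

        ; copy r6 words from @r4 to @r5 on the msp430
loop:   mov.w   @r4+, 0(r5)     ; source auto-increment is allowed...
        incd    r5              ; ...but the destination must be stepped by hand
        dec     r6
        jnz     loop

With an @Rn+ destination mode, the mov and the incd would collapse into a single word, as they do on the PDP-11 with MOV (R0)+,(R1)+.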

As for the missing PDP-11 addressing modes, they are not such a great loss. The indirect modes are almost entirely superfluous when you have enough registers to keep pointers in registers rather than in memory or on the stack. It's not often that pointers to pointers turn up, at least not in embedded programming. Auto-decrement modes are nice, but how often are they used in practice? *(p++) far outweighs *(--p), as long as you have a stack pointer and push/pop instructions. Perhaps it would be a useful mode for MOV, but not otherwise.
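
And in msp430 terms the stack pointer already covers the pre-decrement case that actually matters, e.g. (again, just a sketch):

        push.w  r7              ; *(--sp) = r7
        ; ... code that clobbers r7 ...
        pop.w   r7              ; r7 = *(sp++); pop is emulated as mov @sp+, r7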

So if you want to say that the msp430 is not as close to the PDP-11 as TI marketing seems to think, then I fully agree. But if you think that's a bad thing, then I disagree.

Best regards,

David

Reply to
David Brown

Many years ago DEC was giving away sample chips with a PDP-11 on them. I got one and started to design a small system round it, but never got round to building it.

Leon

Reply to
Leon

IIRC that was three chips :-) PMOS at 12 V ?

Reply to
CBFalconer

Actually, I am within the age range for AARP, but have opted not to join. However, some of this discussion is of things that were definitely before my time.

Reply to
Gary Reichlinger

No, the 6100 and its successor, the 6120, were one chip implementations of the PDP-8.

The FAQ is here,

formatting link

And a modern PDP-8 can be found here,

formatting link

Reply to
Roberto Waltman

hehe. No problem.

I did point that out but that wasn't the point, which it appears I didn't communicate well enough on the page. My fault.

I completely disagree. There is no question in my mind that the price paid was too high -- for embedded applications, I mean.

The destination side is terrible, and it leads to expanded code size as well as slower execution on normal, common tasks. I've weighed various algorithms (I write a LOT of assembly code in my practice), hand-writing them both for the existing MSP-430 and for a hypothetical MSP-430 closer in instruction design to the PDP-11, and it's almost always the case that the PDP-11 arrangement pays off in spades.

They made the wrong choice. IMHO, of course.

Well, I use the MSP-430 and like it. It's just sad to see how badly an early choice demolished what might have been. Especially because it wouldn't have required the invention of anything new at all, and also because there is ample code experience which, in my opinion, argues very well against this choice. The additional advantage of having extra registers is more than offset by the damage caused by the loss of addressing modes. The reduction in memory spills doesn't pay for itself.

Jon

Reply to
Jonathan Kirwan

That's pretty much true for any uC you choose to analyse. There are always things they "could have done better", or areas where a core has been pushed into applications the original design brief did not cover. The best cores are those designed to be microcontrollers from the ground up.

Remember the MSP430 is quite old, and so the silicon yield and design cost of 'just another mode' will have been quite high. It was more important for them to meet targets in die area and price, and ROM is one of the cheapest areas in a die design. Assembler will have dominated.

Still, if there are large savings to be made with your changes, then perhaps a soft CPU for an FPGA would be one way to actually implement them? Then you have a delay in software support as it catches up to the new features.

-jg

Reply to
Jim Granville

The PIC is a completely different creature. Its Harvard architecture makes the choice of an arbitrary instruction width (12/14/16 bits) on an 8 bit micro possible and quite normal.

The PIC gets a lot of heat in these arguments but it is effective, simple, tough, inexpensive, well represented, consistent in supply and widely available, making it a very good choice for embedded systems. Yes, coding it can be ugly, but it is extremely successful and popular due to the above points. The later PIC18Fxxxx devices code quite well.

I have yet to hear a PIC designer say "it took me longer because of the architecture". Just never happens. Other issues have far greater influence on design success than architecture in most cases.

-Andrew M

Reply to
Andrew M

I agree entirely on the issue of the destination addressing modes (especially for MOV - it's not nearly as important for other instructions). As for the other differences, I'm not convinced (although the fact that *you* are convinced is an influence). I'd have to look at it in a lot more detail some time.

The trouble is, 16 bits is not quite enough to make a good instruction set, at least not with 16 registers. With 18 bits, you could get the best of both worlds. Of course, that leads to complications during program updates, and could cause marketing folks to lose their tenuous grip on reality (like Microchip's "14-bit microcontroller").

Reply to
David Brown

Interesting that the TMS9900 DOES have these modes in the destination of a MOV instruction!

MOV *R1+,*R2+ is very common...

Reply to
anoneds

... snip ...

Hardly obscure. This, together with PC relative jumps, is what makes object code intrinsically relocatable, and I consider it a valuable feature. Now it becomes trivial to swap code segments in and out as needed.

Reply to
CBFalconer

I think a lot of it is what applications you are working on. For bit oriented control, I like the PIC. Timing loops are particularly easy since all instructions take the same time to execute. If you are doing complex calculations, handling a lot of long character strings, or table lookups then you should probably use something else.

Reply to
Gary Reichlinger

But that's the point, David. If you've already taken the decision to use a 16-bit CISC instruction word and you plan on supporting dual-operand instructions, the price they paid to force (shoe-horn) 16 registers into a PDP-11-style 16-bit general concept was simply too high.

I can't say what the designer of the MSP-430 actually considered in laying out the design. But looking at the result, it has some of the earmarks to me of "come hell or high water, we will have 16 registers here" and "let the resulting chips fall where they may." You can see these sadder compromises in several places. By comparison, the PDP-11 shows a carefully crafted and balanced design at each and every turn.

That said, there are some features that are _enabled_ better by having 16 "registers." One of them is the concept of a constant generator. And I think that was a good choice, once they had decided to have 16 permutations available. On the PDP-11, with only 8 registers, that choice would have been somewhat more expensive to consider and would have required more time to carefully balance, looking at real code and seeing where the benefits and costs might take them.

But even on the subject of the constant generator, the MSP-430 didn't implement that in a fashion which speaks to me of a crafted design -- but instead, as one that was patched together more as a chimera of various "ideas." You might think that using a constant generator as a destination would simply throw away the result and capture the expected side effects of the source operand. For example,

mov @sp+, #0

might be coded up using destination mode 0 with an R3 destination. One might expect that this would simply pop the stack and throw away the result. Certainly, one would expect that the autoincrement would still take place. But it doesn't, when executed. Frankly, this is an oversight -- not a matter of crafted design. And only one of many that are found on the MSP-430.
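
For contrast, here is where the constant generator clearly does earn its keep (a sketch, from memory of the encodings):

        mov     #0, r7          ; #0 comes from R3 (CG2) -- a one-word instruction
        add     #8, r9          ; #8 comes from R2/SR (CG1) -- also one word
        cmp     #-1, r11        ; #-1 from R3 again; no immediate extension word

That part was well worth having. It's the corner cases, like the pop above, that look unfinished.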

And this points up the decision to shoe-horn in 16 registers -- I think they just let the chips fall where they may and didn't do a thorough, considered design. By comparison, every detail of the PDP-11 instruction set screams out the care and craftsman-like attention to each and every detail.

Jon

Reply to
Jonathan Kirwan

Are you mixing up position-independent code and relocatable code? When I was writing Macintosh applications in the 1980s, you could generate position-independent code -- where ALL references were PC-relative or relative to a base address derived at run time. Relocatable code was simply code that used some mechanism to allow a linker or loader to modify those addresses in the code that required adjustment depending on where the code was loaded. IIRC, the Mac programming conventions dedicated one processor register to point to a set of global variables -- the phrase 'above A5' is floating in the misty sea of memory. ;-)

IIRC there were limits on the segment size of position-independent code, probably +/- 32767 bytes away from the PC.

Mark Borgerson

Reply to
Mark Borgerson

OTOH, perhaps the designers of the first MSP430 chips faced constraints on the instruction set based not on the desirability of the instruction operation, but on their ability to implement the instruction within the die area and power limits that they planned to meet. I would guess (perhaps simplistically) that not all instruction options require the same silicon area or interconnect complexity.

It might also be interesting to compare the number of clock cycles for some destination modes (@R3+ ) on the PDP11 or M68K with the number of clock cycles for two instructions on the MSP430.

Mark Borgerson

Reply to
Mark Borgerson

Also worth noting is that the MSP430 was designed pre-FLASH, when on-chip EPROM and RAM cost a lot and used a lot of power. Hence the larger register set, to avoid regular ROM and RAM accesses.

-Andrew M

Reply to
Andrew M

There are cases where I simply cannot accept the above vague excuse (under which probably any defect could be swept). The missing pc-relative function call happens to be such an oversight, and it could have been remedied very easily in the design without impacting combinatorial delays and cycle time, power, or die size. They already had a pc-relative adder with a path back to the pc register for other reasons, and enabling it for the call instruction would have been a very minor modification to the execution control section, very likely without any size or power impact -- just some more careful thinking _before_ starting on the implementation. And there are other choices that also make the MSP-430 design weaker and likewise couldn't have been made for die or power reasons, IMHO.
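
Just to make the cost concrete, a position-independent call today has to be pieced together by hand -- something along these lines (r10 is merely a scratch register picked for the example, and '$' stands for the assembler's location counter):

        mov     pc, r10         ; r10 = address of the next instruction
        add     #func-$, r10    ; add the assemble-time distance to func
        call    r10             ; four words and a scratch register, where
                                ; the PDP-11 gets by with  JSR PC,func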

Not all do. But I'd rather we deal with specific issues and have you or others point out specific, detailed arguments about those. So, for example, take the case of the 'mov' I mentioned in an earlier post or else take the case of the lack of a pc-relative function call and make a clear case for die area or power limitations, etc., that deals with these specific cases. I'll add some more thoughts of my own, just to add fuel to the fire, from my own limited cpu design experiences and I'll then bring in other oddities on the MSP-430 that look more like ad hoc design and less like crafted engineering trade-offs.

I've no idea what you expect to gather from such a comparison. But definitely have at it, I say.

Several innovations for the PDP-11 came from earlier work at CMU -- such as the idea of greatly increasing the generality of applying general purpose registers in the 8 modes. A bigger struggle for the PDP-11 instruction design team was probably to free up opcode space. In the end, I think they did pretty good. And the places they got it wrong, seem to be the same places that the MSP-430 also failed to avoid (I'm thinking about extending the address space reach, for one example.)

For those interested in some of the history of not just the PDP-11 but of other computers, there is this site from one of those intimately involved (Gordon Bell, who was the VP of Engineering at DEC at some point):

formatting link

For Lewin, should he even read this and still be working out the details of his PDP-1, the above page also includes the 1960 and 1961 manuals for the PDP-1, as well as its I/O manual.

Jon

Reply to
Jonathan Kirwan

"A5 worlds".

Among various abominations in earlier versions of the MacOS: use of Pascal strings.

Reply to
larwe

In terms of the PDP-11 (did it count as a mini or a mainframe?), making code position independent and/or relocatable is important, and greatly adds to the flexibility of the architecture. In terms of a small embedded processor, running a single statically linked program with no paging or other virtual memory arrangements, relocatable or position independent code is irrelevant, and any space (in the instruction set space, or die space) used for it is wasted. Similarly, addressing modes involving double indirection might be extremely useful for implementing virtual methods in an object oriented language, but very rarely used in C or assembly, which are the typical languages of choice on the msp430.
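
And when double indirection does turn up, two plain instructions cover it. A sketch, assuming the usual layout where the vtable pointer sits at the start of the object (r12 = object pointer, r15 = scratch):

        mov     @r12, r15       ; fetch the vtable pointer from the object
        call    2(r15)          ; call through the second vtable slot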

I'm certainly not going to argue that the msp430 ISA would have worked better for PDP-11 machines, nor that the msp430 is the ideal ISA - just that it might be better for embedded applications than the PDP-11.

Reply to
David Brown

There may be some empirical evidence to support that. IIRC, no one has implemented a single-chip PDP-11 that runs on 50 milliwatts. ;-) (The PDP-11 should need a lot less, without all those ADCs, SPIs, timers and UARTs!)

Mark Borgerson

Reply to
Mark Borgerson
