Any ARMs with hardware divide?

Hi - does anybody know of any ARMs with a built in hardware divide? I heard that a new core coming out would have a built-in hardware divide, but I've been unsuccessful in finding out which core that is and if any chips have that core. Thanks,

-Michael J. Noone

Reply to
Michael Noone

You could be thinking of the Cortex core? Their promo material says this:

"The exceptional performance of the ARM Cortex-M3 processor is achieved through a highly revised architecture that also implements many new technologies in this type of core, such as hardware divide and single cycle multiply."

No, it is not binary compatible.

Anyone working on this silicon is in stealth mode right now...

-jg

Reply to
Jim Granville

Cortex defines 3 families of cores - the M3 is just the first announced in the deeply embedded/microcontroller space.

Why not? It depends on your definition of "binary compatible". A purist definition would imply no ARM chip is binary compatible with another because most chips have different peripherals, or use a different way to clean the cache etc. You'll always need to port your OS and drivers to a new CPU, and the M3 is no different in this respect.

However a commonly used definition is that it implies user mode compatibility. The M3 uses the Thumb-2 instruction set which is backwards compatible with Thumb, so existing compilers and objects will continue to work (as long as they don't contain ARM code). Of course to get the maximum benefit of the M3, you'll want to recompile with a good Thumb-2 compiler - but again this is no different from any other new core.

Wilco

Reply to
Wilco Dijkstra

You cannot compile an ARM program in pure Thumb mode. You need some ARM-mode stuff as well, so no ARM/Thumb compiler would work off the shelf with the Cortex. Also, no operating system would work with Cortex. Very few semiconductor vendors have licensed Cortex cores. You will have to invest in a new set of tools to start with the Cortex, so it will have no special benefit over any proprietary architecture.

--
Best Regards,
Ulf Samuelsson
This is intended to be my personal opinion which may,
or may not be shared by my employer Atmel Nordic AB
Reply to
Ulf Samuelsson

"Ulf Samuelsson" wrote in news:Cibee.54509$ snipped-for-privacy@nntpserver.swip.net:

Why ever not? A user land program doesn't need to have access to the CPSR! You can still SWI from Thumb, and the OS Libs could have Thumb entry points. Sure it takes some careful thought, but that thought has been around for a long time under the name "interworking".

No "compiled for ARM" OS would work for Cortex, and no OS could be compiled for Thumb because of the need for mode changees and Exception handling (inc Reset), However this is a long way from saying that an OS cannot work with Cortex, it simply needs some basic porting work. There is still value in keeping much of the toolchain, OS and apps compiled in the same way.

Will

Reply to
Will

Of course you can. Obviously, code running on bare metal will need some level of porting, because exception handlers previously had to be in ARM state. However, code running under an OS should not.

It also should be pointed out that this whole thread applies only to the Cortex-M series which is designed for the deeply embedded space. The Cortex-A and Cortex-R series support the ARM instruction set as well as Thumb-2.

This comment applies to almost any new processor produced by ARM ever. Most OSes are complex beasts and invariably require some level of porting to a new core.

-p

--
 "What goes up must come down, ask any system administrator"
--------------------------------------------------------------------
Reply to
paulg

So are you saying the -A and -R variants can safely swallow libraries created for ARM7 cores - i.e. that ALL Cortex extensions are in the 'opcode gaps' of older cores?

Point to ponder: if the Cortex core is so wonderful and efficient, why do the -A and -R versions need ARM opcode support? Sounds a bit like a reflex oops-fix?

Will this not get very confusing in the field, since some Cortex cores are binary compatible, and some are not, and some have smaller die, and some do not... ?

ARM seem to have lost their way a bit on handling this one....

-jg

Reply to
Jim Granville

Yes - just like all previous extensions.

Despite the success of Thumb, ARM code is still used a lot in many markets (WinCE and Linux are almost exclusively ARM for example), so dropping all support for the ARM instruction set would be a bit stupid, don't you think?

The markets Cortex-M is aimed at almost exclusively use Thumb because code size is essential there. ARM is typically only used in a few routines that cannot be written in Thumb. Thumb-2 solves this problem (most ARM code can be reassembled to Thumb-2), thus making ARM "unnecessary baggage" in a tiny microcontroller. If you think about it, it all makes sense...

And how exactly is this different from today? All existing cores have different die sizes, power consumption, architectures, micro architectures, coprocessors, cache architectures etc. If you're not confused by the existing CPUs then you shouldn't be confused by Cortex.

The key is that you can always upgrade to a bigger and better core (eg from ARM7tdmi to ARM9E to ARM1156T2). You can sometimes literally use old binaries, but this will not result in the best possible performance. In some cases you'll have to do an OS port (eg. due to different cache architecture), but in all cases a recompile is highly desirable to make optimal use of the new instructions and micro architecture.

The Cortex family is very similar: there will be multiple CPUs at different performance levels within each of the A, R and M strands, and these will be binary compatible (ie. no recompile needed). You can also move between them if you wish, but this obviously requires more effort (just like moving from ARM7tdmi to ARM1156 would).

I think you're a bit confused...

Wilco

Reply to
Wilco Dijkstra

Yes you can. A program can be a Thumb-only application, a library or a DLL or similar. The OS interface doesn't change for new hardware - a basic principle of OS design!

With the ARM1156T2-S however it is possible to create a Thumb-only system, including the OS.

Yes until Thumb-2 you generally need some ARM code in your OS (but not necessarily in your application).

Operating systems need to be ported to new CPUs. What's new?

Perhaps Cortex is so new that none of the CPUs have been released yet?

No, existing tools will continue to work fine for Cortex. For Cortex-M you'll need Thumb-2 capable tools of course, but these either exist today or are close to being released by your favorite compiler vendor. Cortex-A and -R can continue to use any existing tools of course.

Wilco

Reply to
Wilco Dijkstra

Don't these two claims conflict a little ? In one you state dropping ARM would be stupid, then in the next, you call it unnecessary baggage ?

As to the debate about the usefulness of ARM code on a uC, you could look at the very recent posts in c.a.e by Messrs Pelc and Schwob, "Re: CPU selection", where they show that the Philips 128-bit FLASH fetch has moved the goalposts a bit on this one.

I did note that ARM's 'benchmarks' to justify the Cortex focus on narrow-bus systems, but there ARE very small uC shipping with wide buses...


Err What ?! [Who is confused here ?]

Earlier in this thread, you stated " ... so existing compilers and objects will continue to work (as long as they don't contain ARM code). "

So, exactly what DOES happen when a Cortex M3 encounters an ARM (not thumb) opcode ?

If it chokes, it is not binary compatible. Very simple.

I'd agree that the _situation_ is confusing....

It would probably help if ARM used a different family name for cores that are NOT binary compatible.

I can understand marketing want 'a bob each way', but there is no grey area in "binary compatible".

-jg

Reply to
Jim Granville

Doesn't Thumb-2 just add to Thumb what is needed to do these things?

I can't speak for others, but I am pretty sure, I'll have our RTOS ported to a new core within a week or less (and this is pure assembly).

AFAIK Thumb-2 is just an extension to Thumb, or am I wrong? So the new opcodes could easily be hand-coded at the beginning.

--
42Bastian
Do not email to bastian42@yahoo.com, it's a spam-only account :-)
Use @monlynx.de instead !
Reply to
42Bastian Schick

The drive to Thumb is for code size. This is important in PDAs and mobile phones. For systems which require good interrupt response, ARM code is needed. Thumb-2 attempts to resolve the conflict. For our compilers, Thumb-2 appears (for lack of official docs) also to resolve some issues in Thumb code generation.

I have no idea how much silicon the Philips MAM occupies. It does an excellent job of marrying slow Flash to an ARM core.

The Atmel single-chip memory appears to be optimised for Thumb code.

Stephen

-- Stephen Pelc, snipped-for-privacy@INVALID.mpeltd.demon.co.uk MicroProcessor Engineering Ltd - More Real, Less Time

133 Hill Lane, Southampton SO15 5AF, England tel: +44 (0)23 8063 1441, fax: +44 (0)23 8033 9691 web:
formatting link
- free VFX Forth downloads
Reply to
Stephen Pelc

Jim Granville wrote in news:427abd00$ snipped-for-privacy@clear.net.nz:

Ford makes cars which vary from the quite small to the very large; the reason is so that people who want a small car can buy a Ford, and people who want a big car can also buy a Ford. People don't find a product range confusing; they like to buy something that fits their purpose, and if Ford doesn't make it they'll move to VW.

Compare and contrast

In a wide-bus uncached system the performance benefits are going to be very slight over straight ARM code, but the code is considerably smaller.

Picking a few examples where the system has been architected around an existing chip to achieve maximum performance in no way proves that there isn't demand for a new chip which runs faster with fewer I/O requirements and less power consumption.

Yes, it chokes. But in a world where all the code is available, why not change the build attributes? Even in a world where you download apps, those apps can be Thumb state, and will still work just the same.

You mean like Cortex-A, Cortex-R and Cortex-M? I wish someone in ARM had thought of that (checks website). Oh, they did.

I disagree.

Reply to
Will

And these will be made available to existing ARM tool customers free of charge? One reason why people selected ARM was that they did not want to continue to spend money on switching tools.

--
Best Regards,
Ulf Samuelsson
This is intended to be my personal opinion which may,
or may not be shared by my employer Atmel Nordic AB
Reply to
Ulf Samuelsson

Ah, but if you look at ARM's annual reports you'll see that in recent years they've made as much money from tools sales as from chip licencing. Making chips you need a new compiler for fits neatly into that business model.

--
Kevin Bracey, Principal Software Engineer
Tematic Ltd                                   Tel: +44 (0) 1223 503464
3 Signet Court, Swann Road,                   Fax: +44 (0) 1728 727430
Cambridge, CB5 8LA, United Kingdom            WWW: http://www.tematic.com/
Reply to
Kevin Bracey

You did read their numbers ?

Of course, there is always demand for faster, lower power devices.

What is under debate is the need to lose binary compatibility to achieve that.

I see nothing wrong with what Philips did to improve the performance in a way invisible to the users. It is a clever way to turn ARM opcodes from a liability into an asset. Given the direction of the process ceilings and on-chip memory, it is a logical extension - other uC are also getting wider on-chip buses.

Wilco Dijkstra wrote:

Good, so we have established it is NOT binary compatible.

No smiley? - A world 'where all the code is available' is, unfortunately, not the REAL world. Code comes from many directions, and is expected to work on many platforms.

...until one of those apps actually contains an ARM opcode and the chip chokes. "Oh, gee, it worked fine on the other ARM?"

Will ARM give out free 'Cortex-M choke-danger detectors' ? :)

Wilco Dijkstra wrote:

These are not different family names, but suffix letters.

Philips also tried to coat-tail the XA51 onto the C51 market and gloss over the binary incompatibility with very similar "Just recompile" and "Oh, most apps are in C anyway" lines. They also released a C51 core variant with removed opcodes; it seemed safe at the time, but it broke code that worked...

Intel's MCS251 and also the Itanium are not glowing successes... Zilog has examples too...

Thus history, and the users, will judge just how important binary compatibility is - not marketing departments.

So far, history proves to be quite intolerant of not-binary-compatible options that cause admin and version-control grief, and force users to carefully check "Now, _which_ ARM did we use in that model? - was it that Cortex-M?"

On which point ?

Many ideas in Cortex are very good, and fix the shortfalls in the ARM for embedded control, but I fear ARM looks to be repeating the mistakes of history, by not learning from it....

Will we find that the Cortex-M quietly gets 'de-emphasised' ?

-jg

Reply to
Jim Granville

If you use GNU yes - it's still free last time I heard. However if you want support for the latest and greatest CPUs and you want it *now* then you'll have to pay for it. In what way is this different from the introduction of Thumb-1, Thumb-2, DSP, VFP, Media or any other architectural extension?

There is a big difference between upgrading and switching tools. Upgrading is typically an order of magnitude cheaper (cost and effort wise) than switching to a new toolkit. The ARM tools business is healthy with over 20 compilers available, so you can choose whatever suits you best. Note that saving money on the cost of a toolkit is false economy in most cases - even a small improvement in the per-unit-cost/feature set of a product or programmer productivity will pay for it.

Wilco

Reply to
Wilco Dijkstra

... snip ...

Another example of fouling binary compatibility is the Rabbit, which could easily have preserved Z80 binary compatibility. This immediately cost them access to a great wealth of pre-existing software.

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
   Available for consulting/temporary embedded and systems.
     USE worldnet address!
Reply to
CBFalconer

It's not the same situation. A processor could have been made that preserved absolute binary backwards compatibility and had the new features. The problem is that it would take so many gates to implement this that the resultant core would be far too big and power-hungry to sell into the deeply embedded market.

ARM has broken backwards compatibility before; for example, the current crop of processors do not execute 26-bit code.

-p

--
 "What goes up must come down, ask any system administrator"
--------------------------------------------------------------------
Reply to
paulg

Without specific hard numbers, this is hard to verify.

A common problem with core-focused design is that it loses the bigger picture: the +5%..+15% (or whatever) of extra gates is a much lower % of total die when you add RAM, FLASH and peripherals (which may include a DSP) - AND you _can_ find that those extra gates give SMALLER CODE space, so the TOTAL die size can be smaller.

The difference is often swallowed in a single die shrink, anyway.

There are also other ways, in a system design, to preserve binary compatibility - e.g. some CPU vendors use SW traps on the deprecated opcodes that call emulation routines in ROM (already there for boot load and ISD), so the CPU does not choke on the phased-out opcodes. The Crusoe processor is an extreme example of SW opcode emulation.

-jg

Reply to
Jim Granville
