Any ARMs with hardware divide?

There is a nebulous complexity border below which it is not worthwhile to implement such (or sufficiently complex) emulations. If one has a machine with 100 bytes of data storage and 1000 bytes of opcode storage, it is hardly worthwhile emulating another opcode at all. In addition, some things are just not emulatable - for example the XTHL instruction on the 8080 and Z80, which exchanges the HL register pair with the top of stack. This can be used to manipulate the stack with absolutely no loss of register information, and I found it to be very handy. It simply cannot be emulated, because any such emulation requires a temporary storage location, whose previous content is lost. This meant I could not safely port my 8080 assembly code to the 8086 family and use it in interrupt service, etc.
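
By way of illustration, a minimal C sketch of the problem (hypothetical register model, illustration only):

    #include <stdint.h>

    /* Hypothetical 8080-style state. */
    static uint16_t hl;    /* the HL register pair */
    static uint16_t *sp;   /* the stack pointer    */

    /* XTHL does this in one instruction, disturbing nothing else.
       Any emulation needs the scratch variable below, and on a real
       core that scratch lives in a register or memory cell whose
       previous content is destroyed - fatal in an interrupt service
       routine that must preserve all state. */
    static void xthl_emulated(void)
    {
        uint16_t tmp = hl;   /* <-- the unavoidable temporary */
        hl = *sp;
        *sp = tmp;
    }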

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
   Available for consulting/temporary embedded and systems.
     USE worldnet address!
Reply to
CBFalconer

Yes, the core is an ever-shrinking part of the total die size; however, the point is that a few thousand gates are equivalent to a few KBytes of flash/SRAM (which is much denser). On the M3 the gates have indeed been spent to improve codesize and performance. Adding ARM doesn't make any sense: performance is worse, codesize is worse, the programming model is more complex, etc. Just removing some of the exception registers saves thousands of transistors - registers, especially if multiported, are quite expensive.

Yes, but MCUs are typically made on older processes. Even 0.18u would be quite modern.

No CPU I've ever heard of "chokes" - they will always trap so that you can emulate if needed. The Linux kernel traps unaligned memory accesses and emulates the instruction to get the desired behaviour (as if you were running on a v6 core, which supports unaligned access). The VFP uses an emulator to get full IEEE support (the hardware traps on difficult operations). This is standard stuff on just about all modern architectures.
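
A sketch of what such a fixup boils down to for an unaligned 32-bit load (little-endian assumed; the byte-assembly approach is standard, the function itself is illustrative):

    #include <stdint.h>

    /* Service the faulting access with four byte loads reassembled
       into a word - byte accesses are always aligned. */
    static uint32_t load32_unaligned(const uint8_t *p)
    {
        return (uint32_t)p[0]
             | ((uint32_t)p[1] << 8)
             | ((uint32_t)p[2] << 16)
             | ((uint32_t)p[3] << 24);
    }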

Wilco

Reply to
Wilco Dijkstra

It will be interesting to see what the chip vendors themselves consider makes more sense, as devices using Cortex eventually appear on the merchant market.

eg If I were an Atmel/Philips/ST etc, imagine these pitches:

a) With this model, you get 10K (or whatever) more FLASH, but you lose binary compatibility, you will need new tools and a full code requalification, and you need to carefully separate your different ARM flows. But don't worry, everyone has complete source code control, and the new tools will be fully bug-aligned with your existing ones.... [Yeah, right]

b) With this model, using a 'better Cortex' core [A, R, new spin?], you get slightly less FLASH, but you can use existing tools, existing qualified 'non-bios' library code is fine, and you can migrate key performance areas, and new designs, to better tools as you see fit. You can also very quickly test, evaluate and compare, using existing tools and code.

If you want to take the performance gain purely from our new XYX process, that's fine by us too. It is rare to use 100% of the peripherals on these uC, so designers already understand the benefits of standardised/higher-volume devices.

Small test: which pitch are AMD (and now Intel) using very successfully with their move from 32-bit to 64-bit cores?

What new spin could ARM apply? Well, they could re-define the -M3/deeply embedded as meaning very high volume ROM ASICs (etc.), excluding FLASH general-purpose microcontrollers, and create a new -F3: a Cortex optimised for embedded-FLASH uC and wide, portable software applications. Then the IC vendors, and their customers, can decide for themselves which feature trade-off is more important.

-jg

Reply to
Jim Granville

Yup, the 80C51 has an XCH opcode too, very nice for that lowest-level stuff. Found in the better microcontrollers ... :)

-jg

Reply to
Jim Granville

Many customers select ARM because it is a de-facto standard. Cortex isn't, so the value of the Cortex core is significantly lower than that of an ARM even if it is superior. There is no inherent benefit in selecting a non-standard core just because it is provided by ARM.

Cortex will only be valuable if major customers see a benefit in the extra Cortex functionality, but those customers would then probably consider everything else on the market as well.

Why would anyone want to put a Cortex core in a cellular phone/PDA if that means the applications already developed risk breaking?

--
Best Regards,
Ulf Samuelsson
This is intended to be my personal opinion which may,
or may not be shared by my employer Atmel Nordic AB
Reply to
Ulf Samuelsson

Fully agree, but some Cortex variants are compatible, and some are not.

So, imagine a hypothetical one, done properly by ARM, so that it has all the nice features of Thumb-2, BUT also operates (choke-free) on any ARM opcodes that may arrive (reduced speed is fine, especially if that saves silicon).

[ ie new design weighting applied, so as to be smaller than the -A, -R variants, but not as broken as the -M variant. ]

Surely THAT would have to interest the ARM uC vendors like Atmel, Philips, ST, Analog Devices, etc? Once one of them had it, the others are rather forced to play catch-up? ie then the value of the Cortex_uCFIX core becomes higher, and the older ARM7s are the ones of significantly lower value....

We will see how this plays out over the next 18 months..

-jg

Reply to
Jim Granville

No one would put a Cortex-M3 in a phone or PDA. It is not the market it is aimed at. They would most probably use a Cortex-A series core.

-p

--
 "What goes up must come down, ask any system administrator"
--------------------------------------------------------------------
Reply to
paulg

Do you have a reference? The only benchmark source I can find on Cortex-M3 is the comparison with other MCUs (1MByte):

formatting link

The reason I find this statement surprising is that in fact Thumb-2 works best on wider interfaces (>= 32-bits), as it uses 32-bit instructions. It is faster than ARM when using the same flash interface since it fetches less code.
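
A rough worked example, assuming (purely for illustration) a 50/50 mix of 16-bit and 32-bit Thumb-2 encodings:

    average Thumb-2 instruction: 0.5*2 + 0.5*4 = 3 bytes (ARM: fixed 4)
    per 32-bit flash fetch:      4/3 ~= 1.33 instructions (ARM: exactly 1)

So the same flash interface delivers roughly a third more Thumb-2 instructions per fetch.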

Yes, it looks promising, but without further details it is difficult to figure out why those numbers look suspiciously good. It's obvious that a prefetch buffer can hide the fetch latency in straight-line code. However typical code branches a lot and the latency of non-sequential accesses can only be hidden by a cache. Maybe that is what it does...

Note a wide interface will not only speedup ARM, but also Thumb-2.

I said that *within* families CPUs are 100% compatible. The paragraphs are consistent. To clarify with a detailed example:

Suppose we have 2 different Cortex-M cores: M3 and M4. These are fully binary compatible in that you should be able to run an M3 binary on the M4 and vice versa [1][2]. If newer versions provide the performance and features you want, then you'll never want to move to another Cortex family (ie. binary compatibility is a non-issue).

However say we also have an R5 core. You should be able to run M3 and M4 binaries on the R5 [1]. However you will need to do some more porting and recompilation to get the best out of the new CPU [3]. The same is true today when you move from an ARM7 to an ARM11.

Alternatively you can also run R5 binaries that have been compiled with downwards compatibility in mind (ie. no ARM code, no R5-specific features etc) on the M3 and M4. Doing this requires a bit of care of course, but no more than you need today for code that is designed to run on many architectures (eg. C libraries).

So moving *within* a Cortex family is generally trivial - you'll get full binary compatibility. Moving *between* Cortex families may require some porting and care to get full binary compatibility. In all cases a recompilation is highly desirable as the compiler can then optimise for that particular CPU.

So... where do you want to migrate to today? (tm)

[1] Of course this level of compatibility only applies to the instruction set - most MCUs have lots of peripherals which cause another level of incompatibility. For example, any code that runs on the AT91 series can't run on the LPC2000 series (or vice versa). Even with identical interfaces, one chip may have 2 timers and another 8. So if you use a purist definition of "binary compatible", no 2 chips are compatible.

[2] Of course while your M3 code runs fine on newer versions, the pipeline may be a little different, so your code doesn't run as fast as it could (unless you recompile it - you may not care, but your competitor might).

[3] You'll end up running with the caches disabled, as the M3 doesn't have a cache and thus has no code to enable it. So you're not getting full use of the new features - and the difference between potential and actual performance is likely much larger than [2].

We already knew that the M3 does not run ARM code natively, however it does run existing Thumb and Thumb-2 code, so it is binary compatible with that. So you could port your OS to the M3 (which is something you would have to do even if the M3 supported ARM), then relink your existing Thumb objects/libraries. If you did have any ARM objects without source you could disassemble them and reassemble for Thumb-2 without too much effort. Not 100% compatible, but close enough.

Also a key goal of the M3 is to aid migration of non-ARM 8/16-bit MCUs to the ARM world. The ARM world is totally incompatible of course, but if the gain is worth more than the cost, people will move. The M3 tries to lower the entry barrier as much as possible by removing features that cause new users trouble (like ARM/Thumb interworking and the OS model), and introducing features that make things easier (Thumb-2, DIV, faster interrupts, more flash for a given die size).

I'd expect tools to automatically detect incompatibilities:

(a) when linking (automatically select compatible libs, error if incompatible)
(b) when simulating/debugging an image
(c) when burning an image into flash
(d) when running on hardware (trap when executing an incompatible instruction)

This is basic stuff. You could even emulate unsupported instructions if you absolutely needed it.
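
A skeleton of (d), assuming a conventional undefined-instruction trap; the frame layout and helper names are hypothetical, not any particular vendor's API:

    #include <stdint.h>

    struct trap_frame { uint32_t pc; uint32_t r[15]; };

    /* Hypothetical helpers: an instruction emulator and an error path. */
    extern int  emulate_insn(uint16_t op, struct trap_frame *f);
    extern void fatal_error(struct trap_frame *f);

    /* Decode the faulting opcode, emulate it, resume past it. */
    void undef_handler(struct trap_frame *f)
    {
        uint16_t op = *(const uint16_t *)(uintptr_t)f->pc;
        if (emulate_insn(op, f))
            f->pc += 2;          /* 16-bit Thumb encoding assumed */
        else
            fatal_error(f);      /* genuinely unsupported */
    }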

Given that M3 outperforms the good old ARM7tdmi by such a large margin on all aspects and Cortex has Thumb-2 written all over it, what do you think may quietly get "de-emphasised"? :-)

Wilco

Reply to
Wilco Dijkstra

Yes, the ARM info is sparse and poorly detailed, but what they have published shows Thumb-2 to have LOWER performance than ARM, but better code density. Thumb-2 _does_ decrease the step effect between ARM/Thumb, and adds smarter embedded opcodes. They state it is a mix of 16-bit and 32-bit opcodes.

Yes, but the biggest effect is to shift the normal hit that a 32-bit opcode fetch encounters. It is an opcode-bandwidth issue - a matter of matching opcode bandwidth to memory bandwidth.

These verbal gymnastics aptly demonstrate my point that calling the M3 something clearly different would have helped. When you have to underline the difference between 'within' and 'between', then perhaps a clearer naming scheme would have been smarter.

Binary compatible means what it does on the 80C51: NO opcode choking. Very simple. SFR and peripheral compatibility are easier to manage.

'Close enough' for whom? ARM users will make that call, not ARM marketing.

And this seems to be the crux of the problem. ARM seem to think they can replace the 8051/8-bit sector with this new variant. Instead, they have lost focus on what attracts users to ARM (see Ulf's comments). Atmel, Philips et al _already_ have sub-$3 offerings, so there is substantial overlap into the 8/16-bit arena now. And this with an ARM/Thumb offering.

Mostly, the uC selection decisions I see made hinge on peripherals & FLASH/RAM, NOT the core itself. As Ulf says, they choose ARMs _because_ they are binary [opcode] compatible.

Philips seem to have a HW solution that simply and effectively reduces the ARM/Thumb step effect. Thus any "new core" benchmarks that exclude this solution lack credibility.

It is better to talk about the better embedded opcodes/features in Cortex. - and the A and R variants _include_ ARM opcodes.

After all, code size is steadily getting both larger and cheaper, with FLASH ARMs now well clear of 8/16 bit models in FLASH resource.

Key words here are 'expect' and 'could'. We are talking about existing, proven tools in use right now, not horizonware.

That's easy: the lack of binary [opcode] compatibility.

Will Ulf be pushing Atmel to release a -M3 microcontroller? I doubt it!

I simply don't see the 'such a large margin on all aspects' in ARM's published information at all.

These graphs show Thumb-2 as being LARGER than Thumb, and SLOWER than ARM?! [but also smaller than ARM, and faster than Thumb] Their example claim of a (mere) 9% system size saving also avoids any comment on speed. Hmmmm...?

To me, Thumb-2 is a sensible middle ground between ARM and Thumb (it fixes some of the older core's shortcomings), but the removal of ARM binary compatibility on the M3, and the apparent pitch into a space users are leaving void, is poorly researched.

Time will show who is right :)

-jg

Reply to
Jim Granville

Those are ARM1156T2-S benchmarks - not M3. The first Thumb-2 compiler indeed generates code that is almost 1% larger than Thumb (still 34% smaller than ARM!). The difference in performance is less than 3% on the ARM1156 (the first Thumb-2 CPU). So it is pretty close to the marketing statement "ARM performance at Thumb codesize". The next compiler release will without a doubt improve upon this and close the gap, if not eliminate it.

I'd say Cortex-M and Cortex-R are clearly different names. They are Cortex because they all support the same base instruction set (Thumb-2).

Yes, these are nice parts, but they don't compete with many of the cheap MCUs like the 8051. The M3 can compete much better and maybe get down to the $1 price range.

Yes that is true.

Anyone moving to ARM from the 8051 or similar simply won't care whether the M3 supports the ARM instruction set or not, as long as it doesn't make porting harder. The resulting code is of course binary compatible with any other Cortex CPU, as I explained.

There are no benchmarks on narrow flash for the M3 AFAIK. When running Thumb-2 on a wide flash it will run faster than ARM because of its smaller codesize. If the performance penalty of running from flash is 15% for ARM, it would be 10% for Thumb-2. If we use the current figure of Thumb-2 being 3% slower than ARM using perfect memory, it would be 2% faster on flash.
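
One way to reproduce that arithmetic, treating the penalties as uniform slowdowns:

    ARM from flash:      1.00 / 1.15 ~= 0.87
    Thumb-2 from flash:  0.97 / 1.10 ~= 0.88
    ratio:               0.88 / 0.87 ~= 1.01-1.02, roughly the 2% claimed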

...

ARM's tools have had this feature for over 5 years now (since ADS): any potential incompatibilities are immediately fed back to the user. It is not the incompatibilities themselves that cause the trouble; it is the wasted hours due to trivial mistakes that aren't spotted by tools that are the real issue. Loading a big-endian image on a CPU configured for little-endian is something I've done many times, but it never took me more than a second to correct the mistake, as the debugger simply refused to run the image...

Or rather your perceived lack thereof. I don't understand how the lack of ARM instruction set support can be crucial while differences in peripherals are somehow excluded from binary compatibility issues... In the real world both stop you from running the same binary on different cores.

You're looking at the wrong information. On an ARM7tdmi with perfect memory, Thumb gives about 0.74 MIPS/MHz, ARM does 0.9. The M3 gives 1.2 - about as fast as ARM code running on an ARM9. That's about a 60% performance improvement over the 7tdmi using Thumb (at Thumb codesize) or 30% when using ARM (with a 35% codesize gain).
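
Checking those ratios against the claimed gains:

    1.2 / 0.74 ~= 1.62  -> ~60% over Thumb on the 7tdmi
    1.2 / 0.90 ~= 1.33  -> ~30% over ARM on the 7tdmi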

Then there is the power consumption and die size which are less than half that of the ARM7tdmi, the much better interrupt latency and multiply/division performance, unaligned access, simplified OS model etc.

You mean the gatecount here? The saving over the ARM7tdmi with the same set of peripherals is about 37K gates (70K - 33K). Assuming a gate is equivalent to 16 bits of flash (probably too conservative), that is an extra 74KBytes of flash for free. You'd need 820KBytes of flash before this becomes a mere 9% saving, and that is definitely not a low-end MCU. You could build an M3 with 1K SRAM and 16KBytes of flash and _still_ be smaller than a bare ARM7tdmi!
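
The arithmetic, using the stated 16-bits-per-gate assumption:

    37,000 gates * 16 bits = 592,000 bits ~= 74 KBytes
    74 KB / 820 KB ~= 9%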

Thumb-2 is not "middle" ground - it combines the best features of ARM with the best features of Thumb, effectively superseding both. Why do you think Cortex is based around Thumb-2?

Sure - I bet there are many people working hard to try to prove you wrong :-)

Wilco

Reply to
Wilco Dijkstra

?! - but the M3 is Thumb-2, and you have just confirmed "not quite ARM performance yet..."

Not the users I talk with. Binary compatible is near the top of their lists, _especially_ 80C51 users.

With Cortex-M, as Ulf says, they may as well also look at the raft of other 'new core' alternatives, like CyanTech, MAXQ, & the many new Flash DSPs..... Gamble: choose which ones will not hit critical mass, and survive only one generation.

Well, we'll agree to differ on our definition of Binary compatible.

Could one write code that ran fine on a Cortex-R, but choked a Cortex-M?

I call that NOT binary [opcode] compatible.

Other users are free to apply their own definitions.

To help you with that distinction, I stated binary [opcode] compatibility.

80C51 designers are fully versed in peripheral porting, but they also expect [even demand?] to have one stable/proven/mature tool chain.

I was looking at ARM's own web data on Thumb-2.

If that is wrong, then we'll wait for it to be corrected.

Your own numbers above agree that Cortex is struggling to match ARM performance on speed - [real soon now... just need another compiler pass...]

No - code size. They somehow 'missed' mention of the speed numbers?

The more important comparison is the -M vs -R and -A gate counts; then you compare the same design generation.

Better still, give us the incremental cost of adding size-optimised, ARM-compatible execution to the M3 [ie: a little slower is fine; NO-choke is the design brief]?

Summary: Thumb-2 has performance merits, but the -M variant risks 'falling between two stools' - instead of building on their strengths, they seem to be trying to be all things to all users. That's a pity, as the talent and resource could be better applied.

Probably time to end this thread, and wait 18 months for the users to vote.. :)

-jg

Reply to
Jim Granville

Not sure if you ever got an actual answer to your question. The Philips/NXP LPC3180 is based on the ARM926EJ-S, and has a vector floating point co-processor (not in the core, but at least it's on the same chip...)

"This CPU coprocessor provides full support for single-precision and double-precision add, subtract, multiply, divide, and multiply- accumulate operations at CPU clock speeds. It is compliant with the IEEE

754 standard, and enables advanced Motor control and DSP applications. The VFP has three separate pipelines for floating-point MAC operations, divide or square root operations, and load/store operations. These pipelines can operate in parallel and can complete execution out of order. All single-precision instructions, except divide and square root, take one cycle and double-precision multiply and multiply-accumulate instructions take two cycles. The VFP also provides format conversions between floating-point and integer word formats."
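
For a feel of what that buys, a trivial single-precision multiply-accumulate loop of the kind the VFP's MAC pipeline is built for (a sketch; with VFP code generation enabled a compiler can emit FMACS here):

    /* Dot product: one multiply-accumulate per element. */
    float dot(const float *a, const float *b, int n)
    {
        float acc = 0.0f;
        for (int i = 0; i < n; i++)
            acc += a[i] * b[i];
        return acc;
    }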

--Gene

Reply to
Gene S. Berkowitz

The answer is that ARMv7-M and ARMv7-R architecture processors have hardware divide. The Cortex-M3 implements ARMv7-M and the Cortex-R4 implements ARMv7-R.
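
So on those cores plain C division maps onto the divide instruction itself (a sketch, illustrative only):

    #include <stdint.h>

    /* On ARMv7-M/R this compiles to a single UDIV; on ARMv4T/v5 cores
       it becomes a call to a runtime routine such as __aeabi_uidiv. */
    uint32_t udiv32(uint32_t n, uint32_t d)
    {
        return n / d;
    }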

-p

--
"Unix is user friendly, it's just picky about who its friends are."
 - Anonymous
--------------------------------------------------------------------
Reply to
Paul Gotch
