Query about code density on ARM7 (Newbie)

Dear All

I am reading about the ARM7TDMI processor. Those who are familiar with the ARM7TDMI are aware of the "Thumb" instruction set. The native ARM instruction set is 32-bit, but the latest generation such as the ARM7TDMI has a mode bit which you can flip to execute a 16-bit subset of the instruction set. Both instruction sets operate on 32-bit data, but it's claimed that the Thumb instruction set has greater code density.

What does code density mean here? It is specifically mentioned that the Thumb instruction set has higher performance than a 16-bit processor and higher code density than a 32-bit processor.

What is the role of code density? Please explain briefly; I will be thankful if anyone can give pointers on this.

Thanks in advance. Regards, Ranjeet


Reply to
ranjeet.gupta

Code density is a measure of how much code can be squeezed into a given block of memory.

The Thumb instruction set is a subset of full ARM instructions, operating on the same registers as the ARM instruction set. Due to the shorter instruction length (16 bits instead of 32 bits), only the first 8 registers (r0 to r7) are available as general registers. The other 8 registers (r8 to r15) can be accessed in a limited way in the Thumb mode. Also, conditional instruction execution is limited compared to full ARM mode.
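
For example, the high registers can only be reached by a handful of Thumb instructions (MOV, ADD, CMP and BX), along these lines:

  MOV r8, r0   ; copy a low register into a high register
  ADD r0, r8   ; use a high register as a source operand
  CMP r0, r8   ; compare against a high register

Ordinary data-processing instructions are restricted to r0-r7.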

Exception handling (e.g. interrupts) and control register access are not possible in the base Thumb instruction set: the processor has to switch into 32-bit ARM mode for the control register accesses.

The Thumb mode bit is changed with a special branch instruction (BX register), where the target address includes a mode bit for the branch destination.
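
A minimal sketch of the ARM-to-Thumb switch (the label name is only illustrative):

  LDR r0, =thumb_routine   ; address of the Thumb code
  ORR r0, r0, #1           ; bit 0 set selects Thumb state at the target
  BX  r0                   ; branch and exchange instruction set

A BX to an address with bit 0 clear switches back to ARM state.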

In practice, a Thumb program is about 60% of the code length of the equivalent ARM program. If there is a full-width bus to the code memory and the memory is fast enough to keep up with the processor, Thumb mode execution is about 20% slower than ARM code, due to the additional instructions needed to compensate for the more limited instruction set.

For more information, get the ARM documentation.

The reference is the ARM Architecture Reference Manual (called the ARM ARM).

HTH

--

Tauno Voipio
tauno voipio (at) iki fi
Reply to
Tauno Voipio

It has been said that the code density of the Thumb instruction set is 60% higher than that of the 32-bit ARM instruction set. So when memory is constrained, does it sound logical to go for the Thumb instruction set, since it is 60% more dense than the 32-bit ARM instruction set?

But taking it the other way round, that the code density of the Thumb instruction set is 60% lower than that of the 32-bit ARM instruction set, then everything makes sense with what is said in the specs and with what you said.

It is not specifically mentioned in the specs which way round it is; it just says that the code density of Thumb is 60% of that of 32-bit ARM.

Sorry if I have misunderstood something and asked a dumb question; please do guide me.

Yes, I agree with what you said above; I am going through the introduction chapter of the ARM7TDMI (rev 3) manual. I have read this chapter, and both you and the chapter state that the Thumb instruction set is a subset of the full ARM instruction set. So why is the number of instructions needed for Thumb greater than for 32-bit ARM?

What I find in the ARM7TDMI (rev 3) manual is that in the Thumb instruction set we have an instruction named SWI 8bit_Imm (software interrupt). You said above that this is not possible in the base Thumb instruction set. Is there a difference between the base Thumb instruction set and the Thumb instruction set? Is there something I am missing?

Thanks, Tauno Voipio, for your comments and explanation. Regards, Ranjeet

Reply to
ranjeet.gupta

Correct - you need less memory with Thumb, which is a key reason why Thumb is so popular.

No, Thumb is more dense than ARM. Thumb instructions are half the size of ARM instructions, so twice as dense. You need more instructions than ARM though, so you lose some of the gain. But overall, Thumb code size is around 65% of that of ARM code, which is a 54% code density gain.

No, Thumb code *size* is 65% of that of ARM; code *density* is 54% better than ARM (1/0.65 ≈ 1.54).

Code size and code density are not the same. Code density is the inverse of code size, i.e. lower code size means higher code density and vice versa. If X is half the size of Y, this means 50% less code; in other words X is twice as dense as Y, or 100% better code density. Conversely, Y uses twice as much code as X (100% worse code size), or has 50% lower code density than X. So it matters a lot whether you take X or Y as the baseline. It's a little confusing at first, but hopefully it makes sense after a while :-)

I take it you mean "why does Thumb need more instructions than ARM?".

Thumb being a subset of ARM means that a Thumb instruction is simply not as powerful as an ARM one. One can sometimes encode 4 different operations into a single 32-bit ARM instruction, but you can encode only one operation in each 16-bit Thumb instruction. Thumb doesn't support ARM features like conditional execution, combined shift-and-operate instructions, use of all 16 registers, 3-operand instructions, 8-bit rotated literals, etc. Therefore you typically need more instructions in Thumb. An example:

int f(int x, int y) { return x + (y >> 2); }

In ARM the shift folds into the add, so this is a single instruction, ADD r0, r0, r1, ASR #2; in Thumb it takes two, ASR r1, r1, #2 followed by ADD r0, r0, r1.

> Exception handling (e.g. interrupts) and control is not possible

For the ARM7TDMI they are the same. Yes, SWI is in Thumb (note it is nowadays called "Supervisor Call" as it is not an interrupt).

There is no single Thumb instruction set - Thumb has been evolving over many years, and today 5 different versions exist. The most interesting variant is Thumb-2, which adds all ARM features to the Thumb instruction set, thereby avoiding the need for extra instructions. This makes Thumb-2 code as fast as ARM while it is still as dense as Thumb.

Wilco

Reply to
Wilco Dijkstra

:-)

Reply to
Ishvar

snipped-for-privacy@gmail.com wrote:

A question that just occurred to me. Given the above-mentioned code size and speed reduction, I would also imagine that, for code being executed from external memory, the lower number of memory access cycles needed would make Thumb code execute faster. Has anyone tried it and got some results?

Ricardo

Reply to
Ricardo

Yes, Thumb code runs faster than ARM when you use slow memory, e.g. a 16-bit wide bus as on various ARM MCUs. ARM performance almost halves in this case, so Thumb code would run 30-50% faster than ARM. This is the other key reason why Thumb was created.

This effect also happens to a lesser extent with multiple wait-state memory, or when many cache misses occur in a cached system (after context switch). However this will become an issue only for pin-limited MCUs with few on-chip resources: MCUs are starting to use wide on-chip flash interfaces, allowing ARM code to be run directly from flash without a huge penalty. Other MCUs can run critical ARM code from on-chip SRAM for best performance.

Wilco

Reply to
Wilco Dijkstra

Isn't there at least one implementation where Thumb required an extra pipeline stage to translate the code into ARM instructions? Depending on how branchy the code is, that may actually slow things down.

S
--
Stephen Sprunk      "Those people who think they know everything
CCIE #3723         are a great annoyance to those of us who do."
K5SSS                                             --Isaac Asimov
Reply to
Stephen Sprunk

The example is an illustration of the power of the ARM instruction set when the problem happens to fit the solution. However, it's easy to see how the Thumb instruction set could be produced, given that four bits of the ARM instruction are (mostly) for conditional execution, a feature that's rarely used. Another twelve bits are generally used to designate two source registers and one destination register; most of the time, only two of these are needed. Then there are varying numbers of bits (4 to 12) used for the "shifter" operand; shifting one of the operands isn't used that often. Therefore, the ARM 32-bit instructions can be reduced to 16-bit Thumb instructions without much loss of power.
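
For reference, the 32-bit data-processing encoding being described is laid out roughly as follows (field widths as given in the ARM ARM):

  cond(4) | 00 | I(1) | opcode(4) | S(1) | Rn(4) | Rd(4) | shifter operand(12)

and it is the condition field, one register field and most of the shifter operand that Thumb trades away.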

Now, if "they" would just add support for position-independent (PC-relative) programming (LEA, BSR, et al)...

Reply to
Everett M. Greene

...

It's just one example out of many - C idioms fit very well onto the ARM instruction set.

Conditional execution is actually used quite a lot in compiled code, and even more so in assembler. Compilers can use conditional execution in cases where you wouldn't expect it. Using 4 bits for it is a lot, however; looking at the frequencies, it appears that just 2 bits would have been more appropriate.
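
For example, a compiler can turn

  if (x < 0) x = -x;

into

  CMP   r0, #0
  RSBLT r0, r0, #0   ; conditionally negate, no branch needed

rather than a compare-and-branch sequence (the register allocation here is only illustrative).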

Shifts are used quite a lot actually, for example in constant multiplies, array indexing, address generation, bit manipulation and even constant generation. Again, one could probably save a few bits by only supporting the most frequently used shifts - indexed loads for example typically use a left shift of 0..3 bits (Thumb-2 does this).
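
For example, loading a[i] from an int array is a single ARM instruction,

  LDR r0, [r1, r2, LSL #2]   ; r1 = base of a, r2 = i, index scaled by 4

while original Thumb needs a separate shift first, e.g. LSL r3, r2, #2 and then LDR r0, [r1, r3] (register choices here are only illustrative).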

16 bits isn't much to encode a full instruction set in, so Thumb is a compromise. You can still do everything ARM can do, but it often takes 2 or more instructions. If you need more instructions, you need more registers to store intermediate results, but Thumb can only access 8 registers. Most instructions are 2- rather than 3-operand, so you may need to save the original value as well. It is quite a challenge for a compiler to produce good Thumb code.
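
A small illustration of the 2-operand point (register numbers are arbitrary): ARM can compute r0 = r1 AND r2 in one 3-operand instruction,

  AND r0, r1, r2

while the Thumb AND overwrites one of its sources, so keeping r1 intact needs a copy first:

  ADD r0, r1, #0   ; copy r1 into r0
  AND r0, r2       ; r0 = r0 AND r2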

This stuff has been supported since day one - everything on ARM is PC-relative (branches, calls, literal loads, etc.): the PC is one of the general-purpose registers.
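
For example (the labels are only illustrative):

  B    loop              ; branches are PC-relative offsets
  BL   func              ; so are calls, with the return address in LR
  ADR  r0, table         ; PC-relative address of a nearby label
  LDR  r1, =0x12345678   ; constant loaded PC-relative from a literal pool

so ordinary ARM code is position independent to begin with.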

Wilco

Reply to
Wilco Dijkstra

To my knowledge this has never been the case. In ARM7T implementations of Thumb, the Thumb instructions are translated into ARM instructions during the first half of the decode stage. The translated instructions are then decoded by the existing ARM decoder in the second half of the cycle. This does not introduce any extra stages or delays and is completely neutral from a pipeline throughput point of view.

ARM9T (and later) implementations incorporate two decoders, one for ARM and one for Thumb so there is no need for the translation operation. This allows the decode stage to be shorter in terms of time and contributes to the higher potential clock speed of these implementations.

Chris (posting as an individual)

Reply to
chris.shore

... in ARM's own designs. XScale is different.

Yes, all modern ARMs do it this way. XScale is the only exception as it uses an extra decode stage to deal with Thumb instructions. In some sense this is similar to what high-end ARMs do, as they split the ARM and Thumb decoders over 2 stages. You only pay for the extra pipeline stage on a branch mispredict, so it works fine if you have a decent branch predictor.

However the 667 MHz Samsung ARM10 proves you can take an old ARM design and push it hard (it has a simple 6-stage pipe with single cycle ARM/Thumb decode stage).

Wilco

Reply to
Wilco Dijkstra

I can't find that processor anywhere. Does it really exist or is it a propaganda announcement like their 1.2 GHz ARM9?

André

Reply to
Andre

Some years ago they announced "Halla", which was to be a 1.2 GHz ARM1020E, but I don't think it has made it to silicon yet. More recently they announced the S3C2440, which is a 533 MHz ARM920T; they have sampled it, but I don't know if it is in production at that speed.

John

--
John Penton, posting as an individual unless specifically indicated
otherwise.
Reply to
John Penton
