Is a microprocessor an integrated circuit???

The IEEE Standard Dictionary of Electrical and Electronics Terms defines microprocessor as:

"An integrated circuit that contains the logic elements for manipulating data and for making decisions"

Cheers Terry

Reply to
Terry Given

like the 8085 (SID/SOD)

As opposed to, say, the NS32000-series CISC processors, which have string compare instructions.

processing

Cheers Terry

Reply to
Terry Given

My old pal the IEEE Standard Dictionary of Electrical and Electronics Terms (6th ed) has 21 definitions for byte. Seven of them refer to 8 bits as the "common" size, but mention things like parity, ECC, etc. Definitions 16 & 20 talk about a C byte thusly:

"a byte is composed of a contiguous sequence of bits, the number of which is implementation defined"

POSIX.byte_size is a dead giveaway really :)
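Which you can check directly: CHAR_BIT in <limits.h> is exactly that implementation-defined byte width. A minimal sketch, assuming any hosted C compiler:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* CHAR_BIT is the implementation-defined number of bits in a byte;
           the standard only guarantees it is at least 8. */
        printf("bits per byte here: %d\n", CHAR_BIT);
        return 0;
    }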

Cheers Terry

Reply to
Terry Given

Do people of Germanic descent carry purses? I thought that was only Scotsmen (oops, it's not a purse, it's an external scrotum. Silly me).

Cheers Terry

Reply to
Terry Given

A closer analogy would be

Add to B, A.

Intel couldn't even get the bytes in the right order.
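For anyone who wants to see which camp their machine falls into, a quick sketch (assuming a 32-bit unsigned int):

    #include <stdio.h>

    int main(void)
    {
        unsigned int x = 0x11223344;
        unsigned char *p = (unsigned char *)&x;

        /* Little-endian (Intel-style): the low-order byte 0x44 sits at the
           lowest address. Big-endian: 0x11 does. */
        printf("first byte in memory: 0x%02x\n", p[0]);
        return 0;
    }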

John

Reply to
John Larkin

My 1976 "Encyclopedia of Computer Science" has this definition (p. 817): "'Byte' is the usual term where the machine's addressable storage segment is designed to hold one alphanumeric character, and is hence 6, 7, or 8 bits long"

and on p.1356 has: "the term 'byte' is used in reference to a bit string which is of the size corresponding to the symbol representation in a particular system. Thus, there are computers with 6-bit bytes, but today one expects 8-bit bytes"

QED

Cheers Terry

Reply to
Terry Given

ROTFLMAO!

Cheers Terry

Reply to
Terry Given

Clearly not.

Cheers Terry

Reply to
Terry Given

Why, philosophically? As long as the memory address is not in code space, wouldn't it just be an issue of cache design? A write-through cache would update the cached operand during each iteration. So where's the advantage of fetching the operands into a register with one instruction, iterating an algorithm on the registers with another, and then storing the result with a third, over simply letting the cache take care of it with one instruction?
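Just to pin down the two styles being compared (the names below are made up for illustration), here is the same accumulation written with the hot operand left in memory every iteration versus hoisted into a register-resident local:

    /* Sketch only: 'total' and 'samples' are hypothetical. */

    void sum_in_memory(int *total, const int *samples, int n)
    {
        for (int i = 0; i < n; i++)
            *total += samples[i];   /* read-modify-write the memory operand
                                       (or its cache line) every pass      */
    }

    void sum_in_register(int *total, const int *samples, int n)
    {
        int acc = *total;           /* one load up front           */
        for (int i = 0; i < n; i++)
            acc += samples[i];      /* iterate purely on registers */
        *total = acc;               /* one store at the end        */
    }

A compiler can only turn the first form into the second on its own when it can prove 'total' doesn't alias 'samples'; the load/store argument is essentially doing that hoisting by construction.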

Disclaimer: I'm on my second Bloody Mary(;-)

--
Thaas
Reply to
Thaas

The notation came _long_ before Intel! "A = B + C" has been around quite some time.

--
  Keith
Reply to
keith

Not!

There is *nothing* about a memory access that is free. ...even if the datum is in the DCache. The request *still* has to be pushed from the execution unit to the load/store unit to be processed. The whole idea of RISC is to get the memory system (including the LSU and caches) out of the critical path.

Nonsense (speaking of parochial...). Registers are renamed and take *zero* cycles to update. Even the D-Cache takes tenish cycles to write after the execution unit decides it's done. Two trips to the LSU complicate matters immensely.

....A couple of crappy Bud Lites, but that's Friday nights in the middle of nowhere.

--
  Keith
Reply to
keith

Ok...

The notation was still around long before Intel. ...even makes sense. ;-)

--
  Keith
Reply to
keith

Don't be parochial. After the first read, memory accesses to the same location are free. Even without a cache, memory writes can use as few cycles as a register write. Depends on the architecture of course, but consider that the address translation was probably cached on the initial fetch.

Oh yeah, I've had a couple of 24oz Bass Ales since my last post.

--
Thaas
Reply to
Thaas

Probably a poor choice of words. I apologize. However, Add Register to Memory may be a read-modify-write, but Add Memory to Register is not.

Parochialism!(:-)

Reads do inherently take two cycles because you design the memory/cache interface to accommodate the fastest writes, and the reads need to test for data availability to allow for a data-cache miss or an address-translation cache miss. So yeah, in all sobriety you're right.

Bummer!

--
Thaas
Reply to
Thaas

In article , John Larkin wrote: [...]

They got it right. The bits are weighted 2^LOCATION. Some tape drive makers got it wrong, making D7 the LSB.

When writing an integer math library, the LS-to-MS ordering saves a couple of instructions in the add routine, saves more in the square root, and costs nothing in the divide routine.
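A hedged sketch of why LS-first ordering helps the add (the limb width and names here are assumptions, not from any particular library): the carry and the index both walk upward together.

    #include <stdint.h>

    /* Multiword add with the least significant 32-bit limb stored first. */
    void multiword_add(uint32_t *sum, const uint32_t *a,
                       const uint32_t *b, int n)
    {
        uint32_t carry = 0;
        for (int i = 0; i < n; i++) {        /* limb 0 = least significant */
            uint64_t t = (uint64_t)a[i] + b[i] + carry;
            sum[i] = (uint32_t)t;
            carry  = (uint32_t)(t >> 32);
        }
    }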

When doing floating point, you want the exponent first and then more than half the time, you want the LSB of the mantissa. This leads to wanting a very non-IEEE format for the floaters.

--
kensmith@rahul.net   forging knowledge
Reply to
Ken Smith

In article , Thaas wrote: [...]

You are assuming that you don't just have a big fast static RAM with no cache as such. In some DSP systems, the processor doesn't ever directly address slower forms of memory.

In one system I've had experience with, there are 2 banks of static RAM and a DMA circuit. The DSP FFTs or whatever in one bank while a DMA thing shuffles out the old results and brings in the new data on the other.
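Roughly the classic ping-pong arrangement; a sketch only, where dma_start(), dma_wait() and process_block() are hypothetical stand-ins for whatever the real DSP/DMA hooks are:

    #define BLOCK 1024

    extern void dma_start(int *buf);      /* hypothetical: kick off transfer */
    extern void dma_wait(void);           /* hypothetical: block until done  */
    extern void process_block(int *buf);  /* hypothetical: FFT or whatever   */

    static int bank[2][BLOCK];

    void run(void)
    {
        int work = 0;                     /* bank the DSP is crunching       */
        dma_start(bank[1 - work]);        /* DMA works the other bank        */

        for (;;) {
            process_block(bank[work]);    /* compute in fast SRAM            */
            dma_wait();                   /* other bank now has fresh data   */
            work = 1 - work;              /* swap roles                      */
            dma_start(bank[1 - work]);    /* shuffle out results, bring in
                                             the next block                  */
        }
    }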

I had a nice California Cab last night with my dinner. This morning has started fueled only by coffee. When the hardware stores open I'm off to buy gardening supplies.

--
kensmith@rahul.net   forging knowledge
Reply to
Ken Smith

Yogesh was from India.

Chances are his question was genuine.

I doubt he expected to cause such a fuss.

Graham :-)

Reply to
Pooh Bear
