2066-pin processors

Google "Rent's Rule".

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

Sort of interesting.

Suzana Herculano-Houzel has pointed out that primate brains scale differently to other mammalian brains.

formatting link

Essentially, primate neurones don't get bigger as the primate brain gets bigger, while they do in other mammalian species. We seem to have a more economical interconnection scheme.

--
Bill Sloman, Sydney
Reply to
bill.sloman

These days when somebody says they made their own computer, they mean they selected and purchased the dozen or so parts and hand assembled the beast.

--
 Thanks, 
    - Win
Reply to
Winfield Hill

Well, we started with 8-bit computing and graduated to 16-bit computing. Now we are on the tail end of 32-bit computing and moving into 64-bit computing.

Well, what the heck is wrong with 1024 or 2048 bit computing, since there are so many pins everyone is crowing about?

Reply to
Robert Baer

That is a nice architecture for doing various realtime applications, in which a result must be produced within a deadline. If it appears that the result will not be available in time, or the required number of signals is not produced in a given time, divide the job into multiple sequential steps and add more cores to do the steps. Typical applications are DSP and PLC. You could assign a core for each individual binary or analog (DAC) output.
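
A minimal sketch of that pattern, assuming POSIX threads as a stand-in for dedicated cores: each stage runs on its own thread/core and hands its result to the next stage through a single-slot mailbox. The stage names and the "work" are made-up placeholders, not a real DSP chain.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static int mailbox;            /* single-slot queue between the two stages */
static sem_t full, empty;      /* full = data ready, empty = slot free     */

static void *stage1(void *arg) /* e.g. acquisition + pre-filtering         */
{
    for (int sample = 0; sample < 8; sample++) {
        sem_wait(&empty);
        mailbox = sample * sample;   /* stand-in for the real computation  */
        sem_post(&full);
    }
    return NULL;
}

static void *stage2(void *arg) /* e.g. post-processing + output to a DAC   */
{
    for (int i = 0; i < 8; i++) {
        sem_wait(&full);
        printf("result %d\n", mailbox);
        sem_post(&empty);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&full, 0, 0);
    sem_init(&empty, 0, 1);
    pthread_create(&t1, NULL, stage1, NULL);
    pthread_create(&t2, NULL, stage2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

(Compile with -pthread. On the hardware described above each stage would be pinned to its own core, and a missed deadline is the cue to split a stage further.)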

Why reinvent the wheel? Modifying some multitasking OS should be enough.

Are you suggesting that the OS should split up batch-type applications (read input data, crunch for a while, then spit out a single set of results) and spread them evenly across all cores? The OS can't do that. If parallelism is needed, it must be designed in from the beginning, during the design phase, before coding.

Actually, in the quoted section I was thinking of a more conventional design with DRAM and a (single) CPU on the same chip. Take for instance a 4 Gbit DRAM, which is usually arranged as 64 K rows by 64 K columns.

When the 16-bit row address selects a row, all 64 K columns are destructively read out into 64 K sense/write-back amplifiers. In a normal DRAM, the 16-bit column address selects one (or a few) bits from the 64 K sense amplifiers, and all 64 K column bits are written back into the array.

However, if the CPU is on the same chip, skip the column decoder and store all 64 K column bits in a 64 Kbit (8 Kbyte) latch. The CPU can then access all those 8 Kbytes directly as needed, doing multiple reads or writes before writing back to the array, saving time. 8 Kbytes would be a nice virtual memory page size as well as a nice cache line size. To avoid cache thrashing, a few column latches would be needed, but it would still simplify the cache hierarchy (instead of L1, L2 and L3).

By making some of these 64 K column latches into shift registers, loading data to/from the external world at 10..100 Gbit/s would be easy (just like old VRAMs).
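
A toy model of that access pattern, just to make it concrete - sizes are shrunk, the activate/precharge names are borrowed from DRAM terminology, and nothing here reflects real timing or the destructive-read electronics:

#include <stdint.h>
#include <string.h>

#define ROWS       16u      /* toy-sized; a real 4 Gbit part has 64 K rows   */
#define ROW_BYTES  8192u    /* 64 Kbit = 8 Kbyte per row, as described above */

static uint8_t cell_array[ROWS][ROW_BYTES]; /* the DRAM cell array           */
static uint8_t row_latch[ROW_BYTES];        /* the on-chip 8 Kbyte latch     */
static unsigned open_row = ROWS;            /* ROWS means "no row open"      */

static void activate(unsigned row)          /* destructive read of a row     */
{
    memcpy(row_latch, cell_array[row], ROW_BYTES);
    open_row = row;
}

static void precharge(void)                 /* write the whole row back      */
{
    if (open_row < ROWS)
        memcpy(cell_array[open_row], row_latch, ROW_BYTES);
    open_row = ROWS;
}

/* The CPU touches the open row directly, with no column-decode step. */
static uint8_t get(unsigned col)            { return row_latch[col]; }
static void    put(unsigned col, uint8_t v) { row_latch[col] = v; }

int main(void)
{
    activate(3);                /* one row select pulls in 8 Kbytes          */
    put(100, get(100) + 1);     /* any number of accesses, no re-decode      */
    precharge();                /* one write-back for the whole row          */
    return 0;
}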

Reply to
upsidedown

64-bit is mainstream for everything that is not very size, space, power or cost constrained. We have long since moved into the era of 64-bit computing.

There are diminishing returns for wider data sizes. 32 bits is enough for most integer calculations - you rarely have to count beyond 2 billion. But /sometimes/ you do, such as for tracking the US national debt in microdollars, or counting the people in the world, or working with more than 4 GB of memory or the sizes of modern disks, or counting seconds since 01.01.1970. Thus 64-bit computing is handy, but the normal data size on 64-bit systems is still kept at 32 bits.

64-bit numbers will let us count the population of mankind until it is over a million times what it is now - that seems fine for the immediate future. They will track seconds for long past the lifetime of our planet, or nanosecond timestamps for the next 300 years. We do not expect to have a million TB of RAM in computers in the foreseeable future. 128-bit numbers do have their uses - 64 bits is not enough to count the total storage at Google.
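
A quick sanity check of those ranges (assuming roughly 3.15e7 seconds per year):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const double secs_per_year = 3.15e7;
    printf("64-bit seconds     ~ %.1e years\n",
           (double)UINT64_MAX / secs_per_year);          /* ~5.9e11 */
    printf("63-bit nanoseconds ~ %.0f years\n",
           (double)INT64_MAX / (secs_per_year * 1e9));   /* ~293    */
    return 0;
}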

There are always applications for calculations on wider numbers - cryptography is a prime example. But you don't need general-use data sizes that big - it does not make sense to make computers that wide for their general-purpose data. (Vector and SIMD units can be very wide - but that is for multiple data items in parallel, not single huge numbers.)
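
As a concrete example of how the "wider numbers when you need them" case is handled today: here is a minimal 128-bit add built from two 64-bit limbs and a carry - the bignum libraries used in cryptography extend the same idea to thousands of bits. The u128 type and function name are just illustrative.

#include <stdint.h>

typedef struct { uint64_t lo, hi; } u128;

static u128 add128(u128 a, u128 b)
{
    u128 r;
    r.lo = a.lo + b.lo;
    r.hi = a.hi + b.hi + (r.lo < a.lo);  /* carry out of the low limb */
    return r;
}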

The RISC-V architecture supports 128-bit versions, but I don't know if anyone has actually made one in practice.

Memory buses can be wider than the data width of CPUs. But there are many limiting factors if the buses have significant length - often it makes more sense to have multiple narrow buses at higher speeds than a few wide buses that need lower speeds due to timing differences between the lines. You usually only see very wide buses within chips, rather than between them.

Reply to
David Brown

Does that count? <formatting link>

Cheers, Gerhard

Reply to
Gerhard Hoffmann

David Brown wrote

LOL. Now I see it: Trump will make larger integer sizes in banking illegal, and that way the US debt will overflow and become zero again.

Reply to
<698839253X6D445TD

Jan has an unrealistic idea of how much Trump knows.

--
Bill Sloman, Sydney
Reply to
bill.sloman

IIUC a 64-bit CPU core can compute on two 32-bit numbers at once. Some of the time, anyway.

I reckon the way forward is mixed cores, so that simple calculations/tasks can be carried out by simpler cores that can do the job while switching fewer transistors and using less silicon real estate for the job.

NT

Reply to
tabbypurr

That depends on the core, and the operation - many CPUs can execute some kinds of SIMD instructions, regardless of the width of the CPU.

There is no clear definition of what it means for a CPU to be "n-bit". Generally, it is taken to mean the width of the general-purpose registers and basic arithmetic/logic instructions. So SIMD instructions don't affect the CPU's "bit width". But it can get a bit fuzzy if the CPU can tie registers together in pairs for some instructions.
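
To make the "two 32-bit numbers in one 64-bit register" idea concrete, here is a SWAR-style sketch in plain C (not any particular CPU's SIMD instruction): two independent 32-bit lanes added inside one 64-bit word, with the lane boundary patched so a carry cannot ripple across it.

#include <stdint.h>

static uint64_t add_2x32(uint64_t a, uint64_t b)
{
    /* Clear bit 31 of each lane, add, then XOR the top bits back in, so a
     * carry out of the low lane never reaches the high lane. */
    uint64_t mask = 0x7FFFFFFF7FFFFFFFull;
    uint64_t sum  = (a & mask) + (b & mask);
    return sum ^ ((a ^ b) & ~mask);
}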

That is sometimes done. AMD has made processors where there are lots of cores but fewer floating-point units - these are shared between cores. Then there are ARM big.LITTLE devices. Such architectures have their pros and cons - there are always trade-offs.

Reply to
David Brown

Very Long Instruction Word (VLIW) processors blur it even more.

Yeah, Bulldozer and Piledriver. That was a debacle. I have three dual 8- or 12-core Magny-Cours boxes that I'm not getting rid of any time soon. Lots of stooch, VT-X, IOMMU, no ME equivalent, legacy BIOS. Good medicine, if a bit power-hungry.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

Phil Hobbs wrote

I designed a vector processor card for the early IBM PC, when was it, 1987? It was for a large project, a video-like display.

For cryptology I have used the 32-bit 486 registers to do 32 one-bit decoding operations at the same time. That was quite common in the early 2000s.

So register width... for what it is worth.
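
For anyone who hasn't seen the trick: that is "bit-slicing". Each bit position of a 32-bit register carries one bit from a different, independent data stream, so a single logic instruction performs 32 one-bit operations in parallel. A minimal illustration - the round function is made up, not a real cipher:

#include <stdint.h>

/* One gate-level step applied to 32 independent streams at once: bit i of
 * a, b and c belongs to stream i, so this is 32 AND-XOR "gates" evaluated
 * with two instructions. */
static uint32_t round_step(uint32_t a, uint32_t b, uint32_t c)
{
    return (a & b) ^ c;
}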

Reply to
<698839253X6D445TD

I had an 8514/A 1024 x 768 display on a PS/2 Model 80 that year. It was magic. The XGA that replaced it was much faster.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

You sound like NPR. If you turn on the local radio station, the mean time before hearing the word "trump" is literally about 10 seconds.

--

John Larkin         Highland Technology, Inc 

lunatic fringe electronics
Reply to
John Larkin

DRAM and logic processes aren't generally compatible, so you'd have to use static memory, which takes four times the number of transistors.

Die size is generally limited to about 22 mm square for thermal reasons, and has been for decades. Even with really advanced base-level metal (BLM) and super-hard epoxy underfill for stress relief, you start tearing the solder balls off the corners of the chip. Thus there's a tradeoff between cache size and logic performance. Designers have been working all those tradeoffs since forever--there's a huge amount of performance simulation that goes into processor design. An old colleague of mine, Mike Ignatowski, used to be the performance sim guru at IBM Poughkeepsie.

Chip stacking is old technology--the chips get stacked up in echelon and connected with a rat's nest of bond wires. The problems are cooling and (especially) cost. Real 3-D is much more complicated, and tall stacks tend to suffer from the elevator shaft problem--eventually vias take up the whole die area.

There's also the cooling issue--with stacked processor and memory, the processor has to go on top of the stack (close to the heat spreader) because it does most of the power dissipation. That means that all of its I/O has to go via through-silicon vias (TSVs) drilled through all of the memory chips. Besides the elevator shaft problem, it's hard to maintain a controlled impedance in TSVs, so there are potential signal integrity issues if the stack gets too thick.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

Stop multitasking and assign a CPU to every process and every I/O operation, and let one master CPU manage it all. Have absolute hardware protection for memory and peripheral access.

No more viruses. No more crashes.

Hardware is cheap, and hardly anyone needs more actual compute power, for twitter or facebook or s.e.d.


Parallelism is good for crunching big math problems, but few people do that. The closest I get is running Spice.

We need simplicity, reliability, and security. We won't get that by applying more compute power to Windows or Linux.

We're still in the dark ages of computing. We should let engineers design computer systems, not programmers.

--

John Larkin         Highland Technology, Inc 

lunatic fringe electronics
Reply to
John Larkin

We just buy a couple dozen identical Dells, OS installed, every 5 years or so.

--

John Larkin         Highland Technology, Inc 

lunatic fringe electronics
Reply to
John Larkin

John Larkin wrote


Out of chocolate?

I had some nice herrings today. Coffee with cream. White chocolate.

mmm

Reply to
Jan Panteltje

It has been done. It can work on some kinds of specialised systems. (Read about XMOS devices.)

No more flexibility. No more compatibility. No more efficiency.

I usually have between 300 and 500 processes running on my desktops. Should I have a processor dedicated to each of them?

I understand what you are saying, but there is a happy medium to be found here.

And yet you are arguing for massively parallel systems with hundreds of cores?

Engineers /do/ design computer systems.

Reply to
David Brown
