2066-pin processors

David Brown wrote in news:pvifha$13k$ snipped-for-privacy@dont-email.me:

After all these years (well over 20), you have always been the smartest computer science guy in the group IMO.

Check this one out...

Sorry about the string line breaks.

tinyurl:

Reply to
DLUNU

There are 1E78 to 1E82 atoms in the observable universe, so a 256-bit register is a bit narrow, but 512 bits is enough for calculating all the quarks.

The need for 4096 bits would assume a huge number of multiverses, each with N dimensions.
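The arithmetic is easy to check. A quick sketch (the 1E82 figure is the post's upper estimate, not a measured constant):

```python
import math

ATOMS = 1e82   # upper estimate of atoms in the observable universe (from the post)

# Bits needed for a register wide enough to count every atom:
print(math.ceil(math.log2(ATOMS)))   # 273 -- so 256 bits is indeed "a bit narrow"

print(2**256 < ATOMS)   # True: a 256-bit counter overflows
print(2**512 > ATOMS)   # True: 512 bits has room to spare
```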

Reply to
upsidedown

Sure. A compiler creates bit patterns and pokes them into a disk file. A loader puts that into a region of memory that has execution privileges. And the OS kicks it off.

Nothing unusual there. In a sensible system, hardware enforces the properties of memory regions, usually pages. Code pages should be execute-only. Data pages should not be executable. Windows is not a sensible system.
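The page-permission idea can be demonstrated directly. A minimal POSIX sketch (Linux assumed; the `prot` keyword of Python's `mmap` is Unix-only):

```python
import mmap

# Anonymous page that is readable and writable but NOT executable --
# the "data pages should not be executable" rule, enforced by the MMU.
buf = mmap.mmap(-1, mmap.PAGESIZE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE)

buf[0:4] = b"\x90\x90\x90\x90"   # bytes that happen to be x86 NOPs:
print(buf[0:4])                  # we can store "code" here as data,
buf.close()                      # but jumping into this page would fault
```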

It's unfortunate that DRAM works in bursts and doesn't random-access very well. So we need cache.

--

John Larkin         Highland Technology, Inc 

lunatic fringe electronics
Reply to
John Larkin

The 386 has some protection bits in the 32 bit segment registers. As long as the code segment doesn't overlap with any other segment, this would have protected the code quite well.

Apparently this would have broken too much (bad) legacy code, so this could not be used in early Windows versions.

With the recent hardware support for the "NX" protection bit in the page table entries, the same level of protection is now available that could have been available 30 years earlier.

Reply to
upsidedown

You need to understand how old processors (8080) and modern (i.e. last quarter *century*) languages (Smalltalk, Java, C#) work.

The 8080 did not have instructions to output to a port known only at runtime. The workaround was for the code to determine the port while running, then construct the relevant instruction on the stack, and execute that instruction.
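The trick can be sketched in modern terms. The 8080's OUT opcode is 0xD3 followed by the port byte, and RET is 0xC9, so the fabricated routine is just three bytes (Python used here purely to show the byte layout; the helper name is ours):

```python
def make_out_routine(port: int) -> bytes:
    """Fabricate the 8080 sequence OUT <port> / RET at runtime.

    The original code would build these bytes on the stack and
    CALL them, since OUT's port number is fixed in the opcode."""
    if not 0 <= port <= 0xFF:
        raise ValueError("8080 ports are 8-bit")
    return bytes([0xD3, port, 0xC9])   # OUT n ; RET

print(make_out_routine(0x10).hex())    # d310c9
```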

Smalltalk and Java both convert from an intermediate representation (bytecode) to machine instructions while executing - that's the JIT compilation. Java goes one step further and examines what the code+data is typically doing, and optimises the shit out of that in ways the compiler simply cannot - that's Hotspot.

C# does something similar (but not as effectively) during program installation.

Reply to
Tom Gardner

You seem to think there is a clear binary difference between "code" and "data". There is not.

Let's take a few simple examples - is an SQL query code or data? You pass it around as data, but it is instructions that get executed. And SQL injection attacks are a popular way of breaking web sites.

What about an HTML5 game? Code or data?

A text file containing a BASIC program?
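The SQL case is easy to demonstrate. A minimal sketch with sqlite3 (table and values are ours, for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

attack = "' OR '1'='1"

# Vulnerable: the "data" is spliced straight into the "code".
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + attack + "'").fetchall()
print(len(rows))   # 1 -- the injected clause matched every row

# Safe: a parameterized query keeps the value as pure data.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attack,)).fetchall()
print(len(rows))   # 0 -- nobody is literally named "' OR '1'='1"
```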

There are some places here where I agree with you - such as making the data stack non-executable, and banning directly self-modifying code.

And a clearer distinction between code and data is neither necessary nor sufficient for improving security or reliability on computers. Linux does not handle this in a significantly different way from Windows, yet Linux is massively more secure and reliable.

Whenever you think you need "absolutes", you have misunderstood the situation. Whenever you think you have a "solution" for security, or a way to make something "secure", you have misunderstood. You can make improvements to security - I'm all in favour of that. But you can't do it completely.

And in general, the more secure you make something, the more restricted or inconvenient you make it for the normal users. Windows NT famously got a top-level security rating - as long as the computer did not have a floppy disk, CD-ROM drive, or network port. Making a computer immune to viruses and malware is /easy/ - making it immune while still allowing people to use it as they expect in a connected environment is hard.

Reply to
David Brown

David Brown wrote in news:pvll4g$mck$ snipped-for-privacy@dont-email.me:

Doesn't Intel already protect against certain data execution in their chipset directly? I think it does. ISTR something in my Xeon machine's BIOS settings for it.

Reply to
DLUNU

Which one of the little brown bits in the center is the actual processor?

Rick C.

Tesla referral code -

formatting link
Get 6 months of free supercharging

Reply to
gnuarm.deletethisbit

Someday all the peripherals and memory will be on the CPU chip and the only external interface will be a Dallas 1-wire I/O, power and signal in one pin. Processors will come in diode packages.

Rick C.


Reply to
gnuarm.deletethisbit

I'm sure there is a law written someplace about how long a new OS will remain untainted. It might be the same time as the half life of hydrogen-7.

Rick C.


Reply to
gnuarm.deletethisbit

snipped-for-privacy@gmail.com wrote in news: snipped-for-privacy@googlegroups.com:

So when did you graduate from the SkyBuck school of zero electronics knowledge?

Yes, idiot, I know you were joking.

Reply to
DLUNU

snipped-for-privacy@gmail.com wrote in news: snipped-for-privacy@googlegroups.com:

It is said that even a four-bit optical computer will be able to knock the socks off our fastest silicon architectures.

Reply to
DLUNU

And will only talk to themselves: the new world of solipstronics. ;)

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

But that's wrong. Optical logic will never be technologically interesting, because there's no gain. You can gate many electrons with one electron, but you can't gate many photons with one photon.

Cheers

Phil Hobbs

Reply to
Phil Hobbs

I know as much about photocomputing as a donkey. Can you not start off with lots of photons & accept losses through many gates? Can you not convert to electrickery & amplify on occasion, bearing in mind that a few gates can run far faster than a whole current CPU of gates?

NT

Reply to
tabbypurr

Could one encapsulate the die in a very thin layer of thermally conductive jelly before epoxying?

Could one use 4 dies closely spaced with wire bridges, permitting each its own independent thermal expansion?

Could one pass a nonconductive liquid through the IC package itself, between the stacked dies?

Does that apply significantly to 2 layers? Any gain is gain.

Monocrystalline PV solar panels have huge areas, get hot & seem to be reliable. Is there any possibility of using that in lieu of a silicon wafer?

NT

Reply to
tabbypurr

But you can't make a useful gate without gain. And optical gates are ridiculously lossy--they make ECL look like micropower CMOS.

But that isn't true. The fastest electronic devices (InP) are heading for a terahertz of bandwidth. Silicon circuitry has been limited by speed-of-light delays for a good 15 years now, and that's talking on-chip. You can certainly do ultrafast things like autocorrelation, FROG, and that sort of stuff, but it requires high enough power density to drive optical materials nonlinear. You aren't going to do that in 1e9 places on a chip--even with 1000 of them it would get so hot it would turn to lava, without worrying where all that laser power was coming from.

Cheers

Phil Hobbs

Reply to
Phil Hobbs

The chip already has a very efficient heat spreader/thermal interface material (TIM) combination, connecting to a large heat sink. The issue is the thermal gradient caused by the high dissipation when the chip is running at full power.

Sure, that's been done for ages. The IBM Thermal Conduction Module (TCM), introduced circa 1980, had 100 or 121 chips on a ceramic/refractory metal brick with over 100 wiring levels. But that doesn't save pins.

People have tried doing things like microfluidics, where they etch the backs of the dice to make channels for cooling water. Problem is, they sludge up super easily and there's no way of cleaning them. They also don't coexist with TSVs very well.

Combining a large cache memory chip with a processor might be a win. Most CPU packages have several chips inside.

Sunlight maxes out at about 1 kW/m**2, whereas a 20-mm-square processor can dissipate 100 W, which is 250 times the power density. Thus solar cells don't get as hot as processors unless the concentration ratio is on the same order.
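A back-of-envelope check of those numbers (die area assumed to be the full 20 mm x 20 mm):

```python
SUN = 1_000.0                # W/m^2, peak insolation
die_area = 0.020 * 0.020     # 20 mm x 20 mm die, in m^2
cpu = 100.0 / die_area       # W/m^2 for a 100 W processor
print(cpu / SUN)             # 250.0 -- the ratio quoted above
```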

Cheers

Phil Hobbs

Reply to
Phil Hobbs

That explains much.

formatting link
lists the Atom Z560 Silverthorne (45 nm) CPU as 2.13 GHz and only 2.5 W. If correct, that's only 6x the power density of the solar panel. Whether the temp rise could be kept within acceptable limits by a waterfall of nonconductive liquid over the face of the die I don't know. Something less corrosive than water.

NT

Reply to
tabbypurr

If correct, that's only 6x the power density of the solar panel. Whether the temp rise could be kept within acceptable limits by a waterfall of nonconductive liquid over the face of the die I don't know. Something less corrosive than water.

Reply to
Lasse Langwadt Christensen
