Intel tri-gate finFETs

They wrap the gate around 3 sides of a fin-shaped channel...

formatting link

"While not the first manufacturing technique to produce fully depleted transistors [...] it is the cheapest [...]"

-- Cheers, James Arthur

Reply to
dagmargoodboat

FinFETs have been around for a while now, a decade at least.

Intel has always had superb silicon process and crappy CPU architectures, and I bet ARM is terrifying them. Moore's Law is going to hit the atomic limits soon, and x86 will be in trouble.

Why they dumped their ARM products, I can't imagine.

John

Reply to
John Larkin

Yep, a decade, roughly, but they've never been in production. The news releases yesterday said Intel is going exclusively to finFETs, for everything! That's amazing.

I couldn't find any really good technical articles, but the pop-stuff said the FETs are fully depleted, which would really be somethin'.

I dunno either, haven't kept up with that. I do love the bumper crop of tiny, low-power CPUs though.

James

Reply to
dagmargoodboat

formatting link

Maybe they were costing them an ARM and a leg...

Reply to
Robert Baer

Depleting the channel will reduce leakage currents greatly. Congratulations on the low power solution. Previously, substrate bias was used to reduce leakages in active circuits and even more extreme bias voltages were applied to circuits in standby. With 3D, the channels are above the substrate!
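(For a rough, textbook-level picture of why this works; these are generic MOSFET formulas, nothing Intel-specific. Subthreshold leakage falls off exponentially with gate drive,

  I_{off} \propto \exp\big( -(V_T - V_{GS}) / (n\,kT/q) \big),

and a reverse substrate bias V_{SB} raises the threshold through the body effect,

  V_T = V_{T0} + \gamma\big( \sqrt{2\phi_F + V_{SB}} - \sqrt{2\phi_F} \big),

so a few hundred millivolts of back-bias can buy an order of magnitude or more in standby leakage. A fully depleted channel gets a steep subthreshold slope, n close to 1, without needing the bias at all.)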

Reply to
Globemaker

You are an idiot.

This is why you are an idiot. If they have such 'crappy' CPU architectures, why would a product line they sold off be such a threat?

Could it be that you have no clue about what is or is not a good architecture, and that your word salad is nothing more than an insult to one of the greatest chip houses ever to exist?

So, you have a fetish as well, John. By your own criteria.

I told you what your posts are one day, and you attempted to claim I had a fetish as a result of the description I gave your posts. You take 'pathetic punk' to an all new low, John.

Have a nice life, you pathetic punk.

Reply to
The_Giant_Rat_of_Sumatra

formatting link

It was before the gadgetry wave.

That's OK. They already have low-power offerings that were meant to compete in that market, and the new process will make that line even more attractive.

Reply to
The_Giant_Rat_of_Sumatra

You think x86 is a decent CPU architecture? It's a lineal descendant of the 8008, which was itself barbaric the day it was born.

It's no accident that most mobile devices use ARM these days. And that they have huge battery runtimes.

John

Reply to
John Larkin

You're always fighting Laplace's equation when you violate scaling (as you have to nowadays). The problem is getting enough E field in the channel to deplete it. This used to be easy, because you could always make the gate dielectric thin enough, but not anymore, because of tunnelling through the gate oxide.

Using high-k gate dielectric helps a lot, because less of the gate voltage gets dropped across the insulation before it has a chance to do anything useful. That helps the depletion and also helps reduce the tunnel current.
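(Rough numbers to put that in perspective; these are generic figures, not Intel's. The gate capacitance per unit area is

  C_{ox} = \kappa \varepsilon_0 / t_{phys},

so the equivalent oxide thickness of a high-k film is EOT = t_{phys} \cdot (3.9/\kappa). With a hafnium-based dielectric around \kappa \approx 20, you can use a film roughly five times thicker than SiO2 for the same electrostatic control, and since direct tunnelling falls off roughly exponentially with physical thickness, the gate leakage drops by orders of magnitude.)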

FinFETs are another approach, where you wrap the gate around three sides of the channel. Laplace's equation is a lot more friendly in that geometry. Processing them has been a nightmare until recently, though. Bravo to Intel for getting it figured out.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal
Reply to
Phil Hobbs

Intel--simple, crude and fast. Not a bad way to get in first, but it's a lot of baggage to carry later.

(Am I the only one who can't help noticing that "Sumatra" anagrams into "traumas"?)

James

Reply to
dagmargoodboat

It was simple once, but it's not any more. It's a register-poor, register-quirky CISC instruction set. In order to get speed, they have to pipeline it heavily, scoreboard registers, and execute instructions out of order. So a modern x86 CPU is actually doing a very complex emulation of the barbaric x86 instruction set. That takes a lot of silicon and a lot of power.

ARM has lots of general-use registers and can execute an instruction per clock, without melting the silicon. Since the GHz race is pretty much over, the future is low power and multicore. ARM will win that game.

You can buy ARM chips for under a dollar. $8 or so gets you a full 32-bit chip with Ethernet, seven UARTs, SPI, timers, ADC, SRAM, DRAM controller, flash controller, 250 MHz core, and SIMD vector floating point.

John

Reply to
John Larkin

No. Complex, advanced, and fast.

A turkey like you wouldn't even know how, why, or even when they 'got in', much less whether it was 'first' or not. What? You think they did not have any mil roots?

Like the presumptuous character idiots like you form in your old age?

My point as stated above, exactly. It is an observation that extends beyond stupidity, though it gets beat by you telling about it here.

If their stuff is so bad, why are folks shelling out well north of $1000 each for their 'crappy' (as Johnny 'fetish boy' Larkin called it) work?

Reply to
FatBytestard

Fewer and fewer people are dumb enough to pay $1000 for a power-hog x86 chip. Apple is already using custom ARMs in most of their products, and rumor is that their desktop computers are next. ARM and Linux own the cell phone and tablet business. Server farms will migrate to ARM to save power.

Linux is open source and free, and ARM is almost the same: there is a license fee, but you get to put the architecture on your own chips, with your own graphics and peripherals, as Apple and everybody else is doing.

x86 architecture is almost 40 years old now. Windows is pushing 30. People are losing interest in paying kilobucks for obsolete captive technologies.

It's time for change!

John

Reply to
John Larkin

Yes, that's the "lot of baggage to carry later."

Imagine all that done in finFETs...

I wonder why Intel dropped the ARM thing. Probably marketing. Getting people off the Intel instruction set would've been riscy.

James

Reply to
dagmargoodboat

Intel really needs that new transistor to compete with e.g. ARM Cortex-A9:

Intel Atom vs ARM:

formatting link
formatting link
Quote: "... So with a quad core Cortex-A9 at 800MHz you would get 2.5-4x the performance of a single core Atom at 1.6GHz while still using less power... ... Atom's lowest idle power is 100mW!!! - this is in the deepest sleep mode doing absolutely nothing! That drains most batteries rather quickly ... Compare this with the idle power of ARMs which is typically well below

1mW. ..."
formatting link
Mar. 07, 2008 Analyst smashes the Intel Atom:
formatting link
-
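Back-of-the-envelope, and the battery size is my assumption rather than anything from those articles: with a 5 Wh battery, idle alone at 100 mW gives t = E/P = 5 Wh / 0.1 W = 50 hours, about two days, while 1 mW of idle gives 5 Wh / 0.001 W = 5000 hours, roughly seven months. That is the gap the quote is talking about.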

Is ARM what I want?:

formatting link
Quote: "... The White Paper "ARM Cortex-M3 Processor Software Development for ARM7TDMI Processor Programmers" by Joseph Yiu and Andrew Frame highlights the differences:
formatting link
... I myself still use small PICs for small projects. But I have decided to only use ARM for the bigger projects. Like you I wanted to make the big step to an advanced architecture with more possibilities. ARM is for me personally the way to go. To sum it up why:
# Long-standing, widely supported architecture.
# Multiple manufacturers provide similar chips, handy when you run into an out-of-stock situation.
# Large supportive community.
# Any knowledge you gain can be used at your profession if you are in the electronics/programming world.
# ARM is busy designing ARM cores that run at 2GHz and higher.
# Free GCC compiler.
# Lots of people have experience and problems are more easily solved that way.
# Errata are discovered faster and information is shared by the community.
# Usually the faster chips with ARM cores have standard buses for cameras and/or audio buses.
# Wide selection of support chips.
I am sure I have forgotten a few...
... As has been said time and again, setting up the tool chain is the hardest part. ... I set this up more or less following the Jim Lynch tutorial (http://gnuarm.alexthegeek.com/) and using known "good" code/projects/makefiles for ARM7 just to make sure I could compile something. ... I started with the EK-LM3S1968 (Cortex M3) dev board from TI with the Code Red IDE. (This board is programmed and debugged over USB; the great thing is it is ALSO a hardware programmer/debugger!) I am using an open-source toolchain (Eclipse, GDB, OpenOCD) with an Olimex USB Tiny programmer. It took a week to figure it all out, but now it works great, debugging with breakpoints etc. I have written a wiki on the entire procedure of setting up the toolchain (if anyone is interested I'll post, otherwise I'll get to it eventually) ..."
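For anyone wondering what that toolchain actually builds, here is a minimal bare-metal sketch in C. The GPIO register addresses are LPC2148-style (an ARM7TDMI part), and the pin number, linker script and startup file names are placeholders, so treat it as illustrative rather than a recipe:

/* blinky.c: minimal bare-metal program for a GNU ARM cross toolchain.
 * Build, roughly:
 *   arm-none-eabi-gcc -mcpu=arm7tdmi -O2 -nostartfiles \
 *       -T lpc2148.ld startup.s blinky.c -o blinky.elf
 * Register addresses below are LPC2148-style GPIO and are illustrative
 * only; substitute the ones from your own MCU's user manual.
 */
#include <stdint.h>

#define IO0DIR  (*(volatile uint32_t *)0xE0028008)  /* port 0 direction  */
#define IO0SET  (*(volatile uint32_t *)0xE0028004)  /* set port 0 pins   */
#define IO0CLR  (*(volatile uint32_t *)0xE002800C)  /* clear port 0 pins */
#define LED_PIN (1u << 10)                          /* assumed LED pin   */

static void delay(volatile uint32_t n)
{
    while (n--)
        ;                                           /* crude busy-wait   */
}

int main(void)
{
    IO0DIR |= LED_PIN;                              /* make the pin an output */
    for (;;) {
        IO0SET = LED_PIN;                           /* LED on  */
        delay(500000);
        IO0CLR = LED_PIN;                           /* LED off */
        delay(500000);
    }
}

Once it links, OpenOCD (as in the quote) is what gets the .elf onto the chip and gives you GDB breakpoints.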

Reply to
Glenn

Another technology that has spread through almost all of Apple's products - transparent use of both the processor and the graphics chip:

Grand Central Dispatch:

formatting link
Quote: "... Grand Central Dispatch (GCD) is a technology developed by Apple Inc. to optimize application support for systems with multi-core processors and other symmetric multiprocessing systems.[2] It is an implementation of task parallelism based on the thread pool pattern. It was first released with Mac OS X 10.6, and is also available with iOS 4. ... A task can be expressed either as a function or as a "block."[9] "Blocks" are an extension to the syntax of C, C++, and Objective-C programming languages that encapsulate code and data into a single object in a way similar to a closure.[7]

Grand Central Dispatch still uses threads at the low level but abstracts them away from the programmer, who will not need to be concerned with as many details. ... Examples ..."
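Here is a minimal sketch of what that looks like, in plain C with the blocks extension. The queue-priority constant and the dispatch-group calls are the ordinary public GCD API, but the "work" inside the block is made up for illustration:

/* gcd_demo.c: tiny Grand Central Dispatch example (Mac OS X 10.6 or later).
 * Build with: clang gcd_demo.c -o gcd_demo
 */
#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void)
{
    /* A block is the unit of work; GCD runs it on a thread from its pool. */
    dispatch_queue_t q = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t done = dispatch_group_create();

    dispatch_group_async(done, q, ^{
        long sum = 0;                      /* stand-in for real background work */
        for (long i = 0; i < 10000000; i++)
            sum += i;
        printf("background block finished, sum = %ld\n", sum);
    });

    /* The submitting thread stays free; here we simply wait for the block. */
    dispatch_group_wait(done, DISPATCH_TIME_FOREVER);
    dispatch_release(done);                /* manual release, pre-ARC style */
    return 0;
}

The point of the Wikipedia quote is exactly this: you hand GCD a block and it worries about the threads.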

Used technologies:

LLVM:

formatting link

The LLVM Compiler Infrastructure

formatting link

LLVM presentation at Google given by Chris Lattner:

formatting link

formatting link
Quote: "... To the degree that Mac OS X 10.6.0 is more bug-free than the previous

10.x.0 releases, LLVM surely deserves some significant part of the credit. ... By committing to a Clang/LLVM-powered future, Apple has finally taken complete control of its development platform. ... Anything that touches the file system may stall at the lowest levels of the OS (e.g., within blocking read() and write() calls) and be subject to a very long (or at least an "unexamined-by-the-application- developer") timeout. The same goes for name lookups (e.g., DNS or LDAP), which almost always execute instantly, but catch many applications completely off-guard when they start taking their sweet time to return a result. Thus, even the most meticulously constructed Mac OS X applications can end up throwing the beach ball in our face from time to time. ... Well, what if I told you that you could move the document analysis to the background by adding just two lines of code ... Thus far, I've been discussing "units of work" without specifying how, exactly, GCD models such a thing. The answer, now revealed, should seem obvious in retrospect: blocks! ..."

August 31, 2009 Mac OS X 10.6 Snow Leopard: the Ars Technica review:

formatting link
Quote: "... Clang does not yet support some of the more esoteric features of GCC. Clang also only supports C, Objective-C, and a little bit of C++ (Clang(uage), get it?) whereas GCC supports many more. Apple is committed to full C++ support for Clang, and hopes to work out the remaining GCC incompatibilities during Snow Leopard's lifetime. ... Clang compiles nearly three times faster than GCC 4.2. ..."

formatting link

formatting link

June 20th, 2008 Apple's other open secret: the LLVM Compiler:

formatting link
llvm_complier.html
formatting link

LLVM is one of the best pieces of open-source software available; check it out!:

formatting link

LLVM: A Compilation Framework for Lifelong Program Analysis & Transformation

formatting link

020075abs.htm
Reply to
Glenn

Clang:

formatting link

Status history:

formatting link
Quote: "...

  • 25 October 2010 Clang/LLVM able to compile a working Linux Kernel.[22]
  • January 2011 Preliminary work has been done to support the draft C++0x standard, and a few of its new features are supported in the development version of Clang.[23][24]
  • 10 February 2011 Clang able to compile a working HotSpot Java Virtual Machine.[11] ..."
Reply to
Glenn

Hey, what is the CPU and the OS of the computer on which you are typing this?

:)))) Apple is x86 for the most part.

Everything has its price. Nobody gives up good stuff for free.

If anybody could offer significant advantages in cost and/or performance, it would certainly be welcomed by the market. As long as there is not much of a difference, it doesn't matter what is under the hood.

I have yet to see an ARM as capable as a 10-year-old x86. I have yet to see a Linux as usable as Windows 98.

They've been preaching it for the last 20 years, haven't they?

VLV

Reply to
Vladimir Vassilevsky

formatting link

Do you remember the i860 and i960 RISC processors that Intel made back in the 1990s? Those were good processors, and Intel dropped them. I recall hearing at the time that the reason was that the CISC processor group at Intel didn't want the competition.

I'll bet it's the same thing with ARM.

It'll be interesting to see just how big a bite this'll take out of ARM's arm. Somehow I don't think this is going to be an "ARM killer" -- it'll still be a crappy processor, and the ARM will still be a nice one. All it'll take is for some independent foundry or group thereof to figure out a lower-power process, and Intel processors will be out the door again.

--
Tim Wescott
Wescott Design Services
Reply to
Tim Wescott

Intel will win because their power consumption will drop so low that they will be able to beat them on speed and function, hands down.

Reply to
Chieftain of the Carpet Crawle
