I realise that, of course - but the comment still stands: do you have an app that needs to go to -196°C? Mars is the only deployment I can think of; most others are covered by the terrestrial military -55°C to +125°C range.
I'm amazed they have achieved this operation over the 0.2V - 3.9V supply range. 0.2V is not much voltage at all... I would have thought transistor threshold voltages would have caused issues at such a low voltage. How are they achieving such low threshold voltages?
Sounds interesting to me as well; perhaps some of the other 90nm manufacturers could shed some light: does it really work? Probably not at full speed, but even at DC, do the FETs really turn on at 0.2 V? I'm not sure what this synchronous/asynchronous gimmick is all about, but I'm glad they may soon have a marketable alternative to the rest of the makers, who I guess all became a bit too big to talk to (and to stay innovative).
Even Intel has realized that frequency kills. Not sure why they are so excited about touting 2 GHz.
The FETs are running in "active mode", basically just behaving as analog (low-gain) transistor amplifiers in their subthreshold region... kinda on, kinda off, kinda in between.
I am sure that the speed of operation is very slow down at 0.2V.
At 3.9 volts on a 90nm transistor, I am guessing the lifetime to breakdown is about a week, or less.
I'd like to see them get block RAM, processors, DLLs/PLLs, MGTs, etc. to work in the same fashion. I am sure we all know the stories of the attempts at making async microprocessors, and how they were abandoned for having far too much area and no real performance benefit.
And when async logic is running as fast as it can, it is going to dissipate 2 to 3 times the power, as that is how many more wires and transistors are switching. Async doing nothing is very low power. I just love systems that do nothing: they end up going away (why does anyone care what a system does when it has nothing to do? Just turn it off!).
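The "2 to 3 times the power" argument above comes straight from the classic CMOS dynamic-power relation: if async handshaking toggles 2-3x as many wires per useful operation, the activity factor alpha goes up by the same ratio. A back-of-the-envelope sketch (all numbers are made up for illustration, not measured from any real part):

```python
def dynamic_power(alpha, c_farads, vdd, freq_hz):
    """Classic CMOS switching power: P = alpha * C * Vdd^2 * f.

    alpha: activity factor (fraction of switched capacitance toggling
    per cycle). The capacitance, voltage, and frequency below are
    illustrative placeholders only.
    """
    return alpha * c_farads * vdd ** 2 * freq_hz

# Hypothetical sync fabric: modest activity factor.
sync_p = dynamic_power(alpha=0.15, c_farads=1e-9, vdd=1.2, freq_hz=500e6)

# Async handshaking toggles roughly 2-3x as many wires per operation
# at the same useful throughput:
async_p = dynamic_power(alpha=0.40, c_farads=1e-9, vdd=1.2, freq_hz=500e6)

print(async_p / sync_p)  # ~2.7x -- the ratio of activity factors
```

Since C, Vdd, and f cancel in the ratio, the whole argument reduces to the relative activity factor - which is why the counter-argument later in the thread (that the sync clock tree also counts toward alpha) matters.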
Their press announcement did say that now that they have the core working, they need to get their (hardened?) IP to work, next.
Without all these bells and whistles that now make up a modern FPGA offering, they are basically back in the XC2064 era: basic fabric, some IO, and no tools.
One other point: their design is about 16X more area (less density) than a modern FPGA. That is going to be a real killer.
How cold does it get in space? (where no one can hear you scream)
They'd potentially have another market if they increased the operating temperature well beyond 130°C - downhole applications in the oil industry, for instance.
Thanks for the info. So 0.2V being above the threshold voltage is not surprising at 90nm. I had no idea where the breakdown voltage would be; your mentioning the 3.9V makes me think it is about 5V. Not so bad - come to think of it, we've been used to reverse base-emitter breakdown voltages of around 6-7V (and about 3V for some really hf parts) for decades.. :-)
On a side note, what did Xilinx do back then? I doubt they made the specification of the insides public so other people could write their own tools (I keep on dreaming about that day...), so what was it?
From one of Peter Alfke's posts here (May 15th, 2003):
"Silicon is one tough material ! The 125 or even 150 degree limit is more a plastic package issue than a silicon issue. I have helped down-hole (oil-drilling) applications where our chips functioned (with relaxed performance) for many weeks at 175 degree ambient, and the user was pushing for 200 degrees."
The jury is still out on that one, as we noted a couple months back with the ARM.
Actually, when you count the clock in a sync design based on LUT/FF, it works out about the same, especially when you include the clock distribution network.
I don't see that being any different than having to leave half an FPGA idle when you need to turn up the frequency on your FPGAs because of heat/power limits.
Note they were careful in the wording: "operated correctly".
I would place 0.2V at a data-retention point, not a clocking one.
CMOS nodes will 'stay-put' at much lower voltages than they will toggle, and there is a clear need for a spec point that allows instant-config from the lowest possible Icc.
Xilinx have been playing a little with this, but so far are a long way off 0.2V.
The Vt is defined as the voltage at which Idsat equals some value X. The transistor has a subthreshold region where a smaller gate voltage still causes some current to flow, well down into the millivolts. It isn't digital at all; it is all analog design...
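The subthreshold behaviour described above is usually captured by the standard exponential model, Id ≈ I0 · exp((Vgs - Vt)/(n·kT/q)). A minimal sketch, where the Vt, slope factor n, and I0 values are illustrative assumptions and not taken from any real 90nm process:

```python
import math

def subthreshold_current(vgs, vt=0.35, n=1.5, i0=1e-7, temp_k=300.0):
    """Approximate drain current (A) in the subthreshold region.

    Standard exponential model: Id = I0 * exp((Vgs - Vt) / (n * kT/q)).
    vt, n, and i0 are illustrative placeholders, not real process data.
    """
    k_over_q = 8.617e-5            # Boltzmann constant / charge, in V/K
    v_thermal = k_over_q * temp_k  # ~26 mV at room temperature
    return i0 * math.exp((vgs - vt) / (n * v_thermal))

# Even at Vgs = 0.2 V, below the assumed 0.35 V Vt, the current is
# nonzero, just exponentially smaller -- "kinda on, kinda off":
print(subthreshold_current(0.2))
print(subthreshold_current(0.35))  # equals i0 at threshold by definition
```

The exponential also shows why subthreshold operation is so slow: a couple of hundred millivolts below Vt costs orders of magnitude in drive current, and charging the same node capacitance with that trickle takes correspondingly longer.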
"...your mentioning the 3.9V makes me think it is about 5V..."
You would blow the 90nm gate to smithereens in a blink at 6 volts.
-snip-
No, we made our own crude (really) tools, and folks made use of them, and folks started to ask us for better ones, and slowly we figured out how, and provided better and better tools.
No one was interested in making tools for some unknown and unheard of company that was "wasting" transistors. Back then, we were vilified by the semi industry as being quacks, and con artists (almost).
Real semi companies owned their own fab, and carved their own masks on rubylith with Xacto knives.
The estimates come from earlier papers that were published by this group when they were students and professors before they found funding.
Who knows? They don't have anything but slick press releases right now.
Plenty of room for specialty stuff in the market. Just have to pick the right stuff.
Is the extreme DSP market $2B? $5B? Or is it $500M? Who is going after it? All part of the game.
Is it like the structured ASIC market? All hot air, and no money (with folks leaving so fast)?
How does the cost of the next process ASIC affect their business model?
Can they hope to lay out in 65nm and release that in 2008, when we are at full production with 65nm - 65nm will be the 'old' product by then? We will beat them on processing power just by being ahead a node or two.
Got to think about that business model: they can use up all that seed money pretty fast fixing masks....only to face a 35nm FPGA that is 1/8 the cost and area, and 4X the performance?
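The "1/8 the cost and area" figure above is roughly what ideal geometric scaling predicts: die area goes with the square of the linear feature size. A quick sanity check (ideal Dennard-style scaling only; real shrinks deviate from this):

```python
def area_shrink(old_nm, new_nm):
    """Ideal area scaling between process nodes: area scales with the
    square of the linear feature size (ignores non-scaling structures
    like I/O pads and analog blocks)."""
    return (old_nm / new_nm) ** 2

# Going from their 90nm design to a hypothetical 35nm competitor:
print(area_shrink(90, 35))  # ~6.6x smaller per ideal scaling
```

Ideal scaling alone gives ~6.6x; add architectural improvements over a couple of generations and the ballpark "1/8 the cost and area" claim is plausible - which is exactly the treadmill a seed-funded startup has to outrun.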
And their claims of space, radiation, etc. means they have to use epitaxial wafers (not bulk CMOS), and they have to be heavy ion immune to latchup, SEL, SEU, SER, etc....that is a much tougher thing to prove!
Our QPRO line is already in space, and does work. Has a history of success. Space folks are real hard to convince to do anything new (believe me, I have tried). No one wants their mission to be the one that is used as the "case study of a disaster."
If I worked for a company that has nothing available to sell, I would really be embarrassed if somebody forced me to brag about multi-million gates and 1.93 GHz in one paragraph, and minus 196 degree operation and 0.2 V to 3.9 V supply voltage in the next. But normally super-critical people who enjoy nailing us for the slightest oversight just drool... The date was April 24; maybe it was just 23 days delayed... Peter Alfke, speaking for himself
No question. The point is that this is their first silicon cut; it will likely get smaller over time, and, as you have clearly said in the past, die size isn't everything - it's the cost/performance of the finished product that counts. So your argument about the bloated size of an FPGA in comparison to an ASIC applies equally to other products as well: it's the end customer, not their competitors, that decides whether the product has reasonable cost/performance to design their products with.