Async FPGA ~2GHz

Is that how Xilinx engineering and sales feel when they are out pumping interest in a next-generation product (before finished production parts are available) with select customers looking for advanced design wins?

Reply to
fpga_toys

It's how any reasonable engineer feels looking at those ridiculous claims. Don't tell me you believe them?

Reply to
Eric Smith

Austin,

some more from me,

I was referring to the voltage below which the channel is off, i.e. the cutoff voltage, so we are talking about the same thing. I have extensive experience using FETs in analog designs (mostly JFETs, of course), but I may well have forgotten which letter goes for which parameter. Of course I know you won't open a FET down to its lowest achievable Rdson at 2GHz (I believe this is what you refer to as "saturation").

Yep, apparently so. What I noted to myself was that they could survive (sometimes perhaps not) a short-term 3.3V glitch (not that I would like to do it, but with all these multiple supply voltages nowadays one just cannot avoid the thought... :-).

This must have been in the pre-LCA time; back in 1990 or so I asked about the programming specs of the LCA and was denied them. I did appreciate the devices then, and I still do now; my only problem is that all my design tools must be under my control (this makes a world of difference so many times). Well, the truth is, I have never gone hard enough after the FPGA data, so I don't really know how difficult this goal is to achieve. Some time ago I got some valuable help from Peter on other devices I use; I imagine if I really need something and (as it happens to be) I am not in anyone's way, things could be sorted out...

Dimiter

------------------------------------------------------ Dimiter Popoff Transgalactic Instruments

formatting link

------------------------------------------------------


Reply to
dp

Having spent most of the 1980s and 1990s working with early startups, there is another factor that you seem to miss. Startups are frequently captive design/production facilities to larger companies that need their product, and those companies greatly influence the initial product design by stating what they want to buy. When a large early customer says "I need x, y, and z delivered in 100K qty" between two dates, that becomes your product, business plan, and production schedule.

Reply to
fpga_toys

It's not a matter of belief; they will either deliver or not. In 35 years of engineering I've done my share of products that people said could not be done, and delivered to their disbelief.

We have modems today that broke the modulation "laws" set in the '60s ... by more than an order of magnitude. We have semiconductor design rules today that exceeded a large number of "walls" in process over the last 30 years too. There are many more examples of what cannot be done that have been easily broken by innovative engineers.

So, I'm not in a big hurry to claim what cannot be done ... I'll wait and see what is delivered, rather than ranting about what is impossible.

Reply to
fpga_toys

I find them plausible - what is hard to believe?

They do not mean they run at 2GHz _and_ @ 0.2V :)

"operated correctly" is their carefull wording. To me, correct (expected) operation at 0.2V is data retention.

Target Vcc sounds ~1.2V: "is capable of running common FPGA performance benchmark designs at up to 1.93 GHz at 1.2V"
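
A rough sanity check, with assumed numbers of my own (Vth and the alpha exponent are guesses for a 90nm process, nothing the vendor published), using the usual alpha-power law f_max ~ (Vdd - Vth)^alpha / Vdd:

# Back-of-envelope sketch, not vendor data: alpha-power law for CMOS speed.
# Vth = 0.3 V and alpha = 1.3 are assumed values typical of a 90nm process.
def fmax_ratio(vdd, vth=0.3, alpha=1.3):
    if vdd <= vth:
        return 0.0  # below threshold: no switching, data retention at best
    return (vdd - vth) ** alpha / vdd

f_claimed = 1.93e9  # their benchmark figure at 1.2 V
base = fmax_ratio(1.2)
for vdd in (0.2, 0.6, 0.9, 1.2):
    est = f_claimed * fmax_ratio(vdd) / base
    print("Vdd = %.1f V -> est. f_max ~ %.2f GHz" % (vdd, est / 1e9))

On those guesses 0.2V is below threshold entirely, which squares with reading "operated correctly" at 0.2V as retention only.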

-- not sure what they were testing at 3.9V - they might have meant the IO buffers, and probably not the core Vcc!

Also, I take these as Lab-bench-test values, so production margins are not yet added.

These are the sort of numbers you try to keep from marketing, in case they get promised/morphed into performance minimums :)

-jg

Reply to
Jim Granville

formatting link

It sounds very suspicious. A serious startup usually doesn't make public announcements at this stage of the game; they'll show what they have to a few interested parties under NDA. Also there are just too many claims: 0.2 - 3.9V operation on a 90nm process?

You never know, it could be real, but I'd wait until they show something publicly before I got excited about this.

Reply to
Josh Rosen

Or take a real project to them that just isn't possible on other products, and see what's behind the NDA?

Reply to
fpga_toys

Just a reminder: Downhole oil exploration has used Xilinx FPGAs for many, many years, running them for weeks on end at 175 degrees C at the bottom of a borehole.

200 degrees C proved to be problematic. These chips were not in plastic packages, but usually assembled as die in a ceramic hybrid. Of course, we guaranteed no parameters, but functionality did not suffer. (My contacts were always either from Texas, or Paris, or Norway, the three hotbeds of oil exploration.) Peter Alfke
Reply to
Peter Alfke

Any ideas what failed / why? I thought the lifetimes degraded in a simple log scheme with temperature, and that 200 degrees C would work, but for less time. A "disposable probes" mindset is really what's needed in this market sector ...
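
For what it's worth, the usual rule of thumb is an Arrhenius acceleration model; a minimal sketch, assuming a typical 0.7 eV activation energy (my guess, not a Xilinx figure):

import math

K_BOLTZMANN = 8.617e-5  # eV/K

def life_ratio(t_low_c, t_high_c, ea_ev=0.7):
    """Arrhenius sketch: lifetime at t_low divided by lifetime at t_high."""
    t_low, t_high = t_low_c + 273.15, t_high_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN) * (1.0 / t_low - 1.0 / t_high))

# A part good for weeks at 175 C should last ~2.6x less at 200 C:
print(life_ratio(175, 200))

So on that model 200 degrees C should just mean a shorter, predictable life, not outright failure.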

-jg

Reply to
Jim Granville

I think it was not a lifetime issue, but rather a parametric change that prevented operation. But I do not really know. Most of the oil-drillers were happy and appreciative that they had any configurable logic that worked at all (they also used Z80s and some SRAMs plus some A/Ds.) Brave souls and nice guys... Peter Alfke

Reply to
Peter Alfke

Disposable probes (and other electronics) are common in the oil industry. High temperatures are a problem, but it's actually often temperature changes that are the worst - tolerating repeated temperature changes from room temperature to 125 C can be worse than sustained higher temperatures. The other big problem is vibration - cards get shaken to pieces. So you often get systems specified for, say, 20 hours total run time - after which it is replaced *before* it breaks.
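
The cycling-is-worse observation is usually captured with a Coffin-Manson style model; a quick sketch with an assumed fatigue exponent (n = 2 is a ballpark for solder joints, not a figure from any datasheet):

# Coffin-Manson style sketch: cycles to failure scale as (delta_T)^-n,
# so wider temperature swings cost disproportionately many cycles.
def relative_cycles_to_failure(delta_t, delta_t_ref=100.0, n=2.0):
    return (delta_t_ref / delta_t) ** n

# Room temp (25 C) to 125 C is a 100 C swing; 25 C to 175 C is 150 C.
print(relative_cycles_to_failure(100.0))  # 1.00 (reference case)
print(relative_cycles_to_failure(150.0))  # ~0.44: less than half the cycle life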

Reply to
David Brown

That's what I thought, but then somebody told me that he might keep the electronics down there, at 175 degrees C, for a few weeks... Slowly cooking... Peter Alfke
Have we gotten this thread away from -193 degrees, 0.2 V and 2 GHz, and related marketing exuberances?

Reply to
Peter Alfke

What "laws" are those? AFAIK, today's modems are still subject to the Shannon-Hartley Theorem and the Nyquist Sampling Theorem, which substantially predate the 1960s. Perhaps these putative "laws" from the 1960s were promulgated by people with little understanding of information theory?

Reply to
Eric Smith

Without a doubt. "Information theory" and achievable practice would take several decades to mature, progressing through doctoral study, then general graduate study, and finally in the 1970s becoming mainstream undergraduate material.

But much more than that was a steady progression of improvements in the transmission channel, which started out with attachment limitations: acoustic coupling to the carbon mics on the 500 desk sets that were the norm in the mid-1960s and into the 1970s. Carbon packing, analog voice bandwidth of around 5 kHz set by loading coils, purely analog encode/decode circuits, etc. ... all made it common knowledge that the upper limit on consumer modems was well under 1200 baud. Direct connection via a Data Access Arrangement (DAA) would cost another $100/mo and get you up to 1200 baud for a Vadic, or even 1800 or 2400 baud for a $5,000 modem in the mid-1970s. The transition to 2500 desk sets during the late 1960s and 1970s allowed for lower loop currents and fewer low-pass filters (loading coils) as the 500 desk sets were phased out. With that came slightly higher voice bandwidth, and a new generation of modems which capitalized on it where the new lines with less crosstalk and fewer loading coils were installed.

Deregulation of the customer interface led to a rapid evolution of very low-current electronic handsets, improvements in crosstalk, plus the ability for consumer modems to attach directly to the line. By the mid-1980s bandwidth was available to support 2400-9600 baud modems, and with cheap microprocessors a host of digital modem technologies emerged, among them a series of digital signal processing modems (including the Telebit Trailblazers and advanced phase-encoding schemes) which could take advantage of the cleaner phone lines with higher bandwidth in many areas. As digital exchanges became the norm, so did bandwidth, and with it the evolution to today's 56K.

So yes, there were plenty of people who understood the limits of the technology of the day, and were quite able to state the best achievable modem bandwidth for that state of the Bell system. But the telephone system improved, digital technologies emerged, and with them those limits continued to be broken by advancements in the medium, coding technology, compression, error correction, etc. ... none of which were visible in the 1960s or 1970s, when everyone "knew" just what the fastest modem was that could be built assuming best-case engineering practices.

Just as the theory that man could fly existed some 1,000 years before the Wright brothers actually proved it, Shannon's visionary work didn't enable 56kbps modems to be even dreamed of in 1948, when the regulatory, technology, and bandwidth limits allowed only a few kbps using best possible practice, and real connections achieved only a fraction of that.

Reply to
fpga_toys

And by the way ... I still own 3 of my first 4 modems ... I sold my 110-baud ASR33 with built-in coupler back in early 1979 to buy my 300 baud TI "Silent 700" and an original AJ oak-box 300 baud modem (I still have the AJ). I also still own my Vadic 1200 green shoebox, which connected via a Pacific Bell installed DAA from 1977 to 1979 so I could dial long distance into work from San Luis Obispo to Menlo Park. I also still own the two Telebit Trailblazers that replaced it a few years later.

Even as late as 1999 my rural home phone lines would only support 14.4K connection rates on a good day ... and 9,600 more than likely, so the Telebits remained useful for years after they were out of normal use. That year I started a wireless internet cooperative to get broadband via early Aironet 802.11b radios, costing $1,200/user for a UC4800, LMR600 and a Conifer T24 dish.

I also still have my 1976 home computer (LSI11/03) and my 1980 home computer (LSI11/23 with V7 Unix), plus the Fortune 32:16 and LSI11/73 which replaced them. I also still own the TRS80 Model 1 that I used to develop the Z80 firmware for the Ampex TMS100 9-track tape formatter I built for the LSI11/03 in 1978, as well as a lot of other interesting period toys ... including a desktop smoked-plexiglass PDP-8 lab machine, a pair of LSI11/03 based VT71 DEC terminals, a modest collection of ADM3s, LA21 DecWriters, and more.

I wish I had purchased the 1401 tape system I first programmed on when it was salvaged :) Or one of the Bendix G15's I used later. But that is another story :)

Reply to
fpga_toys

For those who are historically challenged, and did not live through the 1960s and 1970s with modems:

formatting link

Note the conclusions on Page 104:

"In the early 80's, however, it was believed that 9600 bit/s would be the ultimate practical limit."

In the early 1970's it was considered that 1200 baud would be the limit for single pair dialup, and a bit higher for split conditioned 4 wire data circuits.

Reply to
fpga_toys

FWIR, what enabled them to sidestep the apparent limits was group delay equalisation, and the ability to trade off signal/noise for more apparent bandwidth [= DSP and better codecs].

That takes things in rather a different direction than Nyquist/Shannon...

This has fed on into ADSL (which also keeps advancing), and I think 10GBd Ethernet uses the same ideas.

-jg

Reply to
Jim Granville

Hi Jim, I learnt from some friends over lunch a while back that better coding helps! Today's DSP power, combined with better receivers using Turbo Codes or Low-Density Parity-Check codes, approaches the Shannon capacity limit very closely. Wikipedia has some stuff about these codes. Cheers, Syms.
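
P.S. The floor those codes are chasing works out to about -1.59 dB Eb/N0; a quick sketch of the standard capacity arithmetic (the spectral efficiencies are chosen arbitrarily for illustration):

import math

# From C = B*log2(1 + (Eb/N0)*(C/B)): at spectral efficiency eta = C/B,
# reliable coding needs Eb/N0 >= (2**eta - 1) / eta. As eta -> 0 this
# tends to ln(2), i.e. about -1.59 dB.
def min_ebn0_db(eta):
    return 10.0 * math.log10((2.0 ** eta - 1.0) / eta)

for eta in (2.0, 1.0, 0.5, 0.01):
    print("%4.2f bit/s/Hz -> Eb/N0 >= %+.2f dB" % (eta, min_ebn0_db(eta)))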

Reply to
Symon

I wrote:

[... long description of changes to phone system ...]

OK, but that still doesn't explain what "laws" were broken.

I was an engineer at Telebit, the company that introduced the world's first 18Kbps dialup modem in 1985 (though I didn't join the company until 1991). Telebit's patented PEP modulation is the forerunner of OFDM, now widely used in many digital communication systems.

But I don't recall any claims that the Telebit modems were breaking any "laws", nor were the Bell 212A/V.22/V.22bis modems that came before, nor the V.32/V.32bis/V.34 modems that came after.

V.90 is somewhat of a special case, in that it does not work between two analog subscriber lines.

"Common knowledge" is much different than "laws".

Telebit's modem technology worked fine on phone connections much worse than what the US had in the 1960s. It was widely deployed in countries with poor telephone infrastructure, where modems using standard modulation technology were useless.

Several of Telebit's modem engineers speculated that it should be possible to run PEP over tin cans and string, and still get a stable (but relatively low bit rate) connection, but AFAIK the experiment was never actually conducted.

Eric

Reply to
Eric Smith
