fastest FPGA

Ray,

I agree. Good post.

Totally_Lost is, unfortunately, lost... totally.


Reply to
Austin Lesea

So ... you are claiming any valid design will run in any Xilinx FPGA at max clock rate?

Reply to
Totally_Lost

Excuse me .... "Over clocking" is defined as clocking faster than the manufacturer's rated specs. Certainly, running any Xilinx FPGA past its specified clock rates is clearly "over clocking". In fact, overclocking is by definition the practice of pushing parts (CPU, memory, etc.) to the edge of the process where they fail, and backing off so they don't.

Who is lost? totally .... ????

IFF the specs actually existed in full and were published openly.

Reply to
Totally_Lost

$2K of recycled Virtex parts makes a pretty nice machine :)

Reply to
Totally_Lost

Or pumping/pimping their stock price so their options look better to cash out.

Outlandish claims that FPGAs cannot be overclocked, that you just use them until they fail and back off, are just short of lies. That ignores other effects, like thermally induced early-life failures when running at max die temperature and way past design currents with AGGRESSIVE cooling (i.e. holding the case temp at -60C in a moisture-free atmosphere).

When the only spec for the safe operating range is die temp ... I suspect that doesn't cover running the parts on a dry-ice heat sink right up to the point where VCCINT pin currents become a problem.
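For anyone who wants rough numbers on the temperature half of that argument, the standard Arrhenius acceleration model gives a feel for how much faster wear-out runs at an elevated junction temperature. A quick Python sketch; the 0.7 eV activation energy is an assumed, mechanism-dependent figure, not anything from a Xilinx data sheet:

import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_acceleration(t_use_c, t_stress_c, ea_ev=0.7):
    # Acceleration factor for thermally activated wear-out mechanisms.
    # ea_ev is an assumed activation energy; the real value depends on the
    # failure mechanism (electromigration, oxide breakdown, etc.).
    t_use = t_use_c + 273.15       # junction temperatures in Kelvin
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

# Example: holding the junction at 100C instead of 60C
print(arrhenius_acceleration(60, 100))   # ~14x faster aging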

Reply to
Totally_Lost

Since Austin is technically and English-language challenged, here is an aid to decrypting this bullshit claim, which Austin would proudly object is only ammonium nitrate .... hehehehe

The Free On-line Dictionary of Computing (27 SEP 03) [foldoc]

overclocking

Any adjustments made to computer hardware (or software) to make its CPU run at a higher clock frequency than intended by the original manufacturers. Typically this involves replacing the crystal in the clock generation circuitry with a higher frequency one or changing jumper settings or software configuration.

If the clock frequency is increased too far, eventually some component in the system will not be able to cope and the system will stop working. This failure may be continuous (the system never works at the higher frequency) or intermittent (it fails more often but works some of the time) or, in the worst case, irreversible (a component is damaged by overheating). Overclocking may necessitate improved cooling to maintain the same level of reliability. (1999-09-12)

Reply to
Totally_Lost

I can think of many lab situations where 'over clocking' is both real and desirable.

If you want to do this in an FPGA, you should be able to include, by design, a deliberately poor critical path that can give information about the Vcc/temp/MHz threshold. Scatter more than one about the die if you expect thermal skews.

Then use this information to control your aggressive cooling (water/freon pumps, anyone? :), clock speeds, and Vcc.

This type of test can also be valid on a full production design, to verify you DO actually have a good margin on operation.

I see a number of 'smarter' voltage regulators/controllers recently released that are designed for exactly this type of verify-at-the-margins design.
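As a rough sketch of the control side of that idea (nothing vendor-specific; the read/write functions are purely hypothetical stand-ins for canary-path readback and a clock/cooling controller), the servo loop in Python is basically:

import time

def read_canary_error_counts():
    # Hypothetical readback of per-canary failure counts since the last poll.
    raise NotImplementedError

def set_clock_mhz(mhz):
    # Hypothetical interface to the clock synthesizer.
    raise NotImplementedError

def set_cooling_level(level):
    # Hypothetical interface to the fan / water / freon pump controller.
    raise NotImplementedError

def margin_servo(start_mhz, step_mhz=5, max_cooling=10, poll_s=1.0):
    # Push the clock up until a deliberately-poor canary path starts failing,
    # then add cooling; only back the clock off once cooling is maxed out.
    mhz, cooling = start_mhz, 0
    set_clock_mhz(mhz)
    while True:
        time.sleep(poll_s)
        if any(count > 0 for count in read_canary_error_counts()):
            if cooling < max_cooling:
                cooling += 1
                set_cooling_level(cooling)
            else:
                mhz -= step_mhz
                set_clock_mhz(mhz)
        else:
            mhz += step_mhz        # canaries still clean: margin remains
            set_clock_mhz(mhz)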

Of course, having this capability would let you see right thru the 'stamped speed grades', so I can understand FPGA vendors might want to claim there is no such thing :)

-jg

Reply to
Jim Granville

Reply to
Peter Alfke

Certainly ... shouldn't be any other way.

Simply put, if Austin wants to be insulting with "lost totally" comments, then that's the way he and Xilinx should expect to be treated in return when he is being clueless.

Reply to
Totally_Lost

Just because Austin (wrongly, in my opinion) wrote "There is no such thing as "over-clocking" a FPGA: either it meets timing and works, or it doesn't" is no reason to engage in this silly tirade. With the exception of (as usual) Ray Andraka, nobody posted anything here that had any redeeming value whatsoever. I think most of us have better uses for our time. Peter Alfke

Reply to
Peter Alfke

For reference, a couple of years back I had a couple of XCV2000E-8 parts in Xilinx BG560 proto boards running a hand-packed bit-serial RC5 cracking design at max clock rate ... Peltier cooled, but I could never get it power stable, even after augmenting the power bypass caps and the bus wire to the socket. At first I blamed the sockets. Then I put four of them on PCBs and tried again .... never could get it stable anywhere close to the rated clock rate.

A high count of LUT shift registers for the SBOX retiming chewed more power than the chip could easily handle as far as I can tell.

Later, another heat simulation modelling engine using LUT-based shift register memory and bit-serial math was unstable as well, for VCCINT power reasons (no active I/O), despite more than the expected power decoupling AND aggressive cooling.

AFAIK, both designs were well inside ISE-reported timing margins, and were valid designs from a tools perspective.

Now maybe aggressive use of bit-serial LUT shift register memories will not send hand-packed compute engines in large V4 and V5 parts over the edge the way it did the XCV2000E and XCV2600E parts, but I'm pretty sceptical. I would love to get my hands on a Xilinx board with XC4VLX200's on it and give it a try; I'm not sure there is a valid cooling and power solution to reach that point for these packages. I'm pretty sure you agreed once that these parts will not run an alternating 10101 pattern if fully loaded with LUT shift registers coupled out the DFFs. While not useful, that is a valid design ... and actually not that different from either of the applications that fell down using XCV2600E's.

If you, Austin, and Xilinx say XCV2000E/XCV2600E parts fully hand-packed with bit-serial compute engines should easily be stable with solid cooling, I, for one, will remain skeptical.

If you want to say that XC4VLX200's, or similar large XC5VLX parts, packed the same way can be made stable, then I'll be happy to go back to the client with that guarantee if Xilinx will stand behind it 110%.

Reply to
Totally_Lost

Reply to
Peter Alfke

Agreed ... but valid. On the other hand, high-density bit-serial compute engines are not PERVERSE, but rather a clearly valid production design ... if they would work. Their bit toggle rate isn't very far behind the worst case of 10101.

Now, Austin claims that valid designs will work at this density ... and these are valid designs. Am I totally lost, or is Austin blowing smoke and being purposefully insulting by claiming these designs will work if done by Ray at these densities?

A while back I had a nice long chat on the phone with Ray about this problem ... I don't think "ANYONE", not even Ray, can make it work.

Unless there is some God called "Anyone" that you are hiding in the wings.

Now .... are you agreeing there are "valid" designs, real-world production designs of this class, that you and Xilinx KNOW will not work? ..... umm, agreeing that my concerns, after wasting nearly $6,000 on a board design, are hard-learned limitations of the Xilinx product? Limitations that Austin insultingly dismisses as false statements?

Reply to
Totally_Lost

The OP, with what sounds very like a lab experiment, asked if an FPGA could be overclocked.

Austin replied with:

> Austin Lesea wrote:
>> There is no such thing as "over-clocking" a FPGA: either it meets timing and works, or it doesn't.

So I (and others) pointed out that there IS such a thing as overclocking, and I gave some instances where one might consider doing it.

I'm not sure the OP's intention was to complain if it failed; his was the 'how fast can I do this' question.

I don't see much 'argumentation' - I think your comments agree with mine: there IS a guardband (margin) in the design, and so you can overclock. ISTR you have mentioned "GHz-bench operation" frequency counters in an FPGA?

Of course, overclocking is entirely in the 'customer care/no warranties' pigeonhole, but I think Austin needs to take a little care with his sweeping statements.

-jg

Reply to
Jim Granville

Peter,

It's not enough with such a project to scan just the two leading manufacturers. Instead, you need to scan them all, because the others (i.e. Lattice, Actel, QuickLogic) may have just the feature you need. I'll give an example: on the generic I/Os, neither Altera nor Xilinx can get higher than 1.3 Gbps. Lattice's newest SC gets I/O speeds up to 2 Gbps. (I can't comment on Actel's speed as I have never used them.)

On the logic side, all three have about the same speed. As you will know, the highest system speed will depend on the design constraints (and also on how good the tools are and how well you know the features). I wouldn't use a microprocessor (not even a soft core) as it is only additional load (and takes away your resources).

PS: Why don't you mention the V5? It should be intrinsically faster.

Regards,

Luc

Reply to
lb.edc

Mr Lost,

I'm afraid you don't know what you are talking about as far as FPGAs go. The clock rate for an FPGA design depends heavily on the design. It is not like a microprocessor, where you have a fixed hardware design that has been characterized to guarantee running at a specific clock rate. Instead, it is up to the FPGA user to perform a timing analysis on his design to determine what the maximum clock rate for that design is. The max clock rate depends on the logic and routing delays for that design. As part of the due diligence for the design, the designer needs to perform a timing analysis, which in turn gives a minimum clock cycle time for which the design is guaranteed to work.

Overclocking then only makes sense in the context of that design. If you clock it faster than the minimum cycle time found in the timing analysis, then you are overclocking the design. This is usually considered poor form for hardware design, but it can certainly be done if you are aware of the risks. That said, under laboratory conditions, FPGA designs can usually be overclocked by 10 or 15% over the max clock frequency for that design as found in the timing analysis.
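To put numbers on it, converting the timing report's minimum clock period into a max frequency, and then into a 10-15% lab overclock of that design, is just back-of-the-envelope arithmetic (the 6.8 ns figure below is made up, not from any report):

def fmax_mhz(min_period_ns):
    # Max clock frequency guaranteed by the timing report's minimum period.
    return 1000.0 / min_period_ns

period_ns = 6.8                                    # example timing-report figure
print(f"guaranteed Fmax: {fmax_mhz(period_ns):.1f} MHz")         # ~147 MHz
print(f"+10% overclock:  {fmax_mhz(period_ns) * 1.10:.1f} MHz")  # ~162 MHz, lab only
print(f"+15% overclock:  {fmax_mhz(period_ns) * 1.15:.1f} MHz")  # ~169 MHz, no warranty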

The maximum toggle rate in the data sheets only tells you what the flip-flops in the fabric are capable of doing reliably over the temperature range. That doesn't take into account the propagation delays for the routing or combinatorial logic surrounding those flip-flops....those parameters, which to the user are far more important than the max toggle rate (that number is mainly for the benefit of export restrictions), weigh heavily on the specification for the user's design.

Attempting to use the max toggle rate of the flip-flops to define overclocking would be like trying to define the overclocking of a CPU in terms of the switching time of a transistor on the die rather than the aggregate that comprises the useful circuit. The difference that is probably confusing you is that the CPU is characterized as a completed circuit design, similar to doing the timing analysis on a placed-and-routed FPGA design. Overclocking, then, is clocking the design faster than the rate it was designed and analyzed for. Overclocking does not make sense outside the context of a specific design.

Reply to
Ray Andraka

OK,

The Virtex 4 family is the first family ever able to shift a 1,0 pattern through all resources (without tripping power-on reset from the transient of everything switching on the same edge).

Cooling it is another matter, as many have stated.

My comment on over-clocking was intended to say that we are completely unlike a micro-processor, and the traditional tricks that you read about to get a microprocessor to work faster are not likely to work, as we have far more complex timing paths in a customer design.

You appeared to live up to your name; that was all I was observing.

Sounds like you do know something of what you speak. Sorry if I thought you were (totally) ignorant. Given the name, and the posting, it was hard to tell.

Getting back to the 2000E, I remember that we had quite a bit of difficulty with the speeds files for that part. Something about them being unable to model some of the paths accurately.

Generating the speeds files is sometimes difficult, and the software trying to model the hardware can be just plain wrong.

With Virtex 4, and now Virtex 5, we no longer allow the software to "invent" ways to model the hardware: the process instead forces the modeling to match the hardware. Tricky business.

So, even this perverse design is now able to run in V4 or V5, even in the largest part.

I still do not recommend it, as the THUMP from all that switching makes it impossible to control jitter, even with the best possible power distribution system. I believe that V4 and V5 will require some internal 'SSO' usage restrictions, as they are not like earlier devices, which would configure, bring DONE high, and then immediately reset and go back to configuring if you tried to instantiate and run a full-device shift register.

Austin

Reply to
Austin Lesea

Ahh, this must be John Bass. I thought I recognized this particular rant. I guess he changed his screen name from fpga_toys or whatever it was. Yes, you can make an extreme design that will dissipate around 100W, which would be a real challenge to power and keep cool. That is really a pathological case though; real-world high-density, high-clock-rate designs tend to have average toggle rates of 20% or less. Bit-serial designs have toggle rates that are a bit higher, but still usually well under 50%. I don't see dissipations of more than about 20-25W, which can be handled with proper thermal design on any of the large FPGAs. In most cases, I'd say the average dissipation I've been seeing on large aggressive designs (2V6000, V4SX55, 2P70) is between 10 and 13 watts.
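A crude way to see how toggle rate drives those numbers is the usual alpha*C*V^2*f dynamic-power estimate. The capacitance in the Python sketch below is an invented figure chosen only to show the scaling from a typical 20% toggle rate to a near-worst-case pattern, not a measurement of any device:

def dynamic_power_w(toggle_rate, switched_cap_nf, vccint_v, clock_mhz):
    # Rough dynamic power: P = alpha * C * V^2 * f
    # toggle_rate     -- average fraction of nodes switching per clock (alpha)
    # switched_cap_nf -- assumed total effective switched capacitance, in nF
    return toggle_rate * (switched_cap_nf * 1e-9) * vccint_v**2 * (clock_mhz * 1e6)

cap_nf, vcc, f_mhz = 200.0, 1.2, 200.0                # illustrative numbers only
print(dynamic_power_w(0.20, cap_nf, vcc, f_mhz))      # ~12 W, a typical dense design
print(dynamic_power_w(0.95, cap_nf, vcc, f_mhz))      # ~55 W, a pathological pattern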
Reply to
Ray Andraka

If you look back, Ray, you will find that I have used Totally_Lost as a handle far longer than in just this forum ... it's excellent flame bait to draw out the bigots who only think they know what they are talking about. In fact, I've used this handle in other forums going back over 20 years. It was probably Totally_Lost that first picked the specification, power limits, and cooling discussion with Austin and Peter, with this same objection based on the 2600E failures using heavy LUT shift registers.

Now, when there is an extreme design that can pull 100W, that means there are data patterns for which the design will have very high currents on one or more clocks, and those have to be handled in the design. Designs based on a bet that you can handle the average case OK, and just hope the worst case doesn't occur, just defer problems to random in-field failures.

Reply to
fpga_toys

Sorry Ray ... but your explanation about NOT overclocking FPGAs ignored the reason for overclocking any VLSI part .... to avoid the design margin for worst-case thermal/voltage/process in applications where it's not needed, and where thermal/voltage/process can be controlled more tightly with cooling, voltage selection, and hand selection of parts.

Sure ... a production design needs to adhere to the margins; a hand-tuned lab design doesn't, when it's specifically set up to optimize the operating envelope by controlling these factors to gain performance.

By the way, your bit-serial FPGA math page was part of the resources I used to set up the 2000E and 2600E designs that failed, after hand packing the RC5 and heat flow simulation designs, both of which were better than 50% LUT SRL schematic macros. In the speed/area optimization for highly replicated compute cores, bit-serial or digit-serial frequently beats out fully parallel designs. Unfortunately, that is also the most dangerous operating area for Virtex parts that use LUT shift register memories heavily. Your page is a great resource for newbies, but it probably should include some warning notes about power and LUT SRL designs.
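For readers who haven't run into it, 'bit serial' here just means streaming operands one bit per clock through a tiny datapath, e.g. a single full adder plus a carry flip-flop instead of a wide carry chain. A behavioural Python model of a bit-serial adder (LSB first, purely illustrative) looks like this:

def bit_serial_add(a, b, width):
    # Behavioural model of a bit-serial adder: one full adder plus a carry FF.
    # Operands stream in LSB first, one bit per clock; width clocks per word.
    carry = 0                      # the single carry flip-flop
    result = 0
    for i in range(width):
        abit = (a >> i) & 1        # serial input A on this clock
        bbit = (b >> i) & 1        # serial input B on this clock
        s = abit ^ bbit ^ carry    # full-adder sum bit
        carry = (abit & bbit) | (carry & (abit ^ bbit))
        result |= s << i           # serial output, reassembled for checking
    return result

assert bit_serial_add(23, 42, 8) == 65   # 8 clocks instead of an 8-bit wide adder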

After sorting that problem out, PAINFULLY, I did later rework both designs around LUT RAMs, still bit/digit serial, which avoided the high heat/current of the LUT SRLs. Using Gray-code synchronized counters, locally replicated to avoid fanout and routing skew, I was finally able to get both designs functional, but lost some performance/density in the process.
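The Gray-code counters mentioned above help because only one address bit changes per count, so locally replicated copies of the counter can skew by a little routing delay without ever presenting two different addresses. A minimal Python model of the encoding (just the math, not the HDL):

def binary_to_gray(n):
    # Gray code: exactly one bit changes between consecutive counts.
    return n ^ (n >> 1)

def gray_to_binary(g):
    # Invert the encoding by XORing together all right-shifts of the code word.
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# Consecutive Gray codes differ in exactly one bit position.
for i in range(7):
    a, b = binary_to_gray(i), binary_to_gray(i + 1)
    assert bin(a ^ b).count("1") == 1
    assert gray_to_binary(a) == i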

Sorry you got sucked in by the Totally_Lost handle, as you are a wonderful resource, as is your web page .... but slamming posters based on name, origin, school/workplace, and just plain ignorance is pretty poor form, reserved for shithead bigots.

Reply to
fpga_toys
