Fastest platform to run ISE?

Hi all,

I was wondering which PC upgrades can make ISE run faster. For example, can it take advantage of a dual-CPU or dual-core Xeon setup? I am currently running a 2.6 GHz P4 with HT, 2 GB of 400 MHz RAM, and an 800 MHz FSB, and it is too slow...

Thanks, /Mikhail

Reply to
MM

One option is to get an AMD Athlon 64 at 2.4 GHz with 1 MB cache. You'll definitely see a pretty good jump with that. The other option is to wait a month or so and get a Core 2 (E6600 or E6700) with 4 MB cache. That will give you a good jump too. Don't under any circumstances get another P4 machine. A third option is to upgrade your existing system with a 3.6 GHz CPU, which should still give you a nice jump. While you're at it, get another 2 GB of memory, depending on how big your designs are.

Reply to
mk

I mostly use Quartus but expect similar behaviour from ISE. For non-trivial examples, compilation time is dominated by the "fitter", which amounts to placing and routing AFAICT. I look forward to testing P&R performance on the new Intel core as it becomes available.

I don't have enough data to conclude which of the many factors has the most impact, but on my designs an

Athlon 64 3500+, 2.2 GHz, 0.5 MiB L2$, 2 GiB dual channel DDR 400

is roughly 10% faster than an

Athlon 64 3200+, 2.0 GHz, 1 MiB L2$, 1 GiB single channel DDR 400,

which in turn is more than 10% faster than an

Intel Dothan, 1.7 GHz, 1 MiB L2$, single channel DDR 333

For me, FPGA compilation is one of the very few workloads where performance is still a concern!

Tommy

Reply to
Tommy Thorn

Tom's Hardware recently did a review of the dual-core Pentium D 805 (around $150 or less), which is highly overclockable: from 2.66 GHz with the stock fan to 3.6 GHz with a Zalman cooler, and even 4.1 GHz with water cooling, at which point it was able to beat out the >$1K Extreme Edition 965 and the AMD Athlon 64 FX-60 on gaming benchmarks. That article led to a good run of sales for that chip.


Intel locked out CPU multiplier overclocking but left in FSB modding. Nominal power is 90 W, but at 4.1 GHz it went to 200 W; it takes some nerve to try that.

I got mine with a free Intel mobo with no BIOS-level FSB tampering, but I believe some Windows tools can still change the FSB setting. The article suggests several boards with specific D805 versions. I was in too much of a hurry to really bother, but I might try the 3.6 GHz mod later on a better board. The stock fan was very quiet except for occasional spin-ups.

John Jakson

Reply to
JJ

But is it worth risking overclocking on this sort of workload?

How do you know you are not getting errors in the generated bitstream? How do you know what errors (if any) get introduced as the CPU runs out of spec?

Overclocking may be fine on your own home machine for gaming or whatever, but not on a work machine.

Alex

Reply to
Alex Gibson

I have switched from an Intel Pentium 4 at 1.4 GHz with 0.5 GB RAM to an AMD Athlon 64 dual-core at 2 GHz with 2 GB RAM, and processing time for my project went down from about 17 min to about 4 min.....! And it looks like ISE utilises both cores: according to Task Manager the load is about 50% on each core during processing... That leaves enough processor power to read the news without degrading XST performance. Multiple cores are the future!
Reply to
Jan Hansen

I'm using a dual-core Athlon 64 3400 with 1 GB RAM (WinXP Home). It's *very* fast running WebPack on all the designs I've tried it on, although I haven't tried it on anything really big. The system wasn't expensive; my local computer shop built it to my requirements.

Leon

Reply to
Leon

Getting the constraints right and reducing the clock speed for parts that don't need high-speed clocks works much better for getting a design routed quickly.

--
Reply to nico@nctdevpuntnl (punt=.)
You can find companies and shops at www.adresboekje.nl
Reply to
Nico Coesel

Depends. These days a CPU lasts what, 18 months, before a replacement cycle, especially in high-end engineering. If you "burn" a CPU by overclocking, you possibly shave a bit off its full unused lifetime of several years, but it should still easily make it through the first 18 months.

One can always diff the output files for test cases compiled on a normal and an overclocked CPU, assuming two different CPUs should create the exact same bitfile for the same setup.
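That check is easy to script. A minimal sketch (file names and helper names are my own, not from any tool) that compares two bitfiles by hash rather than byte-by-byte:

```python
import hashlib

def file_digest(path):
    """SHA-256 digest of a file, read in 64 KB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def bitfiles_match(path_a, path_b):
    """True if the two bitstreams are byte-for-byte identical."""
    return file_digest(path_a) == file_digest(path_b)
```

One caveat: identical output also assumes both machines run the same tool version with the same settings and seed; if the flow is not bit-reproducible, a mismatch by itself proves nothing about the CPU.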

Actually it isn't really running out of spec. If you read the first article (it was a very long article, though, 40+ never-ending pages), you would see that most of the cores in these Intel dual-core CPUs are very similar and have some varying margin. Intel has to sell the same basic design at different price points, so for some families they fix the clock multiplier to prevent cheapskates from overclocking, while selling the same basic design in the Extreme Edition with a few more options. The article explains exactly why the FSB can be so easily overclocked: it starts off at 133 MHz instead of the 200 MHz or 266 MHz of the higher-priced parts. The multiplier seems to be in fractions, with 7 or 8 steps to double it to the top-of-the-line case, as long as the DDR RAM is also 533 MHz. To get past the Zalman cooler at 3.6 GHz, they raised the voltage as well as water-cooled; that's way past my comfort level.

I wouldn't ever recommend overclocking a part that has no margin, such as a top-of-the-line chip. Overclocking has always been done on parts that were marketed as low-end chips but were much closer to the high-end chips than Intel let on, although the cache might be smaller in that case. Some of the first Celerons were classic overclockers.

Ordinarily I'd agree with that; I would rather have quiet than a turbo jet, so I haven't done this yet. In an office situation I wouldn't recommend water cooling under most people's desks, but a shared server machine with half the compute time might be tempting.

If a cheap solution can actually blow past the top-of-the-line Intel or AMD models, which have no further room, I am open. The benchmarks, although gaming related, suggest up to 2x the performance with the water rig, which is much better than the raw jump from 2.66 to 4.1 GHz suggests. The extra performance is coming from the FSB speedup. That might well follow for FPGA jobs as well. I wonder if we will ever know.
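A quick back-of-the-envelope check of those numbers (my arithmetic, not the article's):

```python
# Clock scaling alone predicts ~1.54x, well short of the ~2x the
# benchmarks showed, so the rest must come from the faster FSB
# (and the memory bandwidth that rises with it).
base_clock_ghz = 2.66   # stock Pentium D 805
oc_clock_ghz = 4.1      # water-cooled overclock
clock_ratio = oc_clock_ghz / base_clock_ghz
print(f"clock ratio alone: {clock_ratio:.2f}x")  # ~1.54x
```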

John Jakson

Reply to
JJ

While it is true that both CPUs show about 50% utilization, this is not ISE running multi-threaded. What you are seeing is the load sharing that the MS operating system does on systems with 2 CPUs. Basically, at every system call (disk, screen, mouse, clock, network, or other interrupt) the OS will choose which CPU the task runs on after the return from the call/interrupt.

It tends to try and even things out. The graph that you see in Task Manager is based on the average load of the CPU over a period of time that tends to be longer than the rate at which calls to the OS occur, and so what you see looks like 50%. If the display used faster sampling, what you would see is the load bouncing between 100% and 0%, alternating between the 2 CPUs.

The net effect is that you get approximately the MIPS of 1 CPU at 100%.

The other 100% is available for important things such as reading news and posting articles.

If ISE were multi-threaded, you would see both processors at close to 100%.

Note that this switching back and forth is slightly less efficient due to the cache contents of each CPU. You can force the OS to only run a specific task on 1 CPU as follows:

Once the task is running, open Task Manager on the Processes tab. Select the task (i.e. ISE or XST or P&R), then click the right mouse button to get the context menu. Select "Set Affinity", and you will see a list of available CPUs, with a tick against all of them. Turn one of them off. Your task will now only run on the remaining CPU. If you now look at the graphs, you will see one CPU at 100% while the other runs pretty much everything else.
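The same pinning can be scripted rather than clicked. A sketch using Python's os.sched_setaffinity, the Linux-side equivalent of the Task Manager dialog (on Windows you would use the steps above, or a library such as psutil on modern systems):

```python
import os

def pin_to_cpu(pid, cpu_index):
    """Restrict process `pid` (0 means the calling process) to a single
    CPU, like ticking exactly one box in "Set Affinity". Returns the
    resulting affinity set so the caller can verify it took effect."""
    os.sched_setaffinity(pid, {cpu_index})
    return os.sched_getaffinity(pid)
```

For example, `pin_to_cpu(0, 0)` pins the calling process to CPU 0; to pin the P&R run from outside, pass its PID instead.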

Cheers, Philip

Philip Freidin Fliptronics

Reply to
Philip Freidin
