SUSE 9.1 Linux and Xilinx ISE 6.2i

Hello,

I just installed Xilinx ISE 6.2i on a Linux box and it is sluggish as anything. Does anyone know why? I am running on a P4 1.7GHz w/ 1GB of RAM. On Windows, it is much zippier. Could it be the GUI toolkit that Xilinx is using (it seems like Java... slow as a slug)?

Thanks.

Salman

Reply to
salman sheikh

Well, I run ISE 6.1i on a dual Opteron 244 w/ 3GB of DRAM. Just starting the ISE project manager takes 15+ seconds. It's amazing that such a fast machine can feel so slow. iMPACT is also embarrassingly slow to start up.

It looks like Java widgets, although I've also seen hints of compatibility libraries in use. I'm not privy.

The GUI stuff really is gawd-awful slow, but other than fpga_editor and iMPACT, I stick with command line tools. My impression there is that the command line tools work just fine.

--
Steve Williams                "The woods are lovely, dark and deep.
steve at icarus.com           But I have promises to keep,
http://www.icarus.com         and lines to code before I sleep,
http://www.picturel.com       And lines to code before I sleep."
Reply to
Stephen Williams

Oddly enough, running the Windows version of ISE under Wine/Linux is significantly more responsive than the Linux "native" version... sigh.

--
My real email is akamail.com@dclark (or something like that).
Reply to
Duane Clark

I've been using the Windows version on Windows 2000, running under VMWare emulation, on a Mandrake Linux OS. Some people tell me it is slower than running just a native Win OS and the application, but I don't seem to notice the difference. (Just one small data point.)

Jon

Reply to
Jon Elson

I've noticed that things run faster under Wine also; the native versions seem to have some horrid Windows-esque GUI toolkit that spends rather a lot of time doing DNS lookups for every window/widget it needs to draw. I notice that the Java-based Coregen is much more responsive than the rest of the system.

Command-line tools fly. We place and route on a dual hyperthreaded Xeon (4 logical CPUs), and setting 4 designs off in parallel gives impressive performance; it made the time spent building large Makefiles worthwhile.
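A sketch of that parallel-batch idea in plain sh, with `sleep` standing in for the real place-and-route invocation (the design names and the stand-in command are assumptions, not Marc's actual setup):

```shell
# Launch four independent "runs" in parallel, wait for all, then count them.
# 'sleep 1' is a stand-in for a real tool invocation such as par or xflow.
results=$(
    for design in a b c d; do
        ( sleep 1; echo "design $design routed" ) &
    done
    wait
)
count=$(printf '%s\n' "$results" | grep -c routed)
echo "completed $count parallel runs"
```

On a machine with 4 logical CPUs, the four background jobs run concurrently, which is where the speedup comes from.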

Reply to
Marc Kelly

I'm surprised that they ran on SUSE 9.1 at all. The GUI tools don't work on Mandrake 10.0; I'm still using Mandrake 9.2 on my workstation because of this. The only GUI tool I ever use is FPGA Editor, and that works OK even though I'm using an old machine, a 500MHz PIII with 512M of RAM. I do everything else with the CLI, and that works fine. The one thing you absolutely can't do is run the GUI tools remotely; the performance over Ethernet is horrendous. Everything else I use, Cadence's NC and Mentor's ModelSim, works fine on Mandrake 10, and there is no performance penalty when running them over a network. Hopefully Xilinx will switch to a decent toolkit in future releases, one that isn't tied to a particular distribution and that has reasonable performance.

Reply to
General Schvantzkoph

Makefiles are good anyway. Think of them as documentation. Beats scraps of paper with notes about what you have to do in the GUI to get the right answer. Doubly so if some of the GUI flags are sticky so you only have to do it "once" and it doesn't get added to the checklist.
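As a concrete (and entirely hypothetical) illustration of that idea, a minimal Makefile for the classic ISE command-line flow might look something like this; the part number, file names, effort value and switches are placeholders, not taken from this thread:

```make
# Hypothetical ISE flow Makefile: every command and flag lives in one
# versionable file instead of scraps of paper about GUI settings.
PART   = xc2vp7fg456-6
EFFORT = high

top.ngd: top.edf top.ucf
	ngdbuild -p $(PART) -uc top.ucf top.edf top.ngd

top_map.ncd: top.ngd
	map -p $(PART) -o top_map.ncd top.ngd top.pcf

top.ncd: top_map.ncd
	par -w -ol $(EFFORT) top_map.ncd top.ncd top.pcf

top.bit: top.ncd
	bitgen -w top.ncd top.bit
```

The point is exactly the one above: the makefile doubles as documentation and can be checked into source control with the HDL.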

--
The suespammers.org mail server is located in California.  So are all my
other mailboxes.  Please do not send unsolicited bulk e-mail or unsolicited
commercial e-mail to my suespammers.org address or any of my other addresses.
These are my opinions, not necessarily my employer's.  I hate spam.
Reply to
Hal Murray

I also couldn't run ISE 6.2i on Mandrake 10.0, but I would be interested in trying it under Wine. Can someone point me to an install procedure to get it working under Wine?

Thanks, Tom

Reply to
Tom Dillon

: Command-line tools fly, we place and route on a dual hyperthreaded Xeon (4
: logical CPUs) and setting 4 designs off in parallel gives impressive
: performance, made the time spent building large Makefiles worthwhile.

I'd appreciate it if you would post a simple command file.

Thanks

--
Uwe Bonnes                bon@elektron.ikp.physik.tu-darmstadt.de

Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------
Reply to
Uwe Bonnes

Makefiles/Perl/CSH/etc. are great at times for FPGA implementation, especially when you are doing something really unique with the way you run the tools, but if anyone wants to run the Xilinx tools from the command line, the easiest way is xflow. Xflow is a single command that can run the Xilinx tools from HDL code to bitstream, and most everything in between, including simulation netlisting. An example xflow command is the following:

xflow -implement high_effort -tsim modelsim_verilog <design>.edf

That command will take the EDIF file, run it through all of the implementation tools through place and route at a high effort level, create a static timing report, and create a Verilog timing simulation model for ModelSim. Or, if you want to run from synthesis to bitstream, you could try:

xflow -p xc2vp7fg456-6 -synth synplicity_vhdl -implement balanced -config bitgen <design>.prj

Here it will synthesize a VHDL project through Synplicity targeting a 2VP7, implement the design using medium effort (balancing runtime against optimization effort), and then create a bitstream. The .prj file contains all of the VHDL files for synthesis. There are many other ways to run and customize the tools; you can also specify customized options for any part of the flow if the blanket options are not to your liking.

I thought I would share this since xflow has been around for a long time now, but it seems not many know about it. I use it quite a bit, especially when running the tools remotely (i.e. logging onto my Linux machine at work from home and kicking off a quick nohup xflow run).

-- Brian

Reply to
Brian Philofsky

While I am using RH9 rather than Mandrake 10, I suspect that to run the Linux versions of the tools you need to do:

LD_ASSUME_KERNEL=2.4.1
export LD_ASSUME_KERNEL

You can add that to the "settings.sh" script that Xilinx provides.

For the Windows version, you should be able to run the Xilinx installer directly under Wine. I suggest using a December 2003 version of Wine for now; some changes have been made in Wine since then that seem to break some things in the Xilinx tools.

Be aware that when running ISE under Wine, processing will run very slowly without a patch to the Wine source (there is a bug in Wine's named pipe implementation). The command line tools, however, run fine. I will be happy to provide a patch to Wine if you want to try running ISE, which of course means you will need to build Wine from source rather than use a binary.

Reply to
Duane Clark

There are two parts to makefiles. One is the sequence of commands needed to recreate something. The other is that it collects all the parameters/options/flags in one place. Most software geeks consider the makefile to be a source file and include it with the other source files in some sort of source-code control system.

Does xflow (and friends) have a single file where that sort of info is collected?

Reply to
Hal Murray

When I tried xflow, it seemed to create a batch file called xflow.bat with all the commands that needed running. The next time I typed an xflow command, all it did was run the old xflow.bat from the current directory. That had me bemused for a long time - I take it everyone that uses it does so on Unix (where . isn't on the path like it is in Windows land).

Speaking of batch file processes - does anyone know how to find out whether XST generated any errors? In my experiments, it returns the same ERRORLEVEL every time.

I'm sure I've seen this with other tools as well, which means that the compilation runs through to completion on the old files! One hackaround I've seen from a reputable source is to do your implementation in a clean directory every time, copying in the UCF, EDF, etc. But that seems nasty!

Cheers, Martin

--
martin.j.thompson@trw.com
TRW Conekt, Solihull, UK
http://www.trw.com/conekt
Reply to
Martin Thompson

Yes, it is similar, but slightly different from the general makefile process. Xflow creates a .flw file that lists all intermediate programs that need to be run to complete each flow: the executable name, all input, output and trigger files, as well as the report files. The file is pretty much self-documenting, but there is more on it in the manual if you need to learn more.

There are also .opt files, which are produced automatically or can be hand-written if you want to create a unique flow. An .opt file does two things: it tells which programs need to be run for a particular flow, and it allows you to set all individual options for a particular sub-program. For instance, if you want to run the -mhf switch for netgen (which writes out a separate timing netlist and SDF file for each level of hierarchy in the design), you can do this in the .opt file for the -tsim switch. To take my first example again:

xflow -implement high_effort -tsim modelsim_verilog <design>.edf

If you run this command, an fpga.flw file will be created listing all of the sub-programs (i.e. ngdbuild, map, par, etc.) necessary for an FPGA flow. If you want to add another program, say you want to automatically open Floorplanner at the end of a run, or run your simulator for a timing simulation, you can add those programs to this file; the general flow programs are all there already for you. This command line will also create two .opt files, high_effort.opt and modelsim_verilog.opt. The first lists the sub-programs necessary for a high-effort implementation run, in the order they need to be run; it also lists all of the default options for the individual programs and allows you to modify them or add additional options if you so choose. Similarly, the modelsim_verilog.opt file lists the programs needed to create a Verilog timing simulation netlist for ModelSim, as well as the suggested options for that flow. This is where you would add the -mhf switch I mentioned above to the netgen section if you wanted that capability in the flow. If you want the custom execution of Floorplanner after the run, or the running of the simulator after creating the timing simulation netlist, you would add those calls to these .opt files. Again, the .opt files are fairly intuitive; generally you can have xflow create the initial files for your run and then make minor modifications if necessary to get exactly what you want.

Xflow is fairly easy to use, and that alone is reason enough for me to use it, but the other benefit it provides, even if you use other scripting methods, is that it somewhat shields you from the minor differences/changes that happen to individual tools in the flow as enhancements are made. As recommended defaults change, as switches are added to reveal new capabilities, and as flows change to address new design, implementation and verification methodologies, xflow adjusts to this, rather than you adjusting your hand-crafted make/CSH/Perl etc. script. That exact same command above could run the 4.1i tools just as it does the 6.2i tools today; if you had created your own script to run the individual tools, some adjustments would likely be needed to do the same. Or let's say you have now added equivalence checking to your verification methodology: all you would need to do is add the "-ecn formality_verilog" switch to the above command and an equivalence-checking netlist will also be produced. Again, it would be more difficult to add this capability to a custom script that runs the individual programs.

Try it out. I would be interested to hear your feedback on it.

-- Brian

Reply to
Brian Philofsky

Interesting. I did not know about this. I generally use xflow on Solaris and now Linux, so I have not encountered it. For UNIX machines, xflow creates an xflow.scr file, which is a CSH script of all of the commands, though I personally never use it. You can also have it write out the script in TCL if you prefer. If you want, that script can be integrated into other scripts, or used stand-alone if you want to get away from the xflow "shell". Similarly for the .bat file on a PC, but I guess it has this interesting side effect. I will pass this on to that group so that they can evaluate how to get around the problem. Thanks for the feedback.

I don't know about this. I just tried running XST from xflow after introducing a syntax error into one of my Verilog files, and got back the response:

ERROR:Xflow - Program xst returned error code 6. Aborting flow execution...

There it looks like XST is specifying an error code that you could key off of. Perhaps it is specific to the error or situation you have created.

You could start each run in a clean directory, or else build smarts into your script to either move some of the relevant input files for the next portion of the flow into another directory (as a backup of the previous run) or just delete them. That way, when you get to the next program, it should error out with "file not found" if you did not catch the error code before. In my experience, however, most programs do return non-zero error codes when an error occurs, and if you properly catch them you can abort the script execution yourself. When I used CSH as my main scripting language to run the tools, I used to do it like this:

ngdbuild $NGDBUILD_OPTIONS $part $ucf $ngdinput >&! ngdbuild.log

if ($status != 0) then
    echo "Ngdbuild did not successfully finish, do you wish to"
    echo "check ngdbuild.log for errors."
    echo -n " Y/N : "
    set ans = $<
    set ans = `echo $ans | sed -e 's/ *$//'`
    if ( "${ans}" =~ [Yy] ) then
        less ngdbuild.log
    endif
    exit(1)
endif

That almost never failed me but it has been a while since I have run the tools in this manner.
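For anyone on plain Bourne/POSIX sh rather than CSH, the same status-checking pattern might look like the sketch below; the helper name is made up, and `true`/`false` stand in for real tool invocations such as ngdbuild:

```shell
# Run one flow step; on failure, report it and propagate the status so the
# caller can abort the rest of the flow. 'run_step' is a hypothetical helper.
run_step() {
    "$@"
    status=$?
    if [ "$status" -ne 0 ]; then
        echo "step '$1' failed with status $status" >&2
    fi
    return "$status"
}

run_step true  && echo "first step ok"
run_step false || echo "flow aborted"
```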

-- Brian

Reply to
Brian Philofsky

Ahh, my understanding of the way it worked was that the script was created and then executed by the xflow executable, rather than the executable doing all the execs itself.

Hmmm, more experimentation required at my end then - thanks for the counter-example.

I could, but it doesn't feel like the "right way" to do it.

Maybe things are different in a Windows cmd.exe shell... I'll have to look into this some more.

Thanks, Martin

Reply to
Martin Thompson

I'm currently doing this with Make, which interprets non-zero exit status as an error. I've previously done this with bash, with the 'set -e' option, which causes bash to exit whenever a command exits with non-zero status.
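The 'set -e' behavior is easy to demonstrate in a throwaway child shell; here `false` stands in for a failing tool such as xst:

```shell
# The child shell stops at the first failing command under 'set -e',
# so "after" is never printed and the child exits non-zero.
out=$(sh -c 'set -e; echo before; false; echo after')
status=$?
echo "captured output: $out"
echo "child exit status: $status"
```

Make relies on exactly this convention: any recipe command exiting non-zero stops the build at that rule.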

You can get to the exit status in dos-ish command shells with %ERRORLEVEL%.

Regards, Allan.

Reply to
Allan Herriman

Not at all. The bat file is supposed to be more for reference than for use, in my opinion. There is a lot of "smarts" in the tool that would not happen if it were used in that way. It is a shame they named that file xflow.bat, and I am going to suggest they rename it to something like xflowbat.bat to get around the problem you cite.

-- Brian

Reply to
Brian Philofsky

Indeed - that's what I was doing, but in my experiment, XST returned the same errorlevel whether it succeeded or failed... I must have done something wrong somewhere, I'll investigate more when I have some time...

Thanks for confirming what I thought *should* happen! Martin

Reply to
Martin Thompson

Yes, I thought the smarts were captured in the batch file each time it was run... is that right?

That would be good!

Thanks, Martin

Reply to
Martin Thompson
