6.1 vs. 6.2

I just upgraded Xilinx ISE and ChipScope from version 6.1.03 to 6.2.03, with the result that my design doesn't work anymore. Well, it synthesises etc., but the result doesn't work as well as it did before. That is, the system doesn't work very well, apart from the heartbeat LED on my board.

Also, ChipScope can't connect through my Parallel Cable IV anymore, which is quite mysterious.

Are there any reasons for me to spend (or waste) time trying to make it work, or should I just swap back to 6.1?

--
-----------------------------------------------
Johan Bernspång, snipped-for-privacy@xfoix.se
Embedded systems designer
Swedish Defence Research Agency

Please remove the x's in the email address if replying to me personally.

Reply to
Johan Bernspång

Consider using simulation and synchronous design as an alternative to logic analysis.

-- Mike Treseler

Reply to
Mike Treseler

Well, Xilinx's new device driver for the parallel cable is obviously more conservative than the previous one. I have to manually disconnect iMPACT from the cable before connecting the ChipScope Analyzer, and vice versa.

After restarting the design from scratch and adding all the cores one by one, ISE 6.2 seems to understand what I want it to do, though the system doesn't yet work as well as the one synthesized by ISE 6.1.

Mike, in my application it is quite hard to generate a proper test vector (an FM-modulated radio signal), whereas doing logic analysis with a real radio or a signal generator as the input is straightforward and gives good insight into what's happening in the system.

As far as I'm aware, the design is fully synchronous. How would synchronous design be an alternative to logic analysis? Does the insertion of ChipScope cores make the design asynchronous?

/Johan

Mike Treseler wrote:


--
-----------------------------------------------
Johan Bernspång, snipped-for-privacy@xfoix.se
Embedded systems designer
Swedish Defence Research Agency

Please remove the x's in the email address if replying to me personally.

Reply to
Johan Bernspång

HDL simulation has a steep learning curve, but the benefits include easy regression for design changes and better coverage for corner cases.

Simulation is an alternative to logic analysis. For synchronous designs, a functional sim and Xilinx static timing are all the chipscope you need.
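As an editorial sketch (not from the original post), a minimal self-checking testbench for that flow might look like the following. The entity name dut, its ports, and the assumption that it simply registers d_in through to d_out are hypothetical placeholders.

  -- Minimal VHDL testbench sketch: clock/reset generation, one stimulus
  -- vector, and an assertion checking the result.
  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity tb_dut is
  end entity tb_dut;

  architecture sim of tb_dut is
    signal clk   : std_logic := '0';
    signal rst   : std_logic := '1';
    signal d_in  : unsigned(7 downto 0) := (others => '0');
    signal d_out : unsigned(7 downto 0);
  begin
    -- 100 MHz clock (bound the run time in the simulator)
    clk <= not clk after 5 ns;

    -- Hypothetical device under test, assumed to register d_in to d_out
    uut : entity work.dut
      port map (clk => clk, rst => rst, d_in => d_in, d_out => d_out);

    stimulus : process
    begin
      wait for 20 ns;
      rst  <= '0';
      d_in <= to_unsigned(42, 8);
      wait until rising_edge(clk);
      wait until rising_edge(clk);
      assert d_out = to_unsigned(42, 8)
        report "Unexpected d_out value" severity error;
      wait;  -- end of stimulus
    end process stimulus;
  end architecture sim;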

-- Mike Treseler

Reply to
Mike Treseler


Mike Treseler wrote:

I wanted to say a few words about this statement. In general I agree that careful, well-thought-out synchronous design combined with careful, well-thought-out static timing analysis covers most of the bases for ensuring a functional end design. However, there are a few situations that some, perhaps many, often overlook, which can be easily caught in a timing simulation but cannot be easily detected with any of the above methods, including ChipScope.

One such situation is BlockRAM collisions. If you are using a RAM in dual-port mode, it is illegal to access the same address on both ports of the RAM within a timing window when one or both ports are doing a write. The obvious case most people think of is the write-write collision, where you cannot write different data to the same address at the same time, and most safeguard against that. The read-write collision, however, seems to be far more easily overlooked. If you are reading from one port of a RAM while, at approximately the same time, writing to the other port at the same address, there is no telling whether the read data will be the old data, the new data, or something in between. If that read data is used anywhere else in the design it should be considered corrupted, yet neither static timing analysis nor on-chip analysis will show or tell you of this problem. Timing simulation is the only way to properly detect it, and it can occur in a completely synchronous design with only a single clock.
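To make the read-write collision concrete, the sketch below (not from the original post; the entity and port names are hypothetical) infers a simple dual-port RAM where port A writes and port B reads. If addr_b equals addr_a on a clock edge where we_a is asserted, the value the real BlockRAM drives onto do_b is undefined, even though this single-clock model is perfectly synchronous and simulates cleanly at the functional level.

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity dp_ram is
    port (
      clk    : in  std_logic;
      -- Port A: write side
      we_a   : in  std_logic;
      addr_a : in  unsigned(9 downto 0);
      di_a   : in  std_logic_vector(15 downto 0);
      -- Port B: read side
      addr_b : in  unsigned(9 downto 0);
      do_b   : out std_logic_vector(15 downto 0)
    );
  end entity dp_ram;

  architecture rtl of dp_ram is
    type ram_t is array (0 to 1023) of std_logic_vector(15 downto 0);
    signal ram : ram_t := (others => (others => '0'));
  begin
    port_a : process (clk)
    begin
      if rising_edge(clk) then
        if we_a = '1' then
          ram(to_integer(addr_a)) <= di_a;
        end if;
      end if;
    end process port_a;

    port_b : process (clk)
    begin
      if rising_edge(clk) then
        -- Hazard: if addr_b = addr_a while we_a = '1', the silicon may
        -- return old data, new data, or something in between; this
        -- functional model will not show that uncertainty.
        do_b <= ram(to_integer(addr_b));
      end if;
    end process port_b;
  end architecture rtl;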

Timing simulation also ensures that your design is properly constrained and that nothing was missed during static timing analysis (is that path really a multi-cycle path? Can you safely ignore timing on this path? Did you miss anything?). As you instantiate more complex components such as DCMs, BlockRAMs, etc., it verifies that all of the attributes came through the software properly and, if you are crossing clock domains, that your design will likely operate as you expect. Are you certain the synthesis tool interpreted your code the way you think it did? Were there delays in the code, missed items in a sensitivity list, misconnected components, or other things in the code that make it simulate behaviourally differently from the way it was implemented by synthesis?
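One concrete example of the simulation/synthesis mismatch mentioned above (again not from the original post; the names are hypothetical) is an incomplete sensitivity list:

  library ieee;
  use ieee.std_logic_1164.all;

  entity bad_mux_example is
    port (
      sel, a, b : in  std_logic;
      y         : out std_logic
    );
  end entity bad_mux_example;

  architecture rtl of bad_mux_example is
  begin
    -- 'b' is missing from the sensitivity list: behavioural simulation only
    -- re-evaluates the process when 'sel' or 'a' changes, so changes on 'b'
    -- are missed, while synthesis ignores the list and builds a plain
    -- 2:1 mux. A gate-level (timing) simulation exposes the difference.
    bad_mux : process (sel, a)
    begin
      if sel = '1' then
        y <= a;
      else
        y <= b;
      end if;
    end process bad_mux;
  end architecture rtl;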

If your testbench is prepared for timing simulations, in general not much time or overhead needs to be spent at this step, and at a minimum it will be another item checked off the list of ensuring the design is properly verified and all things have been considered. Many times it will save far more time than is spent here, especially if you do find an issue such as the one above that cannot be easily identified by other verification methods. Of course, with careful design most of these situations can be avoided, but even the most careful designer can make mistakes.

Just my 2 cents on the subject,

-- Brian

Reply to
Brian Philofsky

Brian,

Great post. We just had the same conclusions at a meeting here yesterday on what constitutes necessary and sufficient checks on designs.

Austin

Reply to
Austin Lesea

Actually in the newer BlockRAM implementations, you can choose the behaviour (at least you can on the Spartan-3, whose datasheets I've been poring over recently :-)

There's:

o READ_FIRST, which will always report the value of the data in RAM before the write was committed

o WRITE_FIRST, which will provide the unpredictable behaviour you mention (and it's the default, to provide backwards compatibility)

o NO_CHANGE, which seems to disconnect the output RAM so the output value remains unchanged.

I doubt anything would help you if you simultaneously write into the same address on both ports, though :-)
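For reference, the write mode is typically selected through generics when the block RAM primitive is instantiated. The sketch below is an assumption based on the RAMB16_S9_S9 UNISIM primitive and its WRITE_MODE_A/WRITE_MODE_B generics (which take "WRITE_FIRST", "READ_FIRST" or "NO_CHANGE"); check the libraries guide for the exact primitive and port list for your device. The wrapper name and signals are placeholders.

  library ieee;
  use ieee.std_logic_1164.all;

  library unisim;
  use unisim.vcomponents.all;

  entity bram_wrap is
    port (
      clk            : in  std_logic;
      we_a, we_b     : in  std_logic;
      addr_a, addr_b : in  std_logic_vector(10 downto 0);
      di_a, di_b     : in  std_logic_vector(7 downto 0);
      do_a, do_b     : out std_logic_vector(7 downto 0)
    );
  end entity bram_wrap;

  architecture rtl of bram_wrap is
  begin
    -- Both ports set to READ_FIRST instead of the WRITE_FIRST default
    ram0 : RAMB16_S9_S9
      generic map (
        WRITE_MODE_A => "READ_FIRST",
        WRITE_MODE_B => "READ_FIRST"
      )
      port map (
        CLKA => clk, ENA => '1', WEA => we_a, SSRA => '0',
        ADDRA => addr_a, DIA => di_a, DIPA => "0", DOA => do_a, DOPA => open,
        CLKB => clk, ENB => '1', WEB => we_b, SSRB => '0',
        ADDRB => addr_b, DIB => di_b, DIPB => "0", DOB => do_b, DOPB => open
      );
  end architecture rtl;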

Simon.

Reply to
Simon

These options are nice, especially Read_First (read before write). They might even help relative timing between ports, provided both ports use the same clock. But we do not guarantee this, since you would then rely on excellent timing tracking between adjacent ports of the BlockRAM circuitry.

Peter Alfke

Reply to
Peter Alfke

This is a misconception many fall into. Those modes apply only to the port that is being written to. In other words, the output of the port being written has the known behaviour you specify above, but for the other port everything I said still applies, regardless of which mode you have it in.

-- Brian

Reply to
Brian Philofsky

Then there is skew in the clock distribution network(s). (Ugh. My head hurts.)

--
The suespammers.org mail server is located in California.  So are all my
other mailboxes.  Please do not send unsolicited bulk e-mail or unsolicited
commercial e-mail to my suespammers.org address or any of my other addresses.
These are my opinions, not necessarily my employer's.  I hate spam.
Reply to
Hal Murray

A controller is required to provide arbitration as it is for any shared resource. In a synchronous design, both sides use the same clock, and all is well.
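As an illustration (not from the original post; the names are hypothetical), a minimal fixed-priority arbiter for write access to such a shared resource might look like the sketch below. Requester 0 wins ties, and everything is registered on the shared clock.

  library ieee;
  use ieee.std_logic_1164.all;

  entity wr_arbiter is
    port (
      clk            : in  std_logic;
      req0, req1     : in  std_logic;
      addr0, addr1   : in  std_logic_vector(9 downto 0);
      data0, data1   : in  std_logic_vector(15 downto 0);
      grant0, grant1 : out std_logic;
      ram_we         : out std_logic;
      ram_addr       : out std_logic_vector(9 downto 0);
      ram_data       : out std_logic_vector(15 downto 0)
    );
  end entity wr_arbiter;

  architecture rtl of wr_arbiter is
  begin
    arbitrate : process (clk)
    begin
      if rising_edge(clk) then
        grant0 <= '0';
        grant1 <= '0';
        ram_we <= '0';
        if req0 = '1' then            -- requester 0 has priority
          grant0   <= '1';
          ram_we   <= '1';
          ram_addr <= addr0;
          ram_data <= data0;
        elsif req1 = '1' then
          grant1   <= '1';
          ram_we   <= '1';
          ram_addr <= addr1;
          ram_data <= data1;
        end if;
      end if;
    end process arbitrate;
  end architecture rtl;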

A multi-cycle path can be eliminated with adequate pipelining.

Not if all inputs are registered and all processes are synchronous.

I have never had a problem with the standard synchronous templates. I would run a few gate sims if I were validating a new synthesis tool or inference template. I do check the utilization report and the edif schematic viewer after each synthesis.
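For illustration, a bare-bones version of such a synchronous template might look like this (hypothetical entity and signal names; the next-state logic is a placeholder): inputs are registered, all logic is driven by a single clock, and the reset is synchronous.

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity sync_template is
    port (
      clk   : in  std_logic;
      rst   : in  std_logic;                 -- synchronous reset
      d_in  : in  unsigned(7 downto 0);
      d_out : out unsigned(7 downto 0)
    );
  end entity sync_template;

  architecture rtl of sync_template is
    signal d_reg : unsigned(7 downto 0) := (others => '0');
  begin
    -- Single clocked process: with everything synchronous to clk, a
    -- functional sim plus static timing covers the behaviour.
    main : process (clk)
    begin
      if rising_edge(clk) then
        if rst = '1' then
          d_reg <= (others => '0');
        else
          d_reg <= d_in + 1;   -- placeholder for the real next-state logic
        end if;
      end if;
    end process main;

    d_out <= d_reg;
  end architecture rtl;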

Static timing would catch that.

No. Just reset and clk.

A bad wire will be picked up in the functional sim. The more you infer, the fewer wires you will have.

Certainly it is not hard to do, and it should be done at least once as a release check-off item. However, it should not be a part of the edit-sim-edit flow.

If a problem is found at the gate level, either a new design rule, or better enforcement is indicated.

Thanks for the posting.

-- Mike Treseler

Reply to
Mike Treseler

I'd like to thank everyone for their input on the subject. I also want to make clear that the cores in my design that I've created myself are thoroughly simulated and analysed. However, the system includes some cores from CoreGen as well (filters, CORDIC, DDS, etc.), and I find it hard to create a meaningful input with which to simulate the whole system; that is when I use ChipScope to watch the behaviour of the different parts.

One question is still unanswered though, and I'd really appreciate some input on this matter. How come a system that synthesized perfectly fine in ISE 6.1 also synthesizes in ISE 6.2, but with lower performance (i.e. much more noise, etc.)? I have checked all the CoreGen cores that I utilize and they are all the same version. I have also checked my own logic, and it seems to synthesize the same way, but still the result is so much better using 6.1.

Johan

--
-----------------------------------------------
Johan Bernspång, snipped-for-privacy@xfoix.se
Embedded systems designer
Swedish Defence Research Agency

Please remove the x's in the email address if replying to me personally.

Reply to
Johan Bernspång

Chipscope is not a completely passive observer, but is placed and routed along with your design (as is signalprobe from brand A). So there could be different interactions with each placement. Does "more noise" mean digital interference with the analog front end? Or some difference in a DSP process?

Some DSP guys like to use MATLAB-to-ModelSim interfaces for simulation. Maybe one of the X-men will comment.

-- Mike Treseler

Reply to
Mike Treseler

There are many possible reasons for this, but rather than going there first, I would like to ask whether timing constraints were provided to the design and, if so, whether they were met. The reason for the question is that the software is designed to look at the timing constraints provided and attempt to meet them. If they can be easily met, many times the software will give you that result without trying to see exactly how fast the device can really go; this allows the tools to run much faster and still deliver the results that were requested. If no timing constraints are provided and a low effort level is used (as is the default), the tools generally run very fast but do not produce the best possible result, since there is nothing they are striving for. If no timing constraints are provided and a high effort level is used, the tools will give a better result; however, without constraints they have to guess at the trade-offs to make for timing, so the result still may not be as good as if true timing constraints had been provided.
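For anyone unsure what "providing timing constraints" looks like in this flow, a global clock constraint in a Xilinx UCF file is roughly as follows. This fragment is not from the original thread; the net name clk, the group name, and the 100 MHz target are placeholders for whatever the real design uses.

  # Hypothetical constraint file fragment: put the clock net in a timing
  # group and ask the tools to meet a 100 MHz period on the paths it drives.
  NET "clk" TNM_NET = "clk_grp";
  TIMESPEC "TS_clk" = PERIOD "clk_grp" 100 MHz HIGH 50%;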

Now, if you are providing timing constraints that were met with a previous version of the software and are not met now, that situation can be more difficult to explain. In general it should not happen; however, we all know that it does from time to time. The best thing you can do in this situation is to contact either your FAE or the Xilinx hotline and have it investigated. There may be a simple explanation or a complicated one, but generally it is very dependent on the design being run, the device being targeted, how the tools are being run, and possibly a list of other factors. Without that investigation it would be difficult to hazard a guess at the reason.

-- Brian

Reply to
Brian Philofsky

Hmm, well XAPP463 (Using Block RAM in Spartan-3 FPGAs) seems to cope with both ports, or am I misunderstanding what you are saying?

I'm looking at the table (table 8) on page 14, and it's laid out as:

Write mode     Same port   Other port (dual-port only, same addr)
-----------    ---------   --------------------------------------
WRITE_FIRST    blah        Invalidates data on DO, DOP
READ_FIRST     blah        Data from RAM to DO, DOP
NO_CHANGE      blah        Invalidates data on DO, DOP

I thought it was clear, but now I'm confused :-(

Reply to
Simon
