Xilinx ISE ver 8.2.02i is optimizing away and removing "redundant" logic - help!

A number of things, some of which I mentioned in earlier posts; see below for a quick summary.

But you have no basis for that suspicion. It might be the case that the synthesis process has a bug, but you need to prove it.....and then open a service request with the company whose tool has the bug. My point is: don't let your objectivity be clouded by what you suspect; debug and prove it.

And this is where you start spinning your wheels (in my opinion). Instead of simply debugging the post-map sim down to the source of the discrepancy, you're trying things based on a suspicion that is not proven. Let's say, for the sake of argument, that your suspicion about the removal of redundant logic is wrong and that the problem is a timing issue with your testbench instead. That would mean that every minute you spent chasing 'keep' and 'save' attributes etc. was wasted time.

This will sound like a dumb question on my part, but what is the distinction in your mind between 'redundant logic' and 'unused logic'? The reason for my confusion at this point is that you say the 'redundant' stuff is getting removed and yet there is no 'unused' logic getting removed. If by 'redundant' you mean the classical Boolean Logic 101 definition, where you add redundant logic to act as 'cover' terms in your Karnaugh map to avoid race conditions, then that is the most likely cause of your problems. Is this the type of logic that you are trying to 'keep' but is being mapped away as an 'optimization'? If it is, then the rest of this post probably doesn't apply and we can discuss this point further; if it is not, then keep on reading.

One other source of 'optimization' is that an output of some entity is not really used. Say the logic for equation 'x' happens to reduce down to always being 'false'. This means that everything downstream of 'x' that depends on 'x' being true can never happen, so it can be optimized away. That's not the fault of the optimizer removing redundant logic; that's the way the original is coded. You probably already realize this, but I thought a quick 'Optimization 101' wouldn't hurt....but I also don't think focusing on what is being optimized away is the way you need to go on this one (which is why, in my first post, I asked you "Why...").
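A contrived sketch of that situation (signal names invented):

  x <= a and not a;                -- reduces to a constant '0'
  y <= d when x = '1' else '0';    -- so y is a constant '0' as well; y, the mux,
                                   -- and whatever produces d can all be trimmed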

What you need to do is simulate the post-map VHDL file and trace back why output signal 'x' at time t is set to '0' when, with your original code, it is '1'. Use the sim results from your original code as your guide for what 'should' happen and the post-map VHDL simulation for what is actually happening, and debug the problem.

It could be that

- There is some bug in the translation tool

- Could be some setting in your build process

- Could be timing related (i.e. your testbench is violating the setup/hold time requirements for the post-map model)

- Probably other things too

In any case, treat the post-map model as something to debug; find out the reason for the discrepancy and go from there.

KJ

Reply to
KJ

I do have a basis, as I wrote previously: using the "keep" statements eliminated some of the "redundant" logic removal lines and dramatically improved the success of the post-map simulation. I would indeed, however, like to avoid using this as a crutch and do it properly as you indicate. I would like to find out the correct way to indicate in the VHDL, by the way I write the VHDL, that the logic is not redundant.

I'm not really arguing for using those crutches; I'm seriously asking what to do so that I don't need them.

The "redundant" and "unused" logic terms I am copying from the mapper report and Xilinx documentation. The mapper report (see my first post) says "redundant" logic is being removed, not "unused logic".

As I understand it, "unused logic" means logic that is not connected to anything, so it can be removed (this latter is not what is happening to me). However, I haven't found anything in the manuals that explains what "redundant logic" is or how to write code that avoids the removal. I have a lot of identical ROMs that I use to do parallel processing; those were being removed in the synthesis and translate steps due to not having clocks on them. So my mind is pretty much a blank as to what is meant by "redundant" logic, other than the common meaning that it is repetitive -- but it isn't really, of course, because I'm using the ROMs simultaneously for different data.

I agree that finding out what is going on is the best approach. Do you have any debugging tips other than comparing the simulation results in detail and seeing what logic calculations must be getting removed?

Thank you very much for your input. I really appreciate the time you are spending to try to help me.

Best regards,

-James

Reply to
james7uw

Like I said, what you have to do is debug the optimized post-map simulation model in the simulation environment, find out just exactly when it differs from the original code, and then backtrack through the logic in the post-map design to find out why.

There really are no shortcuts to this process, other than the things I mentioned in earlier posts (like maybe the testbench is violating timing, use of types other than std_ulogic/std_logic, etc.).

A simple example of the 'redundant' logic that I was asking about, something one might decide to put in to avoid race conditions, is the following code, which implements a transparent latch (by the way, do not implement this in real code in an FPGA).
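A minimal sketch of such a latch (en, d and q are illustrative names):

  LATCH_PROC : process (en, d)
  begin
    if (en = '1') then
      q <= d;       -- transparent while en is high
    end if;         -- no 'else': q holds its value, so a latch is inferred
  end process LATCH_PROC;

The glitch-free version of this adds a logically redundant 'cover' term, and that redundant term is exactly the sort of thing an optimizer will strip back out.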

I don't doubt what you say, but I also don't quite understand why ROMs would be 'removed' either. Maybe all you meant is that you couldn't find specific entities in the post-map VHDL that equated to the various 'ROMs' that you instantiated in the original code....but that's OK; a ROM is simply an array of constants, and I would expect those to get rolled right into the logic. I can see where targeting a particular family might have to use logic blocks instead of embedded memory to implement what your code says (but could use embedded memory if you chose to implement a clocked ROM), but that doesn't mean the original unclocked ROM is not synthesizable at all.
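To illustrate (a sketch, assuming ieee.numeric_std for the address conversion; the contents and names are made up):

  type rom_t is array (0 to 3) of std_logic_vector(7 downto 0);
  constant ROM : rom_t := (x"01", x"02", x"04", x"08");

  -- unclocked read: a pure function of addr, so the constants fold into LUTs
  data <= ROM(to_integer(unsigned(addr)));

  -- clocked read: the same table, now eligible for an embedded memory block
  ROM_PROC : process (clk)
  begin
    if rising_edge(clk) then
      data_r <= ROM(to_integer(unsigned(addr)));
    end if;
  end process ROM_PROC;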

'Redundant' in this context generally means that the fitter found that you have two equations that are logically equivalent. An example...
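Something like this (invented names):

  y1 <= (a and b) or (a and c);
  y2 <= a and (b or c);     -- identical function: the mapper keeps one equation
                            -- and reports the other as removed 'redundant' logic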

You asked: "Do you have any debugging tips other than comparing the simulation results in detail and seeing what logic calculations must be getting removed?"

None, other than the ones listed below and in previous posts. Tweaking the 'no optimize' switches won't get you to the bottom of what ails your sim. It might just postpone the inevitable, when you find that your design doesn't work on real hardware.

If the problem is actually in your testbench in how you generate inputs to your design (i.e. meeting the timing requirements of the post-map design) then this should be relatively straightforward to fix. In fact, this is a fairly common reason for why 'post' does not match 'pre' simulation results.

If nothing else it is probably much quicker to verify testbench timing than to debug back through the post-map design....but that just means you should look at that first. If that's not the problem then you need to debug.

Good luck, not sure I'm helping much.

KJ

Reply to
KJ

James,

Maybe there are switches to the synthesizer that would allow turning off the optimization?

I would tend to agree that looking for bugs in the toolchain might not be the best way to work through this.

I haven't been following this thread all along, but one thing occurs to me. I'm new to VHDL and have settled into an approach where I make little incremental changes, then immediately test and verify nothing broke. That way I can go back, and the source of the problem is obvious, because there is only a little bit of code to examine.

In your case it's like maybe the sequence was: working, change code, working, change code, working, change code, broken; by then there is a lot of code to suspect.

Reply to
David Ashley

Yes, I'm familiar with Karnaugh maps and I understand the point. Remember, I am past synthesis and my problem is in the mapper, going from .NGD (Native Generic Database) to .NCD (Native Circuit Description) files. Does this redundant logic removal process you just described happen at this stage? Remember, the "Redundant" terminology is Xilinx's, not mine, and it is being invoked by the mapper. I am just wondering what Xilinx means by "Redundant Blocks" (sic) of logic. This terminology can be seen in the section from the mapper that I included with my first post.

The explanation I received was that without a clock, they were being interpreted as asynchronous RAMs and were optimized away. Further explanation was not given to me. That was happening at the Translate step, which was the previous step to the mapping step, and is fixed.

Reply to
james7uw

Yes, but absolutely nothing is for turning off optimization of "Redundant Blocks" (sic)* of logic; everything is for turning off removal of "Unused" logic. The mapper -u option, the "keep" constraint and the "save" constraint, are all for preventing removal of "Unused" logic, not "Redundant Blocks" (sic)* of logic. It's enough to make me tear my hair out. Anyway, as you can read from the other posts, doing that is a kludge and at best a debugging step to identify the problem area, not the real way I want to solve the problem.

*See mapper report in my first post in this thread.

I think I may very well have to try that, building up my project piece by piece.

Not at all. I'm grateful for your input.

Best regards,

-James

Reply to
james7uw

Well, whatever is 'optimizing' them away has a bug in it then, if the output is now different because of that optimization. Like I said, an asynch ROM is simply a table of constants. Synthesis tools are very good at optimizing constants (as they should be). It wouldn't surprise me at all that...

- You wouldn't be able to 'find' the ROM after mapping to a particular part, because the result of those constants has been integrated into whatever downstream logic the ROM was feeding.

- The implementation might (probably will) use more logic resources and none of the internal memory, if the targeted part requires a clock in order to map your code into one of those internal memories.

In any case, the overall function has not changed, so it should simulate the same. If not, then a simple test case and a service request to Xilinx might be in order.

Not sure I would call it 'fixed' (unless what was 'broken' was just the ability to use internal memory, which, as mentioned above, is not really a functional issue but one of trying to properly use internal resources to implement a given function). Anyway, moving on.

Reply to
KJ

One tip: instantiate both behavioural and post-map modules in your testbench, and run them in parallel. You can assert on differences in the outputs, and trace internal signals in the wave window (to the extent that you can still recognize internal signals).

Possibly also set breakpoints on differences in internal signals which ought to be the same.
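A sketch of that arrangement (the entity names and ports are assumptions, not taken from the actual design):

  -- same stimulus drives both models; mismatches are flagged automatically
  uut_rtl  : entity work.user_logic         port map (clk => clk, d => d, q => q_rtl);
  uut_post : entity work.user_logic_postmap port map (clk => clk, d => d, q => q_post);

  CHECK : process (clk)
  begin
    if falling_edge(clk) then   -- sample mid-cycle, clear of the active edge
      assert q_rtl = q_post
        report "pre/post-map mismatch" severity error;
    end if;
  end process CHECK;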

- Brian

Reply to
Brian Drummond


Thank you for your advice! I'll try to keep this thread posted if and when I find answers.

Best regards,

-James

Reply to
james7uw


Xilinx tech support said to separately register each level of logic, since I have some lines of up to four xor statements being assigned to a signal. I tried that, but it didn't help. However, the sub-module in which the mapper is connecting two of my output registers together works on its own, in a separate project, in post-map simulation when those output registers are treated as port signals. It works on its own with or without the added registers that do one xor at a time, but still cross-connects, with or without the added registers, when used as a submodule of user_logic.

Clearly I am dealing with undocumented features of the mapper; certain coding techniques are required in order for it to accomplish my intent. Xilinx really should be documenting these requirements; otherwise it's not fair to tell people that "the problem is with the way you write your VHDL". Documentation for synthesis and translate is much better.

-James

Reply to
james7uw


I tried adding a separate level of registering in my main line VHDL code and was trying to test it when the ModelSim simulator died. No clock; therefore no signal processing. The transcript (output window) looks normal and ends up with:

  # ** Failure: Simulation successful (not a failure). No problems detected.
  #    Time: 1320 ns  Iteration: 0  Process: /user_logic_tb/line__94  File: user_logic_tb.vhw
  # Break at user_logic_tb.vhw line 273
  # Simulation Breakpoint: Break at user_logic_tb.vhw line 273
  # MACRO ./user_logic_tb.fdo PAUSED at line 16

Both post-Map and behavioral simulation show no clock and no signal processing; all flat lines all of a sudden. I'm looking at reinstalling. I'm using the ModelSim XE III 6.1e starter edition. Does anyone know how to fix this without reinstalling?

Also in regards to my previous message: Xilinx tech. support said to separately register each level of logic, since I have some lines of up to four xor statements being assigned to a signal. I tried that, but it didn't help. ...but still cross-connects with or without the added registers when used as a submodule of user_logic.

Would anyone have some suggestions about how to write the VHDL so it won't do that?

Thanks in advance,

-James

Reply to
james7uw


While it's not impossible that your Modelsim install got corrupted, I highly doubt it, and therefore suggest that reinstalling is likely to be wasted time. I've yet to 'fix' anything by re-installing Modelsim. I'd suggest debugging why the clock signal is not running any more.

It didn't work because that was just a random guess on Xilinx tech support's part to try to close the service request. Since the problem of why the pre and post VHDL models are acting differently has absolutely nothing to do with your source code, their suggestion had a 0% chance of solving the problem....which you confirmed.

I don't know how you actually posed the question to Xilinx, but the question that should have been posed to them is along the lines of: "I have a pre-route VHDL simulation design file and a post-route/map/whatever VHDL simulation file that is the output of ISE version X.X. Given the same input, they don't simulate the same. Signal 'X' at time 't' is a '1' using the original design file and is '0' using the VHDL output from ISE. I've attached the original source VHDL files, the ISE project files (which include the post-map VHDL), the testbench VHDL that generates the stimulus, and a Modelsim '.do' file which runs each design up until time 't', where you can see that signal 'X' coming out is different between the two models. I've confirmed that my testbench generates input stimulus to both designs that meets the input setup/hold time requirements of the final routed design. My question is: why are the outputs different?"

Is that anything close to how you worded it?

When posed in that manner, any answer/suggestion from Xilinx that does not address the question of "Why are the outputs from the two simulations different?" is irrelevant. Letting them off the hook with the suggestion of changing your source code to add registers because you have "some lines of up to four xor statements being assigned to a signal" (whatever that really means) is just trying to make you go away without addressing your real problem....but if your service request did not ask that basic question and provide them with the two simulation models that demonstrate this difference in the first place, well, they can only deal with what you provide them.

Yes...as pointed out earlier in the thread...

  1. Write (if you don't have one already) a testbench that instantiates the original design file. Make sure all input setup and hold times in the testbench meet the timing requirements listed by ISE in the timing analysis report.
  2. Run the testbench with both the original design file and the post-map file and document where the two predict different results.
  3a. Open a service request with Xilinx, sending them this information, and ask the question as I worded it in the previous paragraph.
  3b. Debug into the post-map design file and see if you can determine the cause of the difference while Xilinx is also chewing on it.

Always keep in mind that the 'pre' and 'post' simulation models are ALWAYS supposed to produce the same result given the same stimulus that meets all input timing requirements, and that this is ALWAYS TRUE NO MATTER WHAT the original source code is. When this is not the case (and it does happen), as I mentioned earlier in this thread, the root cause of the discrepancy is generally...

  1. Testbench not meeting input setup/hold time requirements (i.e. you need to fix your testbench).
  2. Improper use of types other than std_logic/std_ulogic. (From earlier in the thread I thought you said there were none in your code; but if there were, then again, this would be yours to fix.)
  3. Bug in the tool, in this case ISE. In that case, you need to open a service request and have them explain to you why 'pre' and 'post' simulation is producing different results....and not let them off with anything that causes you to change your code, except to fix something along the lines of #1 or #2 that you missed. Changing the design because of "some lines of up to four xor statements being assigned to a signal" is not an acceptable reason...see the previous paragraph with the 'ALWAYS' in it for justification.

KJ

Reply to
KJ


Hi KJ,

Thanks for your information and moral support.

About ModelSim not seeming to work: I think I was just tired. I wasn't expanding the waveform window pane to look at the whole view. Without doing that, I was just looking at the initial "offset" time of 100 ns, in which nothing was happening. It's also possible I was tired and not remembering that I have to wait for the simulation to complete; I recall seeing the final output was 'U' (uninitialized), and that may be the reason. Next time I should record screenshots so that I can prove I was not hallucinating, or, alternatively, give myself time to realize what is going on.

"input setup/hold time" is the time required before a clock edge to setup and hold an input signal so that the receiving FF will successfully register it, is that right?

I am using the "Test Bench WaveForm" GUI feature (Xilinx ISE ver 8.2.03i, now), and in the clock timing input window that comes up automatically when I create a .tbw file, there are settings for "Input setup time" and "Output valid delay" that I have to fill in. "Input setup time" is the time duration that the testbench will place my input signal transitions before its clock edges, is that right? For example, I have a "load" pulse that I draw in the GUI, and the testbench will make sure to activate its edges 1 ns before the matching clock edges if I set "Input setup time" to 1 ns, is that right? Does "Output valid delay" mean the time duration after a clock edge at which output data becomes valid? If so, I don't understand what that tells the testbench to do. Since that seems to be dependent upon the device under test, and yet it is a testbench entry parameter, I am confused and must be understanding it wrong.

So I have to make sure that the "Input setup time" I specify satisfies all input setup/hold times required by the mapper timing report, is that right? What do I look at to specify the "Output valid delay"? (It must be some kind of data in the timing report; I'll take a look.) Does that mean I'm telling the testbench not to "look" at data before that time period after a clock pulse, just for reporting and display purposes?

I am synthesizing and translating and getting correct operation in simulation, but then mapping and getting incorrect operation. I have an initial load into a submodule of 128 bits and a simple subsequent output from the submodule of that, separated into its four 32-bit words. Post-map, the lower three bytes of the second register are being duplicated into the third register, both in the signals used in the calling module and in the signals used in the submodule. It looks like logic is getting optimized away incorrectly, but could that really be happening by violating setup and hold times in the testbench?

-----------------------------

I looked at your code and I have some suggestions for you. You have registered your design, but you haven't pipelined the design, which will more or less fix the issue you are having. What you did was place a register on the output, but the output is not being optimized; the combinational logic in between is. This is what needs to be registered. In one example of your code you have 4 logic levels:

w2(31 downto 24)
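What "register each level" might look like for one of those xor chains (a sketch; the stage signals and names are invented):

  PIPE : process (clk)
  begin
    if rising_edge(clk) then
      s1 <= a xor b;      -- stage 1: half of the chain
      s2 <= c xor d;      -- stage 1: other half, same cycle
      w2 <= s1 xor s2;    -- stage 2: result, two clocks after a..d change
    end if;
  end process PIPE;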

Reply to
james7uw


Here are the ISE project settings that I changed from the defaults:

Synthesize properties: "Keep Hierarchy" on (left "Equivalent Register Removal" on).
Translate properties: "Preserve Hierarchy on Sub Module" on.
Map properties: "Trim unconnected signals" off (left "Allow logic opt. across hierarchy" off); optimization strategy: speed; generate detailed Map report; -r option (no register ordering).

By default, equivalent register removal is on, so I had to turn on global optimization mode (-global_opt on) in order to turn it off. Trim unconnected signals (-u): I had to turn it on, according to the mapper output, as long as I use global optimization mode. (-u is known as "Do Not Remove Unused Logic" on pg. 141 of dev.pdf ver 8.2i.)

Reply to
james7uw


Here are the definitions:

C:\Xilinx\doc\usenglish\help\iseguide\mergedProjects\xsim\html\xs_hidd_initialize_timing_dialog.htm

The Input Setup Time is the minimum amount of time between the arrival of an input pulse and a clock pulse for that input pulse to be considered valid.

The Output Valid Delay is the maximum amount of time allowed for an output to change states for it to be considered valid when used in a self-checking test bench. The Test Bench Waveform Editor can write out a self-checking test bench. For more information see Generating Expected Simulation Results. Time units are determined by the Time Scale drop-down menu at the lower right.

Reply to
james7uw


Mapper warnings are:

  WARNING:LIT:243 - Logical network u0/N0 has no load.
  WARNING:LIT:243 - Logical network u0/N1 has no load.
  WARNING:LIT:243 - Logical network u0/r0/N01 has no load.
  WARNING:LIT:243 - Logical network u0/r0/N11 has no load.
  WARNING:LIT:243 - Logical network Bus2IP_Clk_BUFGP/N2 has no load.
  WARNING:LIT:243 - Logical network Bus2IP_Clk_BUFGP/N3 has no load.

I can't find these in the technology schematic (post-translate). There is a u0/r0/N0, a u0/r0/N1, a u0/r0/N21 and a u0/r0/N31 in the Technology schematic, and many u0/N's followed by three-digit numbers. There is nothing named "N"-anything in the RTL schematic. I saw Bus2IP_Clk_BUFGP, but no /N2 or /N3 (nothing after Bus2IP_Clk_BUFGP). I could try a new project before doing map and see if map is going back and removing those.

Did that. I made a new project and ran only up to synthesize and translate. There is an N0 and an N1 at the top level of the Technology schematic. There is a u0/r0/N0, a u0/r0/N1 and a u0/r0/N31. I saw Bus2IP_Clk_BUFGP, but no /N2 or /N3 (nothing after Bus2IP_Clk_BUFGP). Plenty of N's at the top level. Map *does* seem to be going back and adding to or changing the Technology schematic, but not taking away the above, which never seemed to be there. Not good.

Reply to
james7uw


Yes, almost. Setup time is the time the signal must be stable prior to the clock. Hold time is the time the signal must be stable after the clock (many times, but not always, hold time is 0).

Yes

Yes

Sort of. What you're telling the testbench is how long it takes before the outputs of your design become valid, so that it doesn't bother to check them prior to that time. You're right that this is dependent on the device under test, but so are the input setup times that you're entering. Generally, what you would put into the testbench for these are numbers that you 'can live with' in your actual design. By that I mean the FPGA doesn't usually live in isolation; it is connected to outside devices that may have timing requirements as well.

For example, suppose one of the inputs to the FPGA is connected to some device that has Tco = 10 ns, and that part and the FPGA both transmit/receive this signal with the same 10 MHz clock (i.e. T = 100 ns). Then, as a first-order approximation, one could calculate the input setup time requirement at the FPGA as being T - Tco = 90 ns. A more accurate approximation would recognize that there is going to be clock skew between the two devices, and delay on the printed circuit board, that will eat into that timing margin, and you should take those into consideration as you define the FPGA input setup requirements. The point is you probably wouldn't want to put the full 90 ns in as the FPGA's requirement; get some estimates for those other two, or swag them as being less than 5 ns or so, and enter 85 ns. For a really high-speed design you'll be more careful about determining the things that go into the timing budget, for the simple reason that there is less time per clock cycle and you can't afford not to have better control over everything.

Anyway, whatever input setup time requirement you determine should go two places: in as a constraint that the fitter needs to meet, and into your testbench. Also note that the requirement is determined without regard to any timing numbers from the FPGA itself; it is a requirement determined by the outside world around the FPGA. Most people do go through some form of this figuring and enter the number into the fitter, so that when the timing analysis is performed it clearly flags violations. What most do not do is enter that exact same requirement into the testbench.
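For the testbench half, a sketch of how that requirement might drive the stimulus (the 15 ns figure and the 'load' signal are assumptions):

  constant T_CLK : time := 100 ns;
  constant T_SU  : time := 15 ns;   -- same number that was given to the fitter

  STIM : process
  begin
    wait until rising_edge(clk);
    wait for T_CLK - T_SU;          -- the next active edge is exactly T_SU away
    load <= '1';                    -- so this transition meets setup by construction
    wait until rising_edge(clk);
    wait for T_CLK - T_SU;
    load <= '0';
    wait;
  end process STIM;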

Repeat this process for computing the clock-to-output delay requirement of the FPGA. Here you would start with your clock period and subtract off the input setup time of whatever device is on the receiving end of the FPGA output, subtract off clock skew, and subtract off PCB delay. The basic process is the same.
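For instance, with made-up numbers following the same pattern: T = 100 ns, a receiver input setup of 10 ns, and roughly 5 ns of combined skew and PCB delay give a required FPGA clock-to-output of 100 - 10 - 5 = 85 ns.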

See above for the more detailed answer.

Don't know. Just as with real life, when you violate timing requirements pretty much anything might happen. It would depend totally on the actual implementation. If that's what it's doing, then that's what it's doing.

The simulator calls it a warning but it really is a design error. What exactly the simulation model does when this occurs is a function of how the simulation model is coded. Best to investigate why the logic path is long and clean up those problems now. If all of the timing violations are setup times, you can temporarily code around this simply by slowing down your clock for simulation. Get the testbench to the point that it can run the original source and the post-map model with no reported timing errors.

The more thorough check though is to compare the timing requirements that you have in your testbench with what pops out of the timing analysis report. The reason it is more thorough is that your simulation might not happen to hit every input under just the right conditions. Bottom line is that if you've determined that the input setup requirement is 15 ns then the timing report had better report something less than 15 ns in the actual implementation or you have a design issue to resolve. Timing analysis does not require any simulation. Do this for all inputs and outputs.

The problem, as I see it, is that the original source and the post-map simulation models don't agree. I don't see anything in the response from Xilinx that addresses this, but maybe you didn't word it in exactly that manner. Once you get your timing errors cleaned up, you should have a testbench that provides identical stimulus to both models. If they still perform differently, then open another service request with Xilinx (or your FAE) and have them explain why the two are different. It has absolutely nothing to do with optimizations. Original source and post-anything must be functionally identical...regardless of what the function actually is. Stick to your guns on that and don't let them off the hook, but also don't sidetrack them with optimization settings or any of that. It is the software's job to translate your code into something functionally identical that it can actually implement inside a real device.

Just make sure you've cleaned up all timing problems before since that is on your side, not theirs.

Other than the things I've mentioned earlier in the thread, about incorrectly using types other than std_logic/std_ulogic, or latches, or things like that, it never is coding 'style' per se. So do the timing analysis, verify that the testbench presents inputs and checks outputs at the appropriate time per your requirements, then re-collect the screenshots, send it off, and ask them to explain the difference, since the two models are supposed to be functionally identical.

Good, one less thing to worry about right now. The reason std_logic is such a good thing is simply the value 'X'. Being able to get rid of all the unknowns in a simulation is a milestone of sorts, and when you can't, for whatever reason, that big 'X' is staring at you, pointing you right at the place to investigate. Anyway, you don't have that problem, but I just thought I'd toss out why using only std_logic is a 'good' thing until you've been doing this for a while and can move on with confidence to other types.

Good plan.

It shouldn't be a problem, and if it is, it's Xilinx's problem anyway. Remember, your original source code is not the actual implementation; it is an abstraction. FPGAs do not implement 'a xor b'; they translate that into a lookup table that has been programmed appropriately, and enable pass transistors to get signals 'a' and 'b' to the proper inputs of that table. Sometimes warnings can point to problems, but in this case I don't think it will; like the FAE said, though, it wouldn't hurt to look. I'm not exactly sure which file you need to look at. Maybe try searching through all of the files for the net name that is being flagged and see what hits.

KJ

Reply to
KJ


Thanks for that, KJ. Meditating on timing issues and looking at the mapper timing report resulted in me setting the input setup times and output valid delays in the testbench to exceed the maximum figures of "Setup to clk (edge)" and "clk (edge) to pad" that were reported. I took the additional conservative step of halving the clock frequency. Now my post-map sim is giving me correct results. It is odd that improper timing would cause byte mix-up like that, but I can certainly contemplate types of interconnections that might behave that way. The only other thing that I think helped was making my design synchronous by registering all data in clocked registers. One thing I clued into is that you can't, at least in Xilinx, set a signal in two or more different processes, because that results in multiple sourcing and unknown ('X') output. Also, if something like the following is written, which is the correct way to write a register:

MY_PROC : process (clk, rst) is
begin
  if (rst = '0') then
    a <= '0';                 -- asynchronous, active-low reset
  elsif rising_edge(clk) then
    a <= d;                   -- ('d' and the clocked branch are an illustrative
  end if;                     --  completion of the truncated snippet)
end process MY_PROC;

Reply to
james7uw

That's always a good idea.

You can't do that physically on-chip in most (all?) "modern" FPGAs. If two drivers drive different values onto one wire, they will conflict. 'X' is the simulator's way of telling you that.

However, if all but one of the processes is driving a 'Z', most synthesisers will create a bunch of multiplexers, such that the one process driving a non-'Z' value at any time will "win" and the signal will take on that value within the FPGA, just like in simulation.

I usually try to avoid doing this, as you can end up with lots of extra logic that you didn't expect, and debugging what's going on on-chip, if more than one process does drive a non-'Z' value, can be a bit of a pain!
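A sketch of the style being described (names invented; exactly one driver is meant to be non-'Z' at a time):

  -- two concurrent drivers on one (resolved) std_logic signal
  bus_sig <= data_a when sel = '0' else 'Z';
  bus_sig <= data_b when sel = '1' else 'Z';
  -- in simulation the 'Z' driver loses and bus_sig takes the other value;
  -- in synthesis this becomes a multiplexer controlled by sel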

I'm not sure what you mean by this; if you ask for other signals to be connected to the rst input of the flipflop, then the tools will surely do as you ask. The fact that this reset is asynchronous may then cause you grief. If you want to reset a register during "runtime", rather than just as an initialisation stage, you're much better off using a synchronous reset:

MY_PROC : process (clk, arst) is
begin
  if (arst = '0') then          -- async reset: initialisation only
    a <= '0';
  elsif rising_edge(clk) then
    if (srst = '0') then        -- sync reset: safe to drive from ordinary logic
      a <= '0';
    else
      a <= d;
    end if;                     -- ('srst', 'd' and the clocked body are an
  end if;                       --  illustrative completion of the truncated snippet)
end process MY_PROC;

Reply to
Martin Thompson


Glad to hear that it's working now.

As with timing problems on a real board, the results you see with a timing problem using post-tool models are usually 'odd'. It's completely deterministic (unlike a real board), but it seems odd because of the mapping from your source into an actual implementation. That mapping, while producing something functionally identical, is generally not what you would expect. But it works, and that mapping is what the tool is supposed to be good at, so don't lose sleep over it.

You can't, in any logic (not just Xilinx), have more than one process driving a signal, just as on a board you can't have two outputs driving the same net, unless the design is such that all but one process is setting the signal to 'Z'. Obviously this can be useful for data busses (which would explicitly set the output to 'Z' except when being read from), but for most other signals this is not the case.

One trick to catching this bug earlier (instead of having to debug to find that the reason for the 'X' is two processes driving) is to use the types std_ulogic (and std_ulogic_vector) instead of std_logic (and std_logic_vector) for all signals except those that truly do require multiple drivers. What you'll find if you make this conversion is that the compiler will flag this as an error right up front, even before you get into simulation (assuming that the two processes are in the same entity and physically in the same file). If the two processes are in totally different entities and separate source files, the compiler won't complain when compiling either file, but the moment you invoke the simulator it will complain about net 'xyz' being driven in more than one place, and will generally point you to the two places....one of which must be wrong. Much easier to fix that way than having to debug down to why a signal is 'X'. Try it out.

Most people (myself included) grew up using std_logic because that is what was taught, and the switch on something so basic can be difficult to 'unlearn', but it is worth it. It's also not a big leap: std_logic is actually derived from std_ulogic; they have all the same values and everything. The only difference is that a std_logic signal is allowed to have multiple drivers, and std_ulogic is not (which is why the compiler can flag these as errors). The other way to catch the problem is to let the synthesis tool run all the way through; at 'some' point every tool is going to complain about two drivers on a net where the drivers are not tri-stated.
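A sketch of the double-driver mistake that std_ulogic catches up front (names invented):

  signal s : std_ulogic;        -- unresolved type: at most one driver allowed

  P1 : process (clk)
  begin
    if rising_edge(clk) then s <= a; end if;
  end process P1;

  P2 : process (clk)
  begin
    if rising_edge(clk) then s <= b; end if;   -- second driver on s: flagged as an
  end process P2;                              -- error before simulation even starts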

You have to be very careful about using async resets inside an FPGA. I'm assuming that what you meant by additional signals is something along the lines of:

MY_PROC : process (clk, rst, xx, yy, zz) is
begin
  if ((rst or xx or yy or zz) = '0') then   -- async reset gated by ordinary logic:
    a <= '0';                               -- glitches in that logic can falsely reset 'a'
  elsif rising_edge(clk) then
    a <= d;                                 -- ('d' and the clocked branch are an
  end if;                                   --  illustrative completion)
end process MY_PROC;

Reply to
KJ
