bit vs std_logic

Is there a good reason to use 'std_logic' by default rather than 'bit'?

I started off doing so, then thought it would be more rigorous to define pure logic in terms of bits and save std_logic for signals that could take values other than 0 or 1.

I then found my compiler complaining about all sorts of stuff.

Problems trying to use a bit_vector as a counter, or using rising_edge() on a bit, and so on.
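For example (a minimal sketch, names made up): with the usual VHDL-93 packages, rising_edge() in std_logic_1164 is only declared for std_(u)logic, and bit_vector has no "+" operator without pulling in extra packages, so a counter along these lines only compiles cleanly once the clock is std_logic and the count is a numeric_std unsigned:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity counter is
  port (
    clk   : in  std_logic;                  -- rising_edge() expects std_(u)logic here
    count : out std_logic_vector(7 downto 0)
  );
end entity;

architecture rtl of counter is
  signal cnt : unsigned(7 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      cnt <= cnt + 1;                       -- "+" on unsigned comes from numeric_std
    end if;
  end process;
  count <= std_logic_vector(cnt);
end architecture;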

I'm inclined to just use std_logic as my fundamental signal.

Opinions?

Reply to
Kryten

One very good reason is for I/O compatibility with other IP. The standard is to use std_logic, std_ulogic, std_logic_vector and std_ulogic_vector for the types on all I/O. That way, your component can be used in a larger design regardless of what library that design uses, and the same goes for instantiating components within your design.
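As a sketch of that convention (entity and port names are invented), the boundary stays in std_logic types regardless of what the internals use:

library ieee;
use ieee.std_logic_1164.all;

entity my_block is
  port (
    clk      : in  std_logic;
    reset_n  : in  std_logic;
    data_in  : in  std_logic_vector(15 downto 0);
    data_out : out std_logic_vector(15 downto 0)
  );
end entity;
-- Internally you are free to convert to unsigned, records, etc.;
-- only the ports need the standard types to drop into other designs.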

Reply to
Ray Andraka

Thanks Ray, I thought it might be something like that.

I shall modify the code somebody gave me so that if I do get type conflicts I convert the bit signals to std_logic.
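For reference, std_logic_1164 already provides the conversion functions, so a mixed design only needs something like this where the types meet (a sketch, signal names made up):

library ieee;
use ieee.std_logic_1164.all;

entity convert_demo is
end entity;

architecture sketch of convert_demo is
  signal b_in    : bit;
  signal bv_in   : bit_vector(7 downto 0);
  signal s_in    : std_logic;
  signal slv_in  : std_logic_vector(7 downto 0);
  signal s_out   : std_logic;
  signal slv_out : std_logic_vector(7 downto 0);
  signal b_out   : bit;
  signal bv_out  : bit_vector(7 downto 0);
begin
  s_out   <= to_stdulogic(b_in);        -- bit              -> std_(u)logic
  slv_out <= to_stdlogicvector(bv_in);  -- bit_vector       -> std_logic_vector
  b_out   <= to_bit(s_in);              -- std_logic        -> bit
  bv_out  <= to_bitvector(slv_in);      -- std_logic_vector -> bit_vector
end architecture;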

Reply to
Kryten

You should try to use std_ulogic and std_ulogic_vector for all signals that are not intended to be driven from multiple sources. Use std_logic and std_logic_vector only for signals that are going to be driven by multiple drivers (like shared data busses). That gives the compiler the information it needs, so when you UNintentionally connect two drivers to the same signal it gets caught without having to simulate and debug to find the problem.
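A minimal sketch of that difference (entity name invented):

library ieee;
use ieee.std_logic_1164.all;

entity driver_demo is
end entity;

architecture sketch of driver_demo is
  signal r : std_logic;    -- resolved: contention just becomes 'X' in simulation
  signal u : std_ulogic;   -- unresolved: a second driver is rejected up front
begin
  r <= '0';
  r <= '1';     -- legal VHDL: r resolves to 'X', found only by simulating
  u <= '0';
  -- u <= '1';  -- uncommenting this is an elaboration error:
                -- multiple drivers on an unresolved std_ulogic signal
end architecture;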

KJ

Reply to
KJ

Also, BIT is constrained to the values 0 and 1. std_(u)logic also recognises 'X', 'Z' and more. These do have real-world meaning. The 'X' in particular can reveal design faults such as uninitialised signals and bus contention.
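For instance (names invented), a register that is never reset shows up immediately in a std_logic simulation:

library ieee;
use ieee.std_logic_1164.all;

entity init_demo is
  port (clk : in std_logic; q : out std_logic);
end entity;

architecture sketch of init_demo is
  -- A bit signal would silently default to '0' and hide the missing reset;
  -- a std_logic signal starts at 'U' and keeps showing 'U' in the waveform.
  signal reg : std_logic;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      reg <= not reg;   -- not 'U' is still 'U', so the fault propagates
    end if;
  end process;
  q <= reg;
end architecture;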

Reply to
David R Brooks
