You people doing audio work with cutting-edge chips: what nominal signal voltages are you using?
It seems like most new chips I see these days are aimed at the supply rails used for high-speed digital circuitry, 3.3V being the most common, 1.8V not unusual. This includes chips aimed at audio, including pro audio (e.g., high-quality ADCs).
In professional audio gear interconnects, 0VU = +4dBu (1.23Vrms) is the standard. Back in the day, using chips that took +/-15V rails and got within two diode drops of the rails, you got about 18dB of headroom. For consumer gear, 0VU = -10dBV (316mVrms) is also a common standard. To get
18dB of headroom over that, you'd need supply rails of +/-3.5V, assuming rail-to-rail ins and outs. So what do you folks do with these 3.3V chips (that's not +/-3.3V, it's a single supply)? That gives you about 11dB of headroom over a -10dBV 0VU signal; is that what everyone settles for now? Or is there some other emerging standard, and if so, what?
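(For anyone who wants to check my arithmetic, here's a quick sketch of the headroom numbers above. The 1.4V figure for "two diode drops" and the rail-to-rail swing assumption are mine; the references are the usual 0.7746Vrms for 0dBu and 1Vrms for 0dBV.)

```python
import math

DBU_REF = 0.7746  # Vrms at 0 dBu (1 mW into 600 ohms)
DBV_REF = 1.0     # Vrms at 0 dBV

def headroom_db(peak_swing_v, ref_vrms):
    """dB of headroom between the largest sine the stage can pass
    (peak voltage swing) and a reference rms level."""
    max_vrms = peak_swing_v / math.sqrt(2)
    return 20 * math.log10(max_vrms / ref_vrms)

# Old-school stage: +/-15 V rails, output gets within ~two diode
# drops (~1.4 V) of each rail; reference is 0VU = +4 dBu.
legacy = headroom_db(15 - 1.4, DBU_REF * 10 ** (4 / 20))

# Single 3.3 V supply, rail-to-rail in/out: max swing is +/-1.65 V
# about mid-rail; reference is 0VU = -10 dBV.
modern = headroom_db(3.3 / 2, DBV_REF * 10 ** (-10 / 20))

print(round(legacy), round(modern))  # ~18 dB and ~11 dB
```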
Thanks!