Our (computer) systems are engineered not to lose bits. While bit loss is possible, it's very unlikely.
Not exactly: they're engineered to transfer bits, with all sorts of error correction employed in many, but not all, applications to ensure that schedulable actions (e.g. a file copy) can be verified correct. Some applications have parity checking in hardware. Streaming realtime audio isn't an application with either hardware or software checks to speak of. Granted, the number of lost (mostly flipped) bits is small, but we're talking about an application that's a bit 'Nth degree'. It's a decently noisy environment too: the audible difference in removing radios from my chassis was, frankly, night and day.
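For what it's worth, the "verified correct" part for a schedulable action like a file copy is straightforward to sketch. This is a minimal illustration, not any particular tool's implementation; the function names and file paths are made up for the example:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large files needn't fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def copy_is_verified(src: str, dst: str) -> bool:
    """A copy is verified correct when source and destination digests match."""
    return sha256_of(src) == sha256_of(dst)
```

A realtime audio stream has no equivalent step: there's no second pass in which to compare digests, which is exactly why it sits outside this kind of verification.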
Very good! I'm not impartial either. There are methods we can use to validate what we hear; having a small set of reference tracks is pretty critical. While I wouldn't want to discourage you from trying to validate with measurements, half the challenge is that the range of things to be measured isn't insignificant.
At the end of the day, the most important attribute any of us can have is learning to trust our ears. I've found that bias does tend to wear off over time.
I'm relatively sure that any acoustic method I have of measuring this will be pointless: I'd be chasing differences within my devices' noise floors. Not ideal.
I've got a few reference tracks and the like, and I'm acutely aware of my will to find truths where they may not exist. Lots of blind testing with friends and my partner. You're right, one does become more discerning; I'm actually happy when I think something's worse and others confirm it impartially (something of a head check).
I don't think that's true. There are all kinds of distortions we can hear that don't necessarily translate into frequency-response measurements. Two systems can both measure flat, but sound very different due to various distortions.
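One concrete example of a measure that frequency response doesn't capture is total harmonic distortion. A toy sketch, assuming you've already extracted the fundamental and harmonic amplitudes from a spectrum (the numbers below are invented for illustration):

```python
import math

def thd_percent(fundamental_amp: float, harmonic_amps: list[float]) -> float:
    """THD: RMS sum of the harmonic amplitudes relative to the fundamental."""
    return 100 * math.sqrt(sum(a * a for a in harmonic_amps)) / fundamental_amp

# Two systems can share an identical, flat frequency response yet differ here:
# e.g. thd_percent(1.0, [0.001, 0.0005]) vs thd_percent(1.0, [0.05, 0.02])
```

Both hypothetical systems would sweep flat, but one is an order of magnitude dirtier on a steady tone.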
I agree with you, though the guy in question was emphatic in his beliefs! I'm with you: not everything that's an audible limitation can be captured as an aberration in frequency response.
Room correction certainly can be very obvious, and is worth pursuing further for digital systems.
I'd not build a system without it in future. I'm currently challenged with getting it working in real time with manageable, predictable, low system latencies.
I might have missed it, but with your room-corrected 16-bit files, how do you ensure enough headroom so you don't get clipping? Reducing volume too much loses resolution; is this a trade-off? Does DRC give more benefit than the lost resolution?
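To put rough numbers on the trade-off I'm asking about: each bit of a PCM word is worth about 6.02 dB of dynamic range, so reserving headroom by pure digital attenuation costs roughly (attenuation in dB) / 6.02 bits. A back-of-envelope sketch, assuming a plain gain reduction with no dithering or other processing:

```python
import math

BITS = 16
DB_PER_BIT = 20 * math.log10(2)  # ~6.02 dB of dynamic range per bit

def effective_bits(attenuation_db: float, bits: int = BITS) -> float:
    """Approximate resolution remaining after a pure digital volume cut."""
    return bits - attenuation_db / DB_PER_BIT

# Reserving ~6 dB of headroom for a DRC filter costs about one bit,
# i.e. roughly 15 effective bits remain from a 16-bit file.
```

So the question becomes whether the correction is worth surrendering that last bit or so.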
From a psychoacoustic perspective, time alignment of the various components of a multi-channel signal (e.g. stereo) is far more important than small differences in relative energy level, not least because sufficient differences in time alignment can give a perceived loss of frequency response. In tuning mine, this became an interesting and salient variable, not unlike when subs and speakers come into phase at key frequencies. Some of us, and I guess I fall into this camp, aren't blessed with system locations that are as acoustically favourable as those of others.
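The "perceived loss of frequency response" from misalignment is just comb filtering, and the null frequencies fall out of the delay directly. A small sketch of that relationship (an illustration, not how any correction software actually works):

```python
def comb_null_frequencies(delay_s: float, f_max: float = 20000.0) -> list[float]:
    """Frequencies up to f_max where two equal signals, one delayed, cancel.

    The magnitude of 1 + exp(-j*2*pi*f*delay) has nulls at
    f = (2k + 1) / (2 * delay) for k = 0, 1, 2, ...
    """
    nulls = []
    k = 0
    while True:
        f = (2 * k + 1) / (2 * delay_s)
        if f > f_max:
            break
        nulls.append(f)
        k += 1
    return nulls

# A 1 ms misalignment (~34 cm of path difference in air) puts the first
# null near 500 Hz, with further nulls every 1 kHz above it.
```

That's why a time offset too small to hear as an echo still reads as a tonal change.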
I understand what you mean by clipping, insofar as the inverse of the amplitude response needs to be applied to capture the intended energy content at a given frequency; if the amplitude response is way off, the correction is likely to be extreme. So far nothing is obviously audible; typical corrections are below a few hundred Hz, and the most obvious ones are against excessive gain, so the effect tends to be quite pleasant.

The convolution engine I use is set to clip in worst-case scenarios. Presently I've not managed to adversely clip anything, to the best of my knowledge (we'd be talking a negative amplitude response for a signal already near max amplitude). I'm able to have it throw an error if I feed it an incorrect file format and it reads zero response at a given frequency (thus an infinite correction), and frankly the filter I'm employing isn't extreme (one could be stricter with it and probably run into the issues you're suggesting with aplomb).
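For anyone following along, the "set to clip in worst-case scenarios" behaviour amounts to a hard limit at full scale. A toy stand-in (this is not the actual engine's code, and the function name and values are invented):

```python
def apply_gain_with_clip_guard(samples: list[float], gain: float,
                               full_scale: float = 1.0):
    """Apply a correction gain; hard-clip anything exceeding full scale.

    Mimics the worst case described above: a boost applied to a
    near-full-scale signal is clamped rather than allowed to wrap.
    Returns the processed samples and a count of clipped samples.
    """
    out, clipped = [], 0
    for s in samples:
        v = s * gain
        if v > full_scale:
            v, clipped = full_scale, clipped + 1
        elif v < -full_scale:
            v, clipped = -full_scale, clipped + 1
        out.append(v)
    return out, clipped
```

A nonzero clip count is the signal that the filter needed more headroom than the source material left it.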