It is described using computing biz terminology and metrics, not science metrics.
That wouldn't be a diversion; it would address your mistaken assumptions about how the computing biz defines its nomenclature.
And the byte was never formally defined. It still isn't. A byte could be a single bit or any number of bits; generally it is a copy of the settings of n contiguous bits.
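Even today's C standard refuses to pin the byte down; it only promises CHAR_BIT >= 8, per implementation. Here's a throwaway C sketch (nothing in it is specific to any particular machine) that prints what its host calls a byte:

    #include <limits.h>
    #include <stdio.h>

    /* CHAR_BIT is this implementation's bits-per-byte.  ISO C only
     * guarantees it is at least 8; older machines used 6-, 7-, or
     * 9-bit bytes, which is the point: "byte" alone pins nothing down. */
    int main(void)
    {
        printf("a byte on this machine is %d bits\n", CHAR_BIT);
        return 0;
    }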
It was base-10 when salesmen were talking to potential customers who thought in base-10. It was base-8 when machine architectures used octal addressing. It was base-2 when the digital architecture itself was under discussion. It was base-16 when the addressing was IBM-based.
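The same quantity reads differently in every one of those bases. A toy C sketch to make that concrete (4096 is just an example value I picked, not any device's size):

    #include <stdio.h>

    /* One example value, shown the way each audience would have read it. */
    int main(void)
    {
        unsigned v = 4096;
        printf("decimal: %u\n", v);  /* what the salesmen quoted    */
        printf("octal:   %o\n", v);  /* octal-addressed machines    */
        printf("hex:     %x\n", v);  /* IBM-style hexadecimal       */
        return 0;
    }

Same bits, three different-looking numbers: 4096, 10000, 1000.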
I don't believe that you did an original formatting of that disk. In my day, the first-time formatting of a DECtape was called certifying; I don't remember what the word was for virgin disk packs. This initialization takes up room.
You have been told that this is not SI.
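To spell out the gap the SI argument is about, a toy C sketch (the only assumption is the arithmetic itself: SI giga- is 10^9, the binary convention is 2^30):

    #include <stdio.h>

    /* SI giga vs. the binary 2^30 that the computing biz used. */
    int main(void)
    {
        unsigned long long si_gb  = 1000000000ULL;  /* 10^9        */
        unsigned long long bin_gb = 1ULL << 30;     /* 1073741824  */
        printf("SI GB:     %llu bytes\n", si_gb);
        printf("binary GB: %llu bytes (%.1f%% larger)\n",
               bin_gb, 100.0 * (double)(bin_gb - si_gb) / (double)si_gb);
        return 0;
    }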
You are given a maximum number. There is no way you will ever get more bits out of that pack. That is all you are given. There were customers who spent enormous amounts of money and human wall-clock time squeezing out every useful bit they could, back when that media cost more than your house.
Now count the f****ng bits. Bytes are not precise enough. Do you insist that the milk you buy is measured to the accuracy of a nanogallon?
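If you want to see what counting the bits looks like, here is a throwaway C sketch; every geometry number in it is a made-up placeholder, not any real drive's spec:

    #include <stdio.h>

    /* Total bits on a pack, straight from (hypothetical) geometry. */
    int main(void)
    {
        unsigned long long cylinders = 800;  /* placeholder             */
        unsigned long long heads     = 19;   /* placeholder             */
        unsigned long long sectors   = 20;   /* placeholder, per track  */
        unsigned long long words     = 128;  /* placeholder, per sector */
        unsigned long long word_bits = 36;   /* e.g. a 36-bit machine   */

        unsigned long long bits =
            cylinders * heads * sectors * words * word_bits;
        printf("maximum bits on the pack: %llu\n", bits);
        return 0;
    }

That product is the number you are given. Initialization overhead only ever subtracts from it.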
Possibly. Have you considered that you may also be confused about how your physical disk looks and works? Have you ever looked at those bits physically and how they are arranged across the platters?
That is your maximum.
I've been trying to tell you why, but I haven't managed to do the writeup well enough.
This is not a redefinition of a convention; the convention has always worked this way. And I'm older than you are.
You are trying to do a crank job. I'm about done here.
/BAH