The premise is fatally flawed.
Bytes are neither big-endian nor little-endian; that is human prejudice, a product of the accepted practice of writing the most significant digit first. There is nothing in a CPU that says you must write 1010 in binary for decimal 10 (rather than 0101).
In a lot of applications, little-endian storage makes sense. For example, in GMP (the GNU Multiple Precision library), it makes the most sense to store large integers least-significant limb first: there are algorithmic advantages, such as carries propagating in the same direction the loop walks the limbs, as the sketch below shows.
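Here is a minimal sketch of multi-limb addition with least-significant-limb-first storage (an illustration of the idea, not GMP's actual `mpn_add_n`; the 32-bit limb type is my choice for the example):

```c
#include <stdint.h>
#include <stddef.h>

/* Add two n-limb numbers stored least-significant limb first.
   Returns the carry out of the most significant limb. */
uint32_t add_limbs(uint32_t *r, const uint32_t *a, const uint32_t *b, size_t n)
{
    uint32_t carry = 0;
    for (size_t i = 0; i < n; i++) {      /* walk from limb 0 upward...      */
        uint64_t sum = (uint64_t)a[i] + b[i] + carry;
        r[i]  = (uint32_t)sum;            /* low 32 bits: the result limb    */
        carry = (uint32_t)(sum >> 32);    /* ...the direction carries travel */
    }
    return carry;
}
```

Because the loop and the carries move in the same direction, the whole addition is a single forward pass. Store the most significant limb first instead, and you either loop backwards or shift the entire array whenever the number grows by a limb; least-significant-first just appends.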
It also makes more logical sense. For bits, b[n] is the bit corresponding to 2^n. By the same logic, for bytes it makes the most sense if B[N] is the byte corresponding to 2^(8N), and that is exactly little-endian byte order.
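You can check that indexing directly (a sketch that assumes a little-endian host, where the analogy holds literally; on a big-endian machine the weights come out reversed):

```c
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t x = 0x0A0B0C0D;
    const uint8_t *B = (const uint8_t *)&x;

    /* On a little-endian host, byte B[N] carries the weight 2^(8N),
       just as bit b[n] carries the weight 2^n. */
    for (int N = 0; N < 4; N++)
        printf("B[%d] = 0x%02X  (weight 2^%d)\n", N, B[N], 8 * N);

    /* Reconstruct x as the sum of B[N] * 2^(8N). */
    uint32_t sum = 0;
    for (int N = 0; N < 4; N++)
        sum += (uint32_t)B[N] << (8 * N);
    printf("reconstructed: 0x%08" PRIX32 "\n", sum);  /* 0x0A0B0C0D */
    return 0;
}
```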