This is not used for conversion but for normalisation. We normalise our input ranges because we want more precise measurements. For example, at 10mV/K, assuming 0V represents 0K, 5V would represent 500K. That's about 227C. If what we want to measure never reaches that temperature, then we are wasting a lot of resolution on a range which will never be measured. Software cannot rectify this, since the data is already sampled: how is the software supposed to know whether 120K is really
120.2K or 119.9K if the hardware can only tell it 120K? Also, very few things we usually want to measure reach down to 0K. Again, we are wasting resolution by building hardware that can measure what will never be measured. So people 'scale' the hardware to pick the appropriate range, for example:

-20C to 80C for room temperature
-40C to 10C for freezers
50C to 400C for ovens

Then you scale again in software to convert to human-readable form.
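To make the trade-off concrete, here is a minimal C sketch comparing the two setups. It assumes a hypothetical 10-bit ADC with a 0-5V input; the bit width, reference voltage, and range constants are illustrative assumptions, not taken from any particular board. The first conversion reads the sensor unscaled (10mV/K, so full scale covers 0-500K); the second assumes the analogue front end has been scaled so that -20C to 80C fills the whole 0-5V span, which software then maps back to a temperature:

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical 10-bit ADC over 0-5V. All values are
   illustrative assumptions, not from specific hardware. */
#define ADC_BITS   10
#define ADC_COUNTS (1 << ADC_BITS)   /* 1024 steps */
#define VREF       5.0               /* volts at full scale */

/* Case 1: sensor wired straight in at 10mV/K, 0V = 0K,
   so full scale spans 0K to 500K. */
double raw_to_kelvin_unscaled(uint16_t counts)
{
    double volts = (double)counts * VREF / ADC_COUNTS;
    return volts / 0.010;            /* 10mV per kelvin */
}

/* Case 2: analogue front end maps -20C..80C onto 0..5V,
   so every ADC step lands inside the range we care about. */
#define T_MIN_C (-20.0)
#define T_MAX_C ( 80.0)

double raw_to_celsius_scaled(uint16_t counts)
{
    double fraction = (double)counts / ADC_COUNTS;
    return T_MIN_C + fraction * (T_MAX_C - T_MIN_C);
}

int main(void)
{
    /* Resolution per ADC step in each case. */
    printf("unscaled: %.3f K per step\n",
           500.0 / ADC_COUNTS);                     /* ~0.488 K */
    printf("scaled:   %.3f C per step\n",
           (T_MAX_C - T_MIN_C) / ADC_COUNTS);       /* ~0.098 C */

    /* Convert one raw mid-scale sample both ways. */
    uint16_t sample = 512;
    printf("raw %u -> %.2f K (unscaled), %.2f C (scaled)\n",
           sample,
           raw_to_kelvin_unscaled(sample),
           raw_to_celsius_scaled(sample));
    return 0;
}
```

The numbers it prints show the point of the exercise: the same 1024 steps spread over 0-500K give you roughly half a kelvin per step, while spread over just -20C to 80C they give you about a tenth of a degree per step, a five-fold improvement for free.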