Multiplication as taught to kids in school is based on the "math table" / lookup-table concept: single-digit products are looked up, and the full result is built from shifted partial products.
If one understands the full algorithm, one can do arbitrary/infinite-precision math.
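To make that concrete, here is a minimal sketch (my own illustration, not anything from a real CPU) of the schoolbook algorithm on digit lists, which works for numbers of any length:

```python
def schoolbook_multiply(a, b):
    """Multiply two numbers given as lists of base-10 digits,
    least-significant digit first, using the schoolbook algorithm."""
    result = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        carry = 0
        for j, db in enumerate(b):
            # da * db is the "math table" lookup (here done by *)
            total = result[i + j] + da * db + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(b)] += carry
    # Trim leading zeros (stored at the end of the little-endian list)
    while len(result) > 1 and result[-1] == 0:
        result.pop()
    return result

# 1234 * 5678, digits stored least-significant first
print(schoolbook_multiply([4, 3, 2, 1], [8, 7, 6, 5]))  # 7006652 as digits
```

A real bignum library does the same thing with machine-word "digits" (base 2^32 or 2^64) instead of base 10, but the carry-propagating structure is identical.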
Let's discuss a little how a modern CPU could achieve this, given that a modern CPU also needs to do some multitasking.
Maybe the CPU could do it in little pieces and save its memory state during context switches, or maybe a special coprocessor is preferred...?
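The "little pieces" idea can be sketched in software: keep all intermediate state in memory so the computation can be interrupted after any slice and resumed later. This is a hypothetical illustration (the class and method names are my own), doing one row of partial products per time slice:

```python
class ResumableMultiply:
    """Hypothetical sketch: schoolbook multiplication split into time
    slices, with all intermediate state held in the object so a context
    switch between calls to step() loses nothing."""

    def __init__(self, a, b):
        self.a, self.b = a, b                    # little-endian digit lists
        self.result = [0] * (len(a) + len(b))    # partial-product accumulator
        self.i = 0                               # saved outer-loop position
        self.done = False

    def step(self):
        """Process one row of partial products, then yield the CPU.
        Returns True once the full product is in self.result."""
        if self.done:
            return True
        i, carry = self.i, 0
        for j, db in enumerate(self.b):
            total = self.result[i + j] + self.a[i] * db + carry
            self.result[i + j] = total % 10
            carry = total // 10
        self.result[i + len(self.b)] += carry
        self.i += 1
        self.done = self.i == len(self.a)
        return self.done

# Drive it one slice at a time, as a scheduler might between other tasks
m = ResumableMultiply([4, 3, 2, 1], [8, 7, 6, 5])  # 1234 * 5678
while not m.step():
    pass  # a context switch could happen here without losing work
```

The state to save between slices is tiny: the operands, the accumulator, and one loop index, which is why the idea maps naturally onto either an interruptible software loop or a coprocessor with its own working registers.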
I also wonder how much faster or slower a hardware implementation would be compared to an equally well-optimized software implementation.
Bye, Skybuck.