Hello,
In principle it should be possible to add, subtract, multiply or divide a 1-billion-digit number by another 1-billion-digit number; even kids learn this in school with the standard math tables and the standard algorithms for addition, subtraction, multiplication and division.
These algorithms can be applied to numbers of arbitrary length (in positional number systems).
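For example, here is the schoolbook addition algorithm applied to decimal digit strings of any length. This is just a minimal sketch in C to show the idea; the function name and the digit-string representation are my own choices for illustration, not anyone's library API:

#include <stdio.h>
#include <string.h>

/* Schoolbook addition on decimal digit strings, rightmost digit first,
 * exactly as taught in school. Illustrative sketch, not production code.
 * 'out' must have room for max(strlen(a), strlen(b)) + 2 bytes and may
 * carry a leading '0' when there is no final carry. */
void add_decimal(const char *a, const char *b, char *out)
{
    size_t la = strlen(a), lb = strlen(b);
    size_t n = (la > lb ? la : lb) + 1;  /* +1 for a possible final carry */
    int carry = 0;
    out[n] = '\0';
    for (size_t i = 0; i < n; i++) {
        int da = (i < la) ? a[la - 1 - i] - '0' : 0;
        int db = (i < lb) ? b[lb - 1 - i] - '0' : 0;
        int s = da + db + carry;
        out[n - 1 - i] = (char)('0' + s % 10);
        carry = s / 10;
    }
}

int main(void)
{
    char result[16];
    add_decimal("999", "1", result);
    printf("%s\n", result);  /* prints 1000 */
    return 0;
}

Nothing in the loop depends on the length of the inputs, which is exactly the point: the same procedure works for 3 digits or 1 billion digits.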
(I think these algorithms are ancient, although strictly speaking the positional versions we learn in school trace back to Indian and Arabic mathematicians such as al-Khwarizmi rather than the Greeks, whose numerals were not positional.)
However, x86 CPUs today cannot do what kids and common people can do, and that is math with arbitrarily long numbers.
x86 CPUs only do math on a fixed number of bits (8, 16, 32 or 64 at a time)... which is quite disappointing.
Why is this? Sigh.
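In practice programmers (or libraries such as GMP) work around this by chaining the fixed-width operations themselves, propagating the carry from word to word; x86 even has an ADC (add-with-carry) instruction for exactly this purpose. A rough sketch of the idea in C, assuming little-endian arrays of 64-bit limbs (the function name and layout are just my assumptions for illustration):

#include <stdint.h>
#include <stddef.h>

/* Multi-limb addition: chains fixed-width 64-bit adds by propagating the
 * carry manually, which is what the x86 ADC instruction does in hardware.
 * a, b and out each hold n limbs, least significant limb first.
 * Returns the final carry out (0 or 1). */
uint64_t add_limbs(const uint64_t *a, const uint64_t *b,
                   uint64_t *out, size_t n)
{
    uint64_t carry = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t s = a[i] + carry;
        uint64_t c1 = (s < carry);       /* overflow from adding the carry */
        out[i] = s + b[i];
        carry = c1 + (out[i] < s);       /* overflow from adding b[i] */
    }
    return carry;
}

So the hardware building blocks are there; what is missing is an instruction (or chip) that runs the whole loop for you.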
Some possibilities:
- Add new instructions which can multiply arbitrarily long numbers in memory.
(Example: MULTIPLY memory_pointer1, memory_length1, memory_pointer2, memory_length2, memory_pointer3, memory_length3 — something like that... it could even be a 2-memory-operand form instead of 3.) (A * B = C; a software sketch of these semantics follows after this list.)
or
- Add higher-level chips which can drive the low-level chips to run these more complex math algorithms...
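For what it's worth, the semantics of the proposed multiply instruction could look roughly like this in software: schoolbook long multiplication over limbs, in base 2^32 instead of base 10. The little-endian limb layout and the names here are my assumptions for the sketch, not any real instruction set:

#include <stdint.h>
#include <stddef.h>

/* Schoolbook long multiplication: A (la limbs) * B (lb limbs) = C.
 * C must have la + lb limbs and be zeroed by the caller. Each 32x32-bit
 * partial product is accumulated in 64 bits so no carry is ever lost. */
void mul_bignum(const uint32_t *a, size_t la,
                const uint32_t *b, size_t lb,
                uint32_t *c)
{
    for (size_t i = 0; i < la; i++) {
        uint64_t carry = 0;
        for (size_t j = 0; j < lb; j++) {
            uint64_t t = (uint64_t)a[i] * b[j] + c[i + j] + carry;
            c[i + j] = (uint32_t)t;      /* low 32 bits stay in place */
            carry = t >> 32;             /* high 32 bits carry onward */
        }
        c[i + lb] += (uint32_t)carry;    /* final carry of this row */
    }
}

A hardware or microcoded version of this loop is exactly what the proposed instruction would encapsulate.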
The point is: this would save programmers from having to reinvent the old math wheel, saving them time ;) and of course giving them an easier coding platform / more productivity :D
Just some thoughts.
Bye, Skybuck.