. That's true. But in assembly language, of course, you can just reserve
64 bits of space for numbers that may be 16, 32, or 64 bits long. Your subroutine can then be called to handle numbers of each size, after you set the mode properly in the calling routine.
Compilers don't generate such code because languages don't allow one to specify a need for it.
well, .NET can be made faster by employing a more involved translator. for example, an essentially braindead direct JIT would be fairly slow.
if it reworks the code, say, into SSA form and handles the stack and variables partly through register allocation, then very likely you will get faster results.
can't say exactly what the various .NET VMs do, as I haven't looked too much into this.
for example, my compiler uses a language I call RIL (or RPNIL), which is essentially a vaguely PostScript-like language. it is more or less a stack-machine model. so, I compile C to this language, and this language to machine code.
and, the spiffy trick: the RIL stack, increasingly, has less and less to do with the true (x86) stack. part of the stack is literals (only in certain situations are they actually stored on the machine stack), part is registers (look like stack, but really they are registers), and soon, part may be in variables (needed, for technical reasons, to get my x86-64 target finished).
so, it looks like a stack machine, but a lot goes on that is not seen... I chose a stack-machine model because it is easy to target (and was familiar), but this does not strictly mean that the internals, or the output, have to work this way.
and, odd as it may sound, only a very small portion of the RIL operations actually generate CPU instructions, and typically then, it is for something that happened earlier in the codestream...
yes, a few possible reasons come up...
I don't understand exactly how this differs that much from current approaches. another CPU is another CPU.
x86 and, soon, x86-64, are the most common (at least for normal computers).
yes. for PCs, a standardized GUI would work. for non-PCs, it is probably a lot harder (given the wide variety of physical interfaces).
now, a very simple and general UI creates a usability problem on a normal computer, as these kinds of UIs often end up treating the keyboard like a paperweight (for many kinds of apps, the keyboard serves its purpose well).
now, in my case, I typically do my GUIs custom via OpenGL... this gives a good deal of portability at least (stuff still works on linux, to what extent it works...).
Any decent C compiler has a 64-bit integer type regardless of what CPU it's targeted at; it's a requirement of ISO C99. If targeted at a 32-bit (or 16- or 8-bit) CPU, the compiler emulates 64-bit operations. This is not novel; C89 compilers targeting 8- and 16-bit CPUs provided the same emulation for
32-bit integer types.
I'm not familiar with many other languages, but I believe the same is true. If you use a 64-bit integer type, the compiler or interpreter does whatever's needed to provide the illusion of a machine that natively supports such.
S
--
Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking
The cost of determining whether 64-bit emulation is needed on a 32-bit system and falling back to 32-bit operations when it's not is higher than the cost of just using it all the time.
If you're on a true 64-bit system, run the binary that was recompiled for that architecture. The source will be identical if it's properly written (well, except for ASM; I'm talking HLLs).
That logic is already there; the same opcodes are used for 16-, 32-, and
64-bit operations. The opcode for 8-bit operations is typically only different by one bit.
The problem, which you keep refusing to acknowledge, is that you can't load or store data of indeterminate size because the compiler (or assembly coder) needs to know how much space to reserve for objects at compile time. And, of course, loads are by far the biggest consumer of CPU time in a typical application -- some spend up to 80% of their time waiting on memory. Worrying about a few extra _nano_seconds to execute emulated 64-bit operations on a 32-bit machine is pointless when that same machine just waited tens of _micro_seconds for the data to show up. Any unpredictable branches, like doing run-time checks on data to see whether or not to use emulation, will stall the pipeline and cost up to tens of microseconds again to save a few nanoseconds of execution time.
If the CPU provides 64-bit operations, it's not emulation -- it's a 64-bit CPU. In principle 64-bit ops could take longer than 32-bit ops, but shipping CPUs show that's not the case: either the CPU doesn't do 64-bit ops at all, or it does them just as fast as 32-bit ops.
It's obvious you don't even understand the "current stuff". Go read a book or ten, get some real world experience, and quit wasting others' time with your inane ideas.
Most programming languages have that concept, whether they're designed to make OO easy or not. Dynamic memory allocation has been old news for, what,
30 years now?
If you're using a constructor, you've already lost the performance war vs. a compiler's (and CPU's) built-in types, even ones that have to be emulated.
No, there's a lot of other things. For instance, your "flexible" objects that you proposed in your C++ implementation actually result in multiple static code paths that you select between using incredibly inefficient (for this purpose) virtual method calls and operator overloading. Simply emulating 64-bit operations on CPUs that don't have such is going to be faster.
Irrelevant. The actual math operations have the same opcodes for 16-, 32-, and 64-bit values, and they do the exact same things to all three types of data.
Sort of. The mode flags only really control load/store operations. But they're there.
Actually, people have given you exact references, but you know so little about the x86 and AMD64 architectures that you didn't understand or even recognize them.
Oh gee... lemme crank out some POC here... oh wait... that's right... it is not our job to spoon-feed you source code; it is your job to RTFM and hopefully learn from it.
for example, there is also a non-standard extension known as '__int128', a full-on 128-bit integer type (I think gcc may support it in 32-bit land as well, but I haven't checked).
my compiler has a placeholder for this, but as of yet does not implement this type (can declare variables, but can't assign them or do arithmetic on them...).
Actually, only 2 and 3 are relevant here, because you first have to figure out if you need 32-bit or 64-bit data. If you don't know that, then your problem is either not well enough specified for you to start coding, or is so vague and general that you are better off using abstract types rather than fixed sizes (and probably better using a language such as Python, which has direct support for arbitrarily long integers, and can be combined with psyco to generate reasonably good machine code on the fly).
This solves your problems, since you are only ever passing 64-bit data.
Generating the different dlls (or other types of libraries or code) is easy - it's just a compiler flag to target x86 or amd64 code generation.