Flexible Instruction Set Encoding.

Which is completely and utterly useless for writing 64 bit applications.

Compatibility mode is for system software !

Couldn't care less about that.

I am an application developer !

I want fast 32 bit operations.

I want fast 64 bit emulated operations.

If only the processor had fast 64 bit emulation built in, or any other form of fast scaled operations.

But nooooooooooooooooooooo.

All of us programmers must write our own arbitrary software arithmetic.

Which makes everything a whole lot slower, and which might have been unnecessary if some decent scaling had been added to processors.

Not to mention the horror of not having a generic integer type which can scale as required.

Now please follow your own advice and learn to read really carefully, I might add.

Bye, Skybuck.

Reply to
Skybuck Flying

No, I didn't mean the compiler, I meant the computer/CPU.

Sorry about the confusion ! ;)

Bye, Skybuck.

Reply to
Skybuck Flying

I want to be as close as possible to the metal.

I am going in the exact opposite direction of .NET and Java.

The further languages and byte codes are abstracted from the hardware, the slower it all gets, because people forget reality: it has to be executed by some hardware, and nothing is for free.

Instead they should try to build a generic, abstract, scalable system directly in the hardware, with future scalability in mind and of course super performance.

That's a lovely goal ! ;)

Bye, Skybuck.

Reply to
Skybuck Flying

Yes, .NET is as bad in 64-bit as it is in 32-bit. Since it isn't really CPU instructions it doesn't qualify. It is a very slow way to get any complex task done.

There was an effort a while back to make a CPU that spoke Java bytecode directly. I don't think they could get good speeds out of it.

A far better way to go would be to standardize on the instruction set of a real CPU. Ideally it should be a small and simple one, so that anyone who wanted to could add a special section of control store to their processor to allow it to be run directly.

This would require that a standard way of doing the GUI also be created. It needs to be a very simple GUI so that even simple machines could do it.

Reply to
MooseFET

Why? If there is a possibility of needing 64-bit range, just compile for 64-bit and you don't have to worry about it. If simulation is needed because the CPU doesn't have 64-bit ALUs, the compiler will insert the appropriate instructions to simulate that.
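For instance (a minimal sketch; the function name is made up):

#include <stdint.h>

/* Plain C, nothing special required of the programmer. On a 32-bit
   x86 target the compiler typically lowers this to an add/adc
   instruction pair, since there is no 64-bit ALU; on x86-64 it
   becomes a single 64-bit add. */
uint64_t add64(uint64_t a, uint64_t b)
{
    return a + b;
}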

No. Self-modifying code has horrible performance on modern machines, and you'd end up wasting memory because you'd have to space out all your variables as if they were 64-bit anyways. Simply running in 64-bit mode (even emulated) all the time will be faster on modern CPUs than trying to decide at runtime which is better.

If you're not in possession of the source, then tell the vendor you want a 64-bit version; either they comply or you find a better product.

Recompiling only needs to be done once, and it takes a trivial amount of time compared to the QA effort for any reasonably complex piece of software. Now consider the QA effort required for a program that dynamically modifies its code.

Why bother, when you can get 64-bit variants of mainstream OSes and 64-bit chips have been available for years? And, for that matter, 64-bit operations have long been available via compiler mechanisms for those sticking with 32-bit systems?

Again, you have started with a solution and are trying to find problems that match. That is almost always a waste of time and effort.

S
--
Stephen Sprunk         "God does not play dice."  --Albert Einstein
CCIE #3723         "God is an inveterate gambler, and He throws the
K5SSS        dice at every possible opportunity." --Stephen Hawking
Reply to
Stephen Sprunk

Just about the opposite is true. The big hint from Microsoft should have been why 64-bit drivers are needed in 64-bit Windows while 32-bit applications can still run in Compatibility Mode... it's called WoW64 to them.

How do they achieve such black magic without the genius implementation that you suggested? Well, there is a funny thing called binary/object formats, and they contain information pertaining to the architecture that the binary/object is supposed to run on/target.
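For example, here is roughly what reading that tag looks like for an ELF binary (a minimal sketch; PE files carry an equivalent Machine field in their headers):

#include <stdint.h>
#include <stdio.h>

/* The ELF header stores the target architecture in e_machine, a
   little-endian 16-bit field at byte offset 18: 3 = x86, 62 = x86-64.
   Loaders read this tag to decide how to run the binary. */
int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s elf-file\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }
    unsigned char hdr[20];
    if (fread(hdr, 1, sizeof hdr, f) != sizeof hdr) { fclose(f); return 1; }
    fclose(f);
    uint16_t machine = (uint16_t)(hdr[18] | (hdr[19] << 8));
    printf("e_machine = %u (%s)\n", machine,
           machine == 62 ? "x86-64" : machine == 3 ? "x86" : "other");
    return 0;
}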

You would if you'd bother reading and/or testing things yourself. Do you even have a 64-bit CPU and OS???

I shudder at the thought, go back to VB.Net.

Good, you have them in Protected, Long/Compatibility Mode.

No need, simply develop 64-bit apps for a 64-bit OS on a 64-bit architecture such as the x64 and you will be better off.

You really need to get HLL's and typedefs out of your head if you want to survive assembly language.

I have. I initiated 64-bit support for NASM, mainly from AMD's specifications. I have 64-bit XP and 64-bit Vista... 32-bit applications run just fine on both. Such capability is common knowledge for programmers, and it was designed that way to avoid backwards-compatibility issues like those the Real Mode -> Protected Mode transition brought about.

Beyond that, I've *actually* assembled/linked/tested my own assembly language programs in Long Mode, both 32-bit and 64-bit.

PS: Skybuck, ever since you started posting here you have been coming off as a real dumb-ass. The usual type who asks questions twice before reading and/or testing things for himself. I suggest you change your name from "Skybuck Flying" to a more appropriate one, like "Skybuck, take a Flying f*** at a rolling donut."

Reply to
SpooK

Think about it:

CPU's are known to be able to make decisions.

The best decision would be:

  1. If 32-bit is required, run the application sections in 32-bit mode at full speed.

  2. If 64-bit is required, run the application sections in:

2.1. 64-bit emulation for 32-bit systems.

or

2.2. Real 64-bit for 64-bit systems, if available.

Further requirement:

  1. Keep the source code as much the same as possible.

This is impossible with today's architecture.

Which means:

I must still re-write my software.

How much effort this is remains to be seen; it's definitely not FREE and it's definitely not that trivial.

All 32 bit integers have to be converted to 64 bit integers.

And then the software has to be RE-TESTED, which requires lots of time.

Also the compilers have to be modified and tested as well, and might introduce bugs.

Then software has to be redistributed etc.

Definitely not FREE !

Maybe in the future languages will have "generics" or something so that the same code can easily re-compile to different integers.

Without actually having to re-test all that much, otherwise it's useless.

I have to be absolutely certain that no matter what is generated for the generics, it really works as it's supposed to.
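For what it's worth, plain C typedefs already get part of the way there (a minimal sketch; myint is a made-up name):

#include <stdint.h>

/* Change this one typedef and recompile; everything declared as
   myint scales with it. The re-testing burden remains, of course. */
typedef uint32_t myint;   /* or: typedef uint64_t myint; */

myint triple(myint x)
{
    return x + x + x;
}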

C++ Templates won't do... those generate C++ code... I like to be able to debug my code, and know for sure what code is written.

I don't even program in C++, it's just an example.

Sure some people will say it's possible to debug templates... how much effort does that cost huh ?!

All I am saying is:

Converting 32-bit software to 64-bit software definitely requires much effort ! More effort than should really be necessary !

Bye, Skybuck.

Reply to
Skybuck Flying

If you had only followed my advice you might have learned something; it's not that hard.

There are two 64 bit modes:

  1. The compatibility mode you mentioned, which I already explained to you is for system software and does not offer 64-bit applications.

  2. The real 64-bit mode, which is for 64-bit operating systems and 64-bit applications. Even this mode has a default operand size of 32 bits, and the REX prefix is needed, which means 32-bit applications must be recompiled for 64-bit capabilities. Unless the 32-bit applications use 64-bit emulation; in that case they can still run on what is called Windows on Windows, a Microsoft invention for running 32-bit applications on a truly 64-bit operating system.

At least that's how I interpret the manual !

Bye, Skybuck.

Reply to
Skybuck Flying

Not much, if the programmer knows the language he's working with. Recompilation will do the trick if the code doesn't assume that "int" (and such) is a specific number of bits (which C/C++ doesn't guarantee in the first place).

If the conversion is a lot of work, then it's crappy code, or not intended to be forward-looking or portable. Think of Linux and *NIX in general, for example: most software written for these just works on 32- and 64-bit systems, with little- and big-endian architectures, among other things. It's a matter of knowing what the jack you are doing, more than anything.
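A minimal sketch of that kind of forward-looking code (the names are illustrative):

#include <stddef.h>
#include <stdint.h>

/* Width-explicit types from <stdint.h> mean the same source
   recompiles cleanly for 32- and 64-bit targets. */
typedef uint32_t pixel_t;     /* exactly 32 bits on every target */
typedef uint64_t filesize_t;  /* large files need 64 bits anyway */

/* size_t tracks the native word size, so no assumption is made
   about sizeof(int) or sizeof(void *). */
size_t count_nonzero(const pixel_t *p, size_t n)
{
    size_t count = 0;
    for (size_t i = 0; i < n; i++)
        if (p[i] != 0)
            count++;
    return count;
}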

Reply to
aku ankka

Why not interpret the manual *before* posting all this bollocks?

Reply to
aku ankka

Once again, wrong. System software (the kernel) is *required* to clear the CS.L bit in the code segment descriptor to put such programs into Compatibility Mode. This poses no issues, as basic privilege/task switching reloads segment registers, and for a 64-bit kernel CS.L would be enabled. This is why you can have a 64-bit OS, drivers and apps running alongside 32-bit applications with no issues whatsoever.

The fundamentals of this interoperability have the same basic relationship as Protected Mode has with Virtual 8086 mode. It's just that the Long Mode to Compatibility Mode relationship is much cleaner, faster and easier to implement.

Then you interpreted things incorrectly.

The REX prefix is only required for certain instructions and to access the extended registers when available. Your bigger worry should be the extra SIB byte that is needed for absolute addressing on instructions that default to RIP-relative addressing.

Windows on Windows is really nothing more than support libraries to map the Win32 subsystem libraries to 64-bit NT kernel calls.

To loosely state what has been previously said, you are trying to implement an inefficient solution when an efficient and working solution already exists... and is being used every single day.

Keep reading ;)

Reply to
SpooK

#include <stdbool.h>  /* bool */
#include <stdint.h>   /* uint32_t, uint64_t */

uint64_t calc(bool use64bit, uint64_t a, uint64_t b, uint64_t c)
{
    if (use64bit)
        return a + b * c;  /* full 64-bit arithmetic */
    else
        return (uint32_t)a + (uint32_t)b * (uint32_t)c;  /* truncating 32-bit arithmetic */
}

Still, why bother?

S
--
Stephen Sprunk         "God does not play dice."  --Albert Einstein
CCIE #3723         "God is an inveterate gambler, and He throws the
K5SSS        dice at every possible opportunity." --Stephen Hawking
Reply to
Stephen Sprunk

[...]

The code could possibly suffer a marked degradation of performance in that type of scenario.

Reply to
Chris Thomasson

Not all decisions are cheap, though.

Why bother? The cost of figuring out whether emulation is needed is greater than the cost of just using it without checking. Since using real 64-bit mode is as fast as, if not faster than, using 32-bit mode (even without emulation), there is no reason to bother. If your code might need 64-bit operations, you code it to use 64-bit data types. If the emulation is a significant performance hit, recompile for 64-bit systems. Provide both and let people use the best one their systems support.

It should cost you _zero_ to recompile for 64-bit if your code was written correctly in the first place.

Already done.

Distribute them together with the next software update.

That capability has existed since the earliest days of C.

Any time you change anything, you need to retest. That's why people put in automated test harnesses, so they can recompile, set the tests running, and come back after lunch (or overnight, for large projects) to find out if they broke anything. That is a general best practice and has nothing to do with the 32/64-bit problem.
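A minimal sketch of such a harness, reusing the calc() example from earlier in the thread (the test values are made up):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

uint64_t calc(bool use64bit, uint64_t a, uint64_t b, uint64_t c);

int main(void)
{
    /* Same source, same tests, whichever width it is compiled for;
       rebuild, rerun, and any regression shows up immediately. */
    assert(calc(false, 1, 2, 3) == 7);                          /* 32-bit path */
    assert(calc(true, UINT32_MAX, 2, 3) == UINT32_MAX + 6ULL);  /* 64-bit path */
    puts("all tests passed");
    return 0;
}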

If it was written correctly, all it takes is recompiling. If your coders are incompetent, that's a much larger problem than 32/64 bit migration.

Yet again, you have failed to identify any actual real-world problem that this supposed solution solves better than what already exists. It's a solution in search of a problem.

S
--
Stephen Sprunk         "God does not play dice."  --Albert Einstein
CCIE #3723         "God is an inveterate gambler, and He throws the
K5SSS        dice at every possible opportunity." --Stephen Hawking
Reply to
Stephen Sprunk

You actually managed to get something correct. Congratulations; it's rare.

No, the REX prefix is necessary to access the additional registers; that's why it's named the Register EXtension prefix. The beauty of long mode is that the CPU works the same way whether you want 32- or 64-bit math, and they run at the same speed. Only loads and stores require any additional work (to mask off the top 32 bits), and you _have_ to recompile to change from 32- to 64-bit data types because the amount of storage (and thus pointer math) for an object will change. If you don't understand why, you need to go study assembly language more before suggesting complicated and unnecessary modifications to CPUs.
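A small C illustration of why the storage change forces a recompile (the struct is made up):

#include <stdio.h>

/* The size of a pointer-bearing struct differs between 32-bit and
   64-bit compiles, so every array stride and field offset the
   compiler bakes into the machine code changes with it. */
struct node {
    struct node *next;  /* 4 bytes on a 32-bit target, 8 on 64-bit */
    long         value; /* typically 4 bytes on ILP32, 8 on LP64   */
};

int main(void)
{
    struct node list[4];
    /* &list[1] is &list[0] plus sizeof(struct node); that scaling
       constant is fixed at compile time, not decided at run time. */
    printf("sizeof(struct node) = %zu\n", sizeof(struct node));
    printf("stride = %td bytes\n", (char *)&list[1] - (char *)&list[0]);
    return 0;
}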

Actually, AMD invented it and everyone else copied. I trust their chip designers' judgement about how best to skin this cat more than someone who's shown they are incapable of even comprehending the existing manuals or twos-complement math.

S
--
Stephen Sprunk         "God does not play dice."  --Albert Einstein
CCIE #3723         "God is an inveterate gambler, and He throws the
K5SSS        dice at every possible opportunity." --Stephen Hawking
Reply to
Stephen Sprunk

You do realise that this sort of thing has been standard practice for over a decade? It's seldom worth the effort for 32-bit / 64-bit issues (it's not that often that using native 64-bit integer code makes a difference other than for large memory applications), but it can be very relevant for floating point code. Programs that need high speed floating point code have often had sections that have alternative implementations (such as using whatever SSE instructions the processor supports), chosen at run-time. A very simple method to handle this is to write your critical code in a library, and make different compilations (such as for a 32-bit target and a 64-bit target) and load the appropriate one at run-time.
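A minimal sketch of that run-time selection pattern in C (the function names and the feature probe are illustrative; real code would use CPUID or something like GCC's __builtin_cpu_supports):

#include <stdio.h>

/* Two implementations of the same hot routine; which one runs is
   decided once at startup rather than on every call. */
static double dot_scalar(const double *a, const double *b, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}

static double dot_sse2(const double *a, const double *b, int n)
{
    /* Stand-in: a real build would use SSE2 intrinsics here. */
    return dot_scalar(a, b, n);
}

static int cpu_has_sse2(void)
{
    return 1;  /* placeholder for a real CPUID check */
}

static double (*dot)(const double *, const double *, int);

int main(void)
{
    dot = cpu_has_sse2() ? dot_sse2 : dot_scalar;  /* choose once */
    double a[3] = {1, 2, 3}, b[3] = {4, 5, 6};
    printf("%f\n", dot(a, b, 3));
    return 0;
}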

Reply to
David Brown

This is, in fact, an old idea.

In my description of a computer architecture at

formatting link

several of the possible instruction modes work this way.

The FPP-12, an attachment for the PDP-8 computer, worked as a separate computer in its own right, and it used mode bits to determine if the same instructions acted on 24-bit integers or 36-bit floating-point numbers.

Usually, though, because "real programs" use constant values, or have other data type dependencies, this is used *not* as a way to allow the same routine to work with different data types (some _languages_ provide such facilities with source code) but merely as a way to make instructions more compact, so that a useful instruction repertoire can fit conveniently into the architecture's data word.

John Savard

Reply to
Quadibloc

Ah, but there are still mode bits, so that what the instruction does *without* the prefix may be 16, 32, or 64 bits depending on the mode.

John Savard

Reply to
Quadibloc

Now that's something different. I thought you were talking about how the program can call the same subroutine to work on 16 bits, 32 bits, or 64 bits, as long as it *tells* the subroutine which one to use by setting mode bits.

John Savard

Reply to
Quadibloc

386 assembler can do 16 or 32 bits like that, and now 64 bits are included with x86-64.

But not C code, because the C language requires you to specify types; it does not assume that it is going to be compiled for a machine with mode bits.

Some languages let you specify "generic" routines, but that just means the compiler generates a version of the routine for every type required.
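In C that per-type generation can be spelled out with a macro (a minimal sketch; DEFINE_SUM is a made-up name):

#include <stdint.h>
#include <stdio.h>

/* Each expansion makes the compiler emit a separate copy of the
   routine for that integer type: exactly the "one version per
   type" behaviour described above. */
#define DEFINE_SUM(T)                        \
    static T sum_##T(const T *v, int n)      \
    {                                        \
        T s = 0;                             \
        for (int i = 0; i < n; i++)          \
            s += v[i];                       \
        return s;                            \
    }

DEFINE_SUM(uint32_t)   /* generates sum_uint32_t */
DEFINE_SUM(uint64_t)   /* generates sum_uint64_t */

int main(void)
{
    uint32_t a[3] = {1, 2, 3};
    uint64_t b[3] = {1ULL << 40, 2, 3};
    printf("%u\n", (unsigned)sum_uint32_t(a, 3));
    printf("%llu\n", (unsigned long long)sum_uint64_t(b, 3));
    return 0;
}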

John Savard

Reply to
Quadibloc
