Updates

Not always: I've seen system failures triggered simply by faster pathways. There was nothing functionally wrong in the code at all, but the way it interacted with the hardware changed.

Re tool version control: given that PCs are now so fast, we've tended to biff 'make' and simply rebuild everything from a larger batch file. This reduces exposure to version creep, and we archive the batch file and all called files (including the tools). GUIs are not needed in this flow, so the archives are small and simple.
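For illustration, a minimal sketch of that flow in Python (the compiler path 'tools/sdcc' and the directory layout are hypothetical placeholders, not our actual setup):

#!/usr/bin/env python3
# Sketch of a "rebuild everything" flow: no 'make', no dependency
# tracking - every source is recompiled, then the build inputs
# (sources AND tools) are archived together. Paths are placeholders.
import pathlib
import subprocess
import tarfile

SRC = pathlib.Path("src")
TOOLS = pathlib.Path("tools")      # the compiler/linker binaries themselves
ARCHIVE = "build_snapshot.tar.gz"

# Full rebuild from scratch: simple, and immune to stale dependencies.
for c_file in sorted(SRC.glob("*.c")):
    subprocess.run([str(TOOLS / "sdcc"), "-c", str(c_file)], check=True)

# Archive the sources *and* the tools that built them, so the exact
# build can be reproduced even after the installed toolchain changes.
with tarfile.open(ARCHIVE, "w:gz") as tar:
    tar.add(SRC)
    tar.add(TOOLS)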

-jg

Reply to
Jim Granville


Yup. That's what I do. There is a slight downside to this approach, though. Speaking of which, does anyone know of a reliable source for 8", hard-sectored floppy disk drives? ;)~
Reply to
Bob

8" double or single density? :D (not that the drive mattered, was just a controller thing of course). I never had them two-sided, though. Had (borrowed) two Bulgarian clones of some I believe Shugart drives (not sure really). They worked reasonably well - with my disk controller hard & firmware, used to run MDOS09... Single side double density was the maximum MDOS could think of (and had to be tricked into seeing the 256 byte sectors as 128 byte ones).

And, lo and behold, many of these disks are still alive and working, MDOS09 being emulated in a DPS window on a Power (PPC) machine (including the graphics editor I wrote back then, which has made many PCBs and was usable years before anything of comparable usability appeared in the PC world). The 8" disks (which later became 5.25" and then 3.5", _partitioned_ to use all of the 1.44M, ROFL) are files now.


Dimiter

------------------------------------------------------
Dimiter Popoff
Transgalactic Instruments
------------------------------------------------------

Reply to
dp

This is a good question. One approach I've not seen mentioned yet is to do an automated diff of the object code. If the object code is unchanged, it's assumed to be good. If it differs, an engineer can look at the listing file and check whether the changes are acceptable. If they aren't, it usually doesn't take long to figure out whether the problem is in the compiler or in the project code.
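A minimal sketch of that comparison, assuming GNU binutils' objdump is on the path (the .o file names are just placeholders):

#!/usr/bin/env python3
# Sketch: disassemble two object files and diff them. If the text is
# identical, the new toolchain is assumed good; otherwise print the
# diff for an engineer to review. Assumes GNU objdump is available.
import difflib
import subprocess
import sys

def disassemble(obj):
    out = subprocess.run(["objdump", "-d", obj],
                         capture_output=True, text=True, check=True)
    # Skip the header line naming the file, so only code differences count.
    return [ln for ln in out.stdout.splitlines() if "file format" not in ln]

old = disassemble("firmware_old.o")
new = disassemble("firmware_new.o")

if old == new:
    print("object code unchanged: assumed good")
else:
    sys.stdout.write("\n".join(difflib.unified_diff(old, new, lineterm="")) + "\n")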

Often, the changes are a result of either compiler bug fixes or optimization enhancements. Either way, it's good to compare them against the compiler's change notes to make sure you understand what was actually changed. Some compiler vendors are not very good at documenting exactly what was changed, or they publish an incomplete list.

Ed

Reply to
Ed Beroset

You would need a lot of luck for this to be feasible, or a very smart diff. For instance, a few small changes in the compiler can produce different register allocations. If your diff can't recognize that a register has been changed from "r0" to "r1", but the code is otherwise identical, a lot of code may be flagged.
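For example, before diffing, the script could rename registers to canonical names in order of first appearance; a sketch assuming ARM-style "rN" register names (a real disassembly would also need addresses and literal pools masked):

#!/usr/bin/env python3
# Sketch: normalize arbitrary register choices so that code identical
# up to a renaming (r0 vs r1) compares equal.
import re

REG = re.compile(r"\br\d+\b")
ADDR = re.compile(r"^\s*[0-9a-f]+:\s*")   # leading addresses in a disassembly

def normalize(lines):
    names = {}
    def rename(m):
        # first register seen becomes REG0, the next new one REG1, ...
        return names.setdefault(m.group(0), "REG%d" % len(names))
    return [REG.sub(rename, ADDR.sub("", ln)) for ln in lines]

a = normalize(["mov r0, #5", "add r0, r0, r2"])
b = normalize(["mov r1, #5", "add r1, r1, r3"])
assert a == b    # identical apart from register allocation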

Reply to
Arlet Ottens

Yes, a smart diff is vital. As you point out, if the target processor is more RISC-like, register allocation may change arbitrarily. With more register-starved architectures such as the venerable 8051, it tends to be memory allocation instead, but the issue is similar, and so are the solutions. For instance, those kinds of changes most often show up as isolated single-byte changes in the object code for a particular function or subroutine. A cheap but surprisingly effective way to weed them out is to simply compare the size of the resulting function, ignoring any alignment bytes that may be present. If it's the same size, it's most often just a reallocation difference. If it's NOT the same size, it's almost never just a reallocation, and is therefore worth a look to understand what's happening.
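A sketch of that size filter, assuming GNU binutils' nm on ELF objects (for other toolchains you'd parse the linker map file instead; the file names are placeholders):

#!/usr/bin/env python3
# Sketch: compare per-function code sizes between two builds. Same
# size usually means just a register/memory reallocation; a changed
# size is worth a manual look. Assumes 'nm -S' output on ELF objects.
import subprocess

def function_sizes(obj):
    sizes = {}
    out = subprocess.run(["nm", "-S", obj], capture_output=True,
                         text=True, check=True).stdout
    for line in out.splitlines():
        parts = line.split()
        # defined symbols with a size field: <addr> <size> <type> <name>
        if len(parts) == 4 and parts[2] in ("T", "t"):   # code symbols
            sizes[parts[3]] = int(parts[1], 16)
    return sizes

old = function_sizes("old_toolchain.o")
new = function_sizes("new_toolchain.o")
for name in sorted(set(old) | set(new)):
    if old.get(name) != new.get(name):
        print("%s: %s -> %s bytes, worth a look"
              % (name, old.get(name), new.get(name)))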

One way I've done this is to use Perl scripts to go through the assembly language output listing files. Another is to use a script to disassemble the object files. Which way is "better" will depend on a number of factors, including what kinds of tools you have available and the details of the particular architecture. In either case, and depending on the architecture, you need to be able to intelligently ignore arbitrary differences in register or memory allocation. As a side benefit, you get a better understanding of what kind of code is likely to result from any given high-level language construct, which can be very handy for speed or size optimization without having to dive into writing assembly language.

Ed

Reply to
Ed Beroset
