Updates

It's a well-known problem: old, proven code quits working properly after a recent upgrade of the toolchain. For a project of more than a few hundred lines, it is not feasible to re-test each and every unit. Just wondering what your approach is to this practical issue.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant


Reply to
Vladimir Vassilevsky

While it's not a direct answer to your question, I strongly believe that the toolchain that built a valuable, revenue-generating software product should go under some quality-control umbrella as a quality resource, just like any physical tool. In other words, after release, lock up the computer for future changes but never upgrade it, and the problem is avoided completely. It's a small price to pay for being able to relax the retesting effort after rebuilding in the future.

That's my preference but I have yet to be associated with a company that does it that way.

JJS

Reply to
John Speth

Treat it just like any other bug in the code.

Reply to
Arlet Ottens

(JJS - please post using correct quotation formatting for a newsgroup - it makes it much easier for the others to follow.)

That /is/ a direct answer to the question. The exact toolchain used for code generation - compiler, library, etc. - is part of the makeup of the project, and should be treated with the same respect as the source code. You can change things like the debugger, IDE, editors, etc., if you want.

Sometimes you want to update the compiler and library used in a project - perhaps to fix some bugs, or because you later need faster code generation. But you treat it like any other major code change in terms of testing and qualifying.

This is why toolchain installers that update existing versions, put themselves on your path, change environment variables or registry entries, etc., are fundamentally broken.

Regards,

David

Reply to
David Brown

The project could move on for many years. Sticking with an outdated and unsupported toolchain may not be a good idea.

I've seen companies which lock the state of all tools at the beginning of the project; however, it doesn't seem to help them much.

Besides, if the code immediately breaks when upgrading the toolset, that usually indicates a trivial application bug, like a runaway pointer or a memory leak.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant


Reply to
Vladimir Vassilevsky

Hi Vladimir,

Well, don't upgrade the toolchain for proven code! :)

I check in the entire toolchain as part of the project. So each project has a "toolchain" folder within it. I find there is rarely a need to update the toolchain for a working project, but if I ever do, it rolls backwards and forwards as, e.g., old branches are checked out. So any given release can be rebuilt using the toolchain as it was when it was released.
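
Roughly, the Makefile then points at the in-tree toolchain with a relative path. A minimal sketch (the directory layout and target triplet here are just made-up examples):

# toolchain/ is committed alongside the source, so checking out any
# old tag or branch brings back the exact compiler that built it.
CROSS   := $(CURDIR)/toolchain/bin/arm-elf-
CC      := $(CROSS)gcc
LD      := $(CROSS)gcc
OBJCOPY := $(CROSS)objcopy

The build rules themselves just use $(CC) and friends as normal, so nothing outside the repository decides which compiler gets run.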

git is quite good for this, very fast and compresses its repository so it does not take up too much space.

You can clone the git repo for the project on a remote machine (say a laptop in the field) and have it work, make changes, and merge it back again later.

--

John Devereux
Reply to
John Devereux

Whenever possible, that's obviously the correct solution. Alas, practical issues will sometimes forbid following that path.

There are at least two potentially serious drawbacks with that approach:

1) Size. Sooner or later you'll find yourself with 50+ sandboxes, each containing an identical copy of the same compiler. The installation size of some of those compilers, compared to that of laptop hard drives, will then become a problem.

2) Dongle-itis. If your compiler vendor follows the church of "never trust your customer", odds are the compiler won't work if copied like that rather than "properly" installed. This is particularly true for the road warrior who doesn't have access to the company's license server.

And then there are compilers that hide part of their identity outside the filesystem (environment, registry, ...), can't be made runnable by non-privileged users, etc.

Reply to
Hans-Bernhard Bröker

Those are non-issues. Either the original versions of the tools work, or they don't. If they don't, you have no choice. If they do, the risk of breaking things outweighs the potential benefit of "being up-to-date" or "supported" by a wide margin.

... or it's a very subtle bug that just so happened to be cancelled by an equally subtle bug in the toolchain, with a net result of "works as intended." In a case like that, "fixing" the toolchain can break the code's runtime behaviour. Been there, done that.

Reply to
Hans-Bernhard Bröker

IMHO that's a bit over the top. Freezing the entire computer creates more problems than it really solves. Computers in storage will fall prey to an entire set of hazards of their own, and they're generally just too inaccessible on short notice.

Freeze the exact tool versions? Absolutely. A project manual listing all versions of all tools becomes part of the release configuration. That's that.

If you're really worried and the tools in question allow it, you can always go virtual, i.e. do the reference build in a virtual machine (VMWare or similar), then archive the VM image.

Reply to
Hans-Bernhard Bröker

The practical approach is: don't do that. Never run a changing system.

Reply to
Hans-Bernhard Bröker

I've heard of people who check in an entire VM which includes the toolchain and everything else needed to build the firmware.

At the time I thought that sounded a bit over-the-top, and I also suspected that the VM would become obsolete/unusable before the toolchain would.

--
Grant Edwards               grant.b.edwards        Yow! World War III?
                                  at               No thanks!
Reply to
Grant Edwards

1) I freeze the toolchain at the previous revision. Indeed, sometimes the toolchain gets checked in to the source code management system.

2) I find out what's going on, because that should not happen*. Are you at zero warnings yet? Do you use a static code analyzer (lint/Fortify/Klocwork)?

*For 'C' compilers, at least - FPGA stuff might force you to upgrade....

-- Les Cargill

Reply to
Les Cargill

It is an excellent idea. Proof left as an exercise....

One interpretation of SEI level II is that you should be able to produce identical binaries straight out of the SCM system.

Could be.

-- Les Cargill

Reply to
Les Cargill

They're massively compressible.... and more than one project can point to the same toolchain. This is at least true for Linux or Windows with SVN....

And then they are not a vendor any more. Quote 'em chapter and verse of the quality policy in play while you fire 'em.

Those are fired, too.

-- Les Cargill

Reply to
Les Cargill

I agree entirely here.

There are also plenty of situations where toolchain changes cause issues. I've seen cases where one version of a toolchain had device headers defined mainly using masks and bit numbers, and the next version had moved to a bitfield struct arrangement. Or there was the classic case when MS changed the endian ordering of bitfields (not on an embedded compiler, but it's still an example of what suppliers can do). Most changes are subtle, but that only makes it harder to see the problems or identify what needs particularly careful testing.

One of the most common "it works with compiler x, but not with compiler x+1" situations is when the new compiler has a more aggressive optimiser. Barring the rare cases of compiler bugs, situations like this are always errors in the source code - but older code, code targeting poorer compilers, or code written by less experienced programmers, will often have such issues. It used to be that people were lax about using "volatile" - now you can see problems caused by people using "volatile" but not understanding the re-ordering of other parts of the code (modern cross-module optimisation makes this stand out much more). And even the most expert programmers can get bitten by strict aliasing issues.
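
To illustrate the "volatile" pitfall, here is a contrived sketch - the ISR and the names are invented, but the pattern is typical of what a more aggressive optimiser will expose:

#include <stdint.h>

volatile uint8_t rx_ready;   /* set to 1 by a (hypothetical) UART ISR   */
uint8_t rx_len;              /* also written by that ISR - NOT volatile */

uint8_t wait_for_length(void)
{
    while (!rx_ready)
        ;                    /* polling the volatile flag is fine */

    /* Nothing ties rx_len to rx_ready in the compiler's view: rx_len
     * is not volatile and nothing in this function writes it, so a
     * more aggressive optimiser is free to read it before or during
     * the loop rather than after it, and return a stale value.
     * Making rx_len volatile too, or adding a compiler barrier after
     * the loop (with gcc, asm volatile ("" ::: "memory");), pins the
     * ordering down. */
    return rx_len;
}

Strict aliasing bites in much the same way - the code relies on an ordering or type-punning guarantee the language never gave, and a smarter optimiser simply stops honouring it.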

Sometimes changing toolchains in a project is worth the effort - but it must be done with good reason, preparation, qualification, and testing. I have fifteen year old toolchains kept for fifteen year old projects.

Reply to
David Brown

Agreed.

I like to keep my toolchains in the same place on my systems, in directories with version numbers in the directory names. So the toolchain is clearly defined in my Makefiles.

Yes, I've done that too. It works well, and the images are easy to pass around and copy. I use virtual machines for several different development environments - it's particularly useful for those annoying tools that insist on installing themselves on top of previous versions.

I recommend VirtualBox over VMWare for desktops, but that's perhaps just a matter of taste and familiarity.

Reply to
David Brown

It's not too bad. For example from a recent project a gcc-arm-elf chain is 33MB (stripped binaries). You could always delete the toolchain from the working file set while the project is dormant, then it is 16MB in the git repo. Just check it out again to recreate.

50 x 33MB is about 1.6GB; that fits on the smallest flash drive anyone would still buy :)

Even if it were a hundred times bigger it would still be feasible, and laptop hard disk size would likely increase faster than I can write new projects.

But the other way to do it is install each toolchain to a standard location named after it like

/opt/gcc-arm-elf-4.0.2

And never delete it. Then in the makefile set up a pointer to that toolchain. If you switch to a new toolchain, update the pointer and it gets checked in with that project version.
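
In the makefile that only amounts to a few lines - something like this sketch (the version number and triplet are just examples):

# Pin this project to one specific toolchain install; switching
# toolchains is then a one-line edit that gets committed and tagged
# along with the code.
TOOLCHAIN := /opt/gcc-arm-elf-4.0.2
CROSS     := $(TOOLCHAIN)/bin/arm-elf-
CC        := $(CROSS)gcc

# Fail loudly if the pinned toolchain isn't installed on this machine.
ifeq ($(wildcard $(CC)),)
$(error expected toolchain $(TOOLCHAIN) is not installed)
endif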

But there are other things that can vary in the toolchain, like libraries, so you have to be more careful. Or I suppose something like

/opt/gcc-arm-elf-4.0.2-20120117

Yes, well as a communist hippy gcc on linux user I am glad to say I don't have to deal with that nonsense :)

--

John Devereux
Reply to
John Devereux

Some vendors, such as Atmel, try to make life difficult for us communist hippy gcc users. On the one hand, they do good work supporting gcc for the AVR; then they ignore the express wishes of the AVR community (as represented by AVR Freaks) by making their IDE strictly Windows-only. Then they release nicely packaged official builds of gcc for Linux - but badly packaged builds of gcc for Windows (with registry nonsense, upgrades rather than installation in new directories, etc.).

But I guess vendors like to make little challenges like this to make sure their customers have a certain level of knowledge and resourcefulness - it cuts down on support calls.

Reply to
David Brown

Yes, they do seem a bit dual-personality there. I used to use gcc-avr (before any support from Atmel, AFAIK), but have long since migrated to ARM for all new projects. The AVR architecture was getting a bit annoying by then; the separate code/data spaces in particular were unfriendly for larger C projects.

--

John Devereux
Reply to
John Devereux

I took a shot at ARM the other week, and my gcc-4.6.2 Cortex-M4 toolchain build got shot down by an ancient error -- Googling the error message turned up trouble reports a couple of years old. I've got to follow up on this -- after I clear a couple of unrelated things out of the way.

Mel.

Reply to
Mel Wilson
