In search of (agile?) change management resource

I am looking for a tool to facilitate maintenance of a firmware-based product, but I don't know whether it exists, nor do I have a name for it. Ideally it would be vendor agnostic as I never wish to be covenant-married to any particular processor, language, or IDE.

I will illustrate a typical problem (greatly simplified) from a real-life example.

Let's say my executable comes from 2 source files (in C), holding 3 and 4 functions, respectively:

Rev 1.6:
        +------ File1.c
        |  +---- File2.c
        |  |
        -  -
        x  .
        x  .   func1a()
        x  .   func1b()
        x  .   func1c()
        |  |
        .  x
        .  x   func2a()
        .  x   func2b()
        .  x   func2c()
        .  x   func2d()

The lines refer to the declaration section at the top of the file, which is outside all function definitions.

This is the way it looked when I released revision R1.6. My manager wanted me to follow that up by adding 2 new features:

H - New hardware: The new platform has more I/O ports and more complex signal processing.

I - Input Sampling: Due to a misunderstood requirement, I had been sampling 2 input signals at the same frequency. When one of them needed more noise filtering while the other needed quicker response time, I had to split them into separate processing pipes, with new names (roughly as sketched below).
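
To give a flavor of what 'I' did to the code, here is a minimal sketch of the split; every name and filter constant below is illustrative, not my real source:

/* Before the change, both signals went through one routine.      */
/* After it: two pipes, each with its own first-order low-pass    */
/* filter; the divisor trades noise rejection against response.   */

static int filt_a, filt_b;          /* per-pipe filter state       */

int sample_input_a(int raw)         /* needs heavy noise filtering */
{
    filt_a += (raw - filt_a) / 8;   /* slow, smooth                */
    return filt_a;
}

int sample_input_b(int raw)         /* needs quick response        */
{
    filt_b += (raw - filt_b) / 2;   /* light filtering, fast       */
    return filt_b;
}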

In my revision control system, I give it a prototype revision number: R1.6.HI.

The changes ripple through the design in a crazy-quilt kind of pattern. Along the way, I have to add a new function to File2.c, and other functions therein grow until File2.c becomes unmanageably large. I have to split it into 2 files of 3 & 2 functions each, keeping the old function names:

R1.6.HI - R1.6
(x = lives in that file, I/H = changed for Input Sampling / New hardware,
 --> = moved here, + = new function)

        +------ File1.c
        |  +---- File20.c
        |  |  +-- File21.c
        |  |  |
        -  -  -
        x  .  .
        x  .  .    func1a()    .  .
        x  .  .    func1b()    .  .
        x  .  .    func1c()    I  .
        |  |  |
        .  x  .
        .  x  .    func2a()    .  .
        .  x  .    func2b()    .  H
        .  .  x    func2c()    I  H
        .  .  -->  func2d()    I  H
        .  .  +    func21e()   I  .

Say that after 3-4 weeks of effort I reach a point where this builds without error and appears to run correctly, but I have not completed regression testing and don't consider it releasable into the wild.

At this point the service manager returns from the field saying one of the user inputs needs an abnormally long debounce time to fit industry standards. My manager says this request trumps what I am working on, so I must stop what I am doing and get it out NOW.

I must make my changes to R1.6 (the latest release). This time, File1.c becomes a monster and I split it.

R1.6.D - R1.6
(D = changed for Debounce; other marks as above)

        +------ File10.c
        |  +---- File11.c
        |  |  +-- File2.c
        |  |  |
        -  -  -
        x  .  .
        x  .  .    func1a()    .
        x  .  .    func1b()    D
        .  --> .   func1c()    D
        .  +   .   func11d()   D
        |  |  |
        .  .  x
        .  .  x    func2a()    D
        .  .  x    func2b()    .
        .  .  x    func2c()    .
        .  .  x    func2d()    D

I get that tested, release it as R1.7, and return to my prior task, which now must be patched into R1.7.

At this point, what I really need is a utility that would allow me to implement the top-level command:

R1.7.HI = R1.7 + (R1.6.HI - R1.6)

where each variable above is a rather complex, multi-level structure of base type char, but having elements involving files and functions and lines. Without it, I must either:

- Throw away the 3 man-weeks I spent creating R1.6.HI and repeat that task on a new base, or
- Attempt to execute the command above manually (keeping track of all the levels can get tedious).
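
Expressed with ordinary file-level tools, the command above is roughly a recursive diff followed by a patch (the directory names here are just my revision labels, checked out side by side):

    diff -ru R1.6 R1.6.HI > HI.patch   # everything H and I added, file by file
    cd R1.7
    patch -p1 < HI.patch               # replay those hunks on the new base

That only works as long as every hunk still finds the file and context it expects.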

A couple of years ago, I thought a decent revision control system ought to be able to handle this, since the users can be forced to supply all the info necessary to drive it at check-in time.

The most difficult sub-task is isolating changes to individual functions, but most revision control systems come with a file differencing utility that goes a long way toward providing this feature. I have yet to find one, however, that can handle functions migrating between files due to refactoring.

Although I made this example extremely simple, just representing these relationships is difficult in the 2-D system I have to work with.

Furthermore, writing this down makes me realize my problem is in the tools, for which the smallest unit of analysis is the file; whereas in my source deck, the "atoms" are really functions.

So I'll stop the example here and ask:
- Does such a tool exist?
- Is there a generic name for it?
- Where do I learn more about this topic?

Any ideas?

============================================================
Gary Lynch            | To send mail, change no$pam
gary.lynch@no$pam.org | in my domain name to ieee
============================================================

Reply to
Gary Lynch

[...]

It takes a bit more than that.

This would be a good read to start with to see if any of the Configuration Management tools might offer some sort of assistance.

However, Configuration Management alone may not help you out with this sort of problem, and certainly not as simply as you seem to imply you would like. Perhaps taking some time to think about the way you structure your systems is called for. Would Object-Oriented, Component-Oriented or similar techniques be helpful? Would a different architectural approach to your systems be useful?

Based on just a glance through the problems you pose, I would suggest that re-assessing the architecture of your systems, and employing reasonable configuration management tools in your development process, may be the answers to your needs.

If you deal with the code in terms of atomic functions, you should probably ensure that the atomic functions collected in a file are closely related, such that they can be considered aspects of the same idea.

Take some time to think about the five views of your system design:
* The Logical View
* The Process View
* The Development View
* The Physical View
* The Use Scenarios

These are explained in many System Development Texts (especially those dealing with IEC61499).

--
********************************************************************
Paul E. Bennett...............
Reply to
Paul E. Bennett

I have heard the Git fanatics claim that it is much better at this sort of thing than SVN or any other traditional revision control system, because "GIT works with patches".

It sounds like one of those wild-eyed "XYZ technology will save the world -- and make your girlfriend smell nicer, too!", but it may be worth looking into.
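
If you do chase it down: your R1.7.HI = R1.7 + (R1.6.HI - R1.6) would, as far as I can tell, map onto a rebase (the tag and branch names below are yours, not anything Git supplies):

    # replay everything committed on R1.6.HI since R1.6 onto R1.7
    git rebase --onto R1.7 R1.6 R1.6.HI

Whether that copes with functions wandering between files any better than a plain merge, I can't say.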

A more traditional solution would be to pay more attention to how you are architecting your source files, to keep related stuff related and unrelated stuff in separate files.

I suspect -- all GIT-fanatic claims taken with due consideration -- that it's a tough nut to crack, no matter how you approach it.

--
Tim Wescott
Control system and signal processing consulting
Reply to
Tim Wescott

CM Synergy from Telelogic seems to fit the bill for the most part; since changes are always done in the context of a task, it is relatively easy to work on different tasks in parallel (by the same or different persons) and to include or exclude specific tasks for a specific release. However, merging two parallel tasks that touch the same objects (= files), with one task involving a complete restructuring of the code, will be anything but automatic; it will require quite a bit of effort and attention from the one doing the merge.

Though CM Synergy is quite powerful, especially when used in combination with ChangeSynergy, I'm reluctant to recommend it because it's a rather heavy/complex tool with a horrible user interface, and it can get rather slow when the project gets large. Though it can handle complex scenarios, managing the tool becomes a full-time job very quickly; on projects where I have seen it used, at least one person was dedicated to configuration management. For small projects it is probably overkill.

- Is there a generic name for it?

Configuration Management.

I guess googling for Configuration Management turns up more than you could ever read. (Sorry, I don't have any specific links at hand.)

Reply to
Dombo

Ah yes, the Holy Grail of software development...some call it the Magic Bullet.

Many years ago I worked on a mainframe OS, which had a build control system working at the function level. I have not seen anything like it since. All VCS systems I know work at the file level, and since this is how C works there is little incentive to invent something better.

There are plenty of VCS systems that allow all manner of branches and sub-branches, but I have never found anything to help much with the task of integrating them back together, apart from a good merge tool and man hours. It is best to avoid the situation by efficient planning of the development - this may equate to what I call "managing the manager".

Reply to
Bob

[...]

The root of your problem would seem to be the very way you organized your functions into files.

The fact that these two changes each affected several functions spread over the same two source files indicates that the module structure was in severe need of re-factoring before this all began.

At the very least, it would have been highly advisable to disentangle the changes needed for these two features. Otherwise you'll have a serious problem on your hands as soon as somebody wants 'I' on the pre-'H' hardware. I.e. there should have been separate '.H' and '.I' versions of each file, and the '.HI' version should have been arrived at by merging those two.

I.e. you need to merge changes from two different branches (1.7 and 1.6.HI) off the same common ancestor (1.6) to arrive at a new version --- what I tend to call a three-way merge.

Any revision control system worth its salt would have a command for that, or at least allow you to just use stand-alone tools for that purpose --- on a file-by-file level, that is. Moving stuff from one file to the other tends to prevent usage of any such tools, so your best bet would be either "Don't do that, then!", or at least, if you have to, do it _before_ you make substantial changes to the functionality of the code.
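
For a single file, the stand-alone route can be as simple as GNU diff3; the file names below are just illustrative copies of File1.c taken from 1.6.HI, 1.6 and 1.7 respectively:

    # three-way merge: mine, common ancestor, theirs, with conflict markers
    diff3 -m File1.c.HI File1.c.1.6 File1.c.1.7 > File1.c.merged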

You'll likely be down to generating diffs and editing those into the individual files somewhat manually, or at least interactively.

There's no particularly strong reason for those types of atoms to be so different. "One file per function" is not always a good idea, but for your way of coding, it might be better than the way you appear to spread aspects of functionality across functions, and functions across files, right now.

Reply to
Hans-Bernhard Bröker

[snip]

I agree. Configuration management needs to be in terms of the smallest unit that might need to be configured.

If you carry this out to the extreme, you need to configure arguably individual tokens in the language (and if you go nuts, individual characters). That's difficult to do as a practical matter, so somewhere you have to draw the line about how much stuff is in a single group and is managed by the configuration management tool. All other configuration management below that level has to be handled by (preprocessor) conditionals :-{
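
For instance (all names below invented purely for illustration), the sub-file configuration ends up looking like this:

/* scan.c -- one file, two hardware configurations                 */

#ifdef NEW_HW                      /* the 'H' platform             */
#define NUM_INPUTS  8              /* extra I/O ports              */
#else
#define NUM_INPUTS  4              /* original board               */
#endif

extern int read_port(int port);    /* assumed to exist elsewhere   */

void scan_inputs(void)
{
    int i;
    for (i = 0; i < NUM_INPUTS; i++)
        (void)read_port(i);        /* poll every input on this build */
}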

Present tools do this by implicitly assuming that "file" is the right unit to be managed, and that's much of your complaint. The real problem here is that the software engineer decides arbitrarily what goes into each file, leaving the configuration management tool not a lot of choice.

Your suggestion of managing "functions" (I would have said "declarations") is IMHO a better one, but not one the existing configuration management tools are likely to adopt. Part of the problem here is that configuration tools don't have any sense of "boundaries" in terms of language constructs.

One step in that direction would be "diff" tools that do understand language entities (identifiers, expressions, statements, blocks, functions) and the actions applied to them (insert, replace, delete, move, copy, rename). See

formatting link

--
Ira Baxter, CTO
www.semanticdesigns.com
Reply to
Ira Baxter
