Can anyone tell me the format behind "Revision/Release" notation? For example: Rev. 1.30.35, which is so common in software (and some hardware). Is it an arbitrary system? I've tried searching for it, but come up empty-handed. I've been able to query on this:
IEEE standard taxonomy for software engineering standards
but I keep coming up with sites that want me to pay for the article. Not to mention the fact that the above article will have way more information than I'm looking for. I can't think of what else to search on.
The 4.00.02b notation is crap, and I can't see any patterns in actual use.
As engineers, we use revision letters for code and for hardware. A piece of embedded firmware is 28E346 rev A; the next release is B. All the source files are named in the same pattern... assembly source is
28E346A.MAC and the associated FPGA config file might be 28C346A.RBT. The shippable binary might be 28E346A.ROM.
A hardware top assembly could be 28A346-3B, where -3 is a version (literally the "dash number") and B is the rev. This is basic aerospace notation.
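For illustration only, that naming pattern can be parsed mechanically. This is a sketch based on my reading of the scheme above (the exact field boundaries are my assumption, not a standard):

```python
import re

# Guessed structure of the part numbers described above:
#   firmware file:  <prefix><class letter><number><rev letter>.<ext>  e.g. 28E346A.MAC
#   hardware assy:  <prefix><class letter><number>-<dash number><rev> e.g. 28A346-3B
FIRMWARE = re.compile(r"^(\d+)([A-Z])(\d+)([A-Z])\.(\w+)$")
HARDWARE = re.compile(r"^(\d+)([A-Z])(\d+)-(\d+)([A-Z])$")

def parse_part(name):
    """Return a dict describing the part number, or None if it doesn't match."""
    m = FIRMWARE.match(name)
    if m:
        prefix, cls, num, rev, ext = m.groups()
        return {"kind": "firmware", "base": prefix + cls + num,
                "rev": rev, "ext": ext}
    m = HARDWARE.match(name)
    if m:
        prefix, cls, num, dash, rev = m.groups()
        return {"kind": "hardware", "base": prefix + cls + num,
                "dash": int(dash), "rev": rev}
    return None
```

With this, `parse_part("28E346A.MAC")` yields the base number 28E346 at rev A, and `parse_part("28A346-3B")` yields dash number 3 at rev B, which is handy for scripted checks that a release directory is internally consistent.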
Before it can define a product, hardware and firmware documentation is formally released to the company library, with a genuinely useful README file; that library is the only place manufacturing gets anything from. And it's all tested *before* it's released!
We also require that all software tools be identified, version controlled, and released to the library too. So 10 years from now we can run one batch file to regenerate the whole build, and know we'll get exactly the same firmware, byte for byte.
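That byte-for-byte check at the end of a regeneration run is easy to automate. A minimal sketch (the file names are hypothetical) that compares the rebuilt binary against the released one by hash:

```python
import hashlib

def sha256_of(path):
    """Hash a file in chunks so large ROM images don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_rebuild(released, rebuilt):
    """True iff the regenerated binary is byte-for-byte identical to the release."""
    return sha256_of(released) == sha256_of(rebuilt)
```

A direct byte compare would do as well; hashing just gives you a short fingerprint you can record in the release paperwork.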
I try to be a nice guy, and one of the ways I try to be a nice guy is to be sensitive to those times when a vendor really doesn't want me to be a customer. When a vendor starts throwing out subtle "we don't want your business" clues, I do my best to find an alternate source, and allow the grumpy vendor to go on with their business free of any risk of getting my money.
Dongles are, IMHO, one way that a vendor screams "we don't want your business, thank you".
Control systems and communications consulting
I've seen a lot of different variations between industries, companies within industries, etc. Sometimes you'll see it expressed as "VxRy" or some variation that makes it a little easier to visualize. Basically, there are 3 "levels" of change. The top level represents basic, fundamental issues such as platform, core, basic features & capabilities, etc. The next level might represent "secondary" features, added to the primary ones at the top level. The 3rd level would be changes based on problem corrections, that don't add any particular feature or capability.
As an example, a company I used to work for used "V" numbers to define the core processor & basic architecture generation of the system. V1 was the original, 8080-based system; V2 was 8086-based & fit the same cabinets, but included some major changes to the inter-processor communications & disk subsystems. V3 was a complete repackaging, with upgrades to several subsystems but retention of the same CPU & inter-processor comm. V4 was a consolidation & downsizing. V5 was another complete repackaging with several upgrades to subsystems.
Within each Version 'x' were several revisions. Each revision level introduced some new major features. Within each Revision 'y' were potentially many lettered "Sub-revisions" that included bug fixes & sometimes a new, minor feature. Sometimes, an R-level upgrade required a corresponding hardware and/or firmware upgrade to go with it.
Maybe, maybe not. In the above example, V2R05A & V3R05A would represent identical levels of feature enhancements & bug fixes, but due to the base hardware platform differences between V2 & V3, neither would run on the other platform. Also, V3R07A might not be backward-compatible with V3R05C due to a hardware/firmware change somewhere in the system, to accommodate features in R7 that weren't in R5. Generally, within a given VxRy there was universal compatibility (i.e. you could go back & forth between V3R8x & V3R8y with no problem other than the possible reintroduction of bugs; sometimes a subrelease to "fix" one bug created another, creating a need to accept the lesser bug temporarily & revert to a prior level). Different "V" levels may also have different "R" levels as well. Sometimes a bug appears entirely due to the change in "top" level, so there's no corresponding need to "fix" it in a prior level, since it doesn't exist there. Also, prior "top" levels may become obsolete, with both feature enhancements and bug fixes suspended.
The 'xxx.yyy.zzz' notation might represent similar levels, such as 'xxx' = base platform, core feature set, etc.; 'yyy' = feature additions; 'zzz' = bug fixes. Then again, it might not.
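One way to make that layered reading concrete, as a sketch (the handling of a trailing sub-revision letter, as in "4.00.02b", is my assumption; there is no standard):

```python
import re

def parse_version(s):
    """Split 'xxx.yyy.zzzb' into ((platform, feature, fix), sub): three integer
    levels as described above, plus an optional sub-revision letter."""
    m = re.match(r"^(\d+)\.(\d+)\.(\d+)([a-z]?)$", s)
    if m is None:
        raise ValueError("not an x.y.z version: %r" % s)
    x, y, z, sub = m.groups()
    return (int(x), int(y), int(z)), sub

# Tuples compare element by element, which matches the layered scheme:
# a platform bump outranks any number of feature or fix bumps.
```

For example, `parse_version("1.30.35") < parse_version("1.31.0")` holds, and `parse_version("2.0.0")` outranks any 1.y.z release, however large y and z get.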
It can be useful if you exercise discipline. There are a number of different patterns in use. They overlap, so with your attitude you wouldn't be able to discern them. There are no standards, so any "pattern of use" is up to the authors of the software.
But as soon as Marketing starts using them, they get pretty close to crap...
_Any_ revision control system is only as good as the discipline and integrity of the people running it. If someone in the decision chain decides to ship rev B even though it's crap and no one can replicate it, then the "A, B, C" rev system becomes crap. If _everyone_ in the decision chain decides to do their job right then a "number, dot" system will work just as well as an "A, B, C" system.
So I don't buy your assertions in the least.
What you describe is closer to the fuzzy 2.10.04b software convention. The mil/aerospace drawing control procedure isn't so much focused on functionality or features as on documenting *precisely* what was built, assuring that we understand the exact configuration of every unit in the field, and guaranteeing that we can *exactly* replicate, down to the last byte and tie-wrap, any previously built item. Anything less can kill people.
Only manufacturing builds and ships stuff. They get their docs only from the company library. Stuff gets into the library only by formal release. There are no sneak paths.
Most programmers like to simply toss the whole version control issue into a big automated VCS/SCM/bug tracking database thing. Lots of programmers are continually checking things in and out, changing stuff, and at some point marketing/management can't stand it any longer and spins off a release.
I wonder if the VCS itself can still be run 10 years later. I guess it doesn't matter, since hardly anybody supports 10-year old software. We sure as hell have to support 10-year old products, hardware and firmware included.
One important aspect of our doc control system is that nobody in the decision chain gets to arbitrarily ship selected code revs on any given hardware platform. Each shippable item is fully documented and controlled, and only a formally released ECO, signed by the President, allows exceptions.
This isn't hard to do. In fact it's easy, compared to the confusion that will result from not doing all this right.
So, let's change the venue, reiterate what you're saying, and analyze its validity:
"Apples are picked in the next farm over from me, placed in wicker baskets and hand carried to the farm stand. I buy them and they taste delicious."
"Oranges are picked all the way over there in Florida, put into cardboard crates along with sharp rocks, and are shipped by unsprung trucks on dirt roads. Then they are tossed off of these trucks into my local supermarket's produce aisle. I buy them and they are bruised up and taste terrible."
"Therefore stuff that comes in crates is crap and stuff that comes in wicker baskets is excellent."
Does that sound like your claim?
So I agree with your _evidence_, but I disagree with your _conclusion_. You may as well say that because you paint your office blue that only blue-painted offices can generate high quality work.
If you have a software code base that has more than a couple of dozen source files and more than one developer, then you simply aren't going to be able to adequately keep track of modifications with anything other than a good version control system. The stack-o-floppies works for one or two source files and one developer, but even then it works if and only if discipline is exerted in the development and archiving process.
(Note that I _always_ use version control for all my clients' projects, even though I'm the only developer and some of my clients' code bases are only a few files.)
Trying to maintain any sort of order in a software development effort that has multiple source files and multiple developers _without_ a good version control system is damn near impossible. If your source is distributed between a dozen people and resides on multiple directories, floppies, CDs, and memory sticks, then you'll never build the same thing twice, and you'll never get a coherent handle on what you're doing.
I would contend -- totally opposite of your assertion -- that you cannot produce high quality software above a certain size _without_ a VCS. But then, I think that if you tried it with the pile-o-floppy method and the level of discipline that you rightly require, you'd figure this out pretty quickly.
What gets you your high quality is the discipline that you exert. The language of the labeling is as superficial as the color of paint on your walls.
We manage software the same way we manage hardware, with the same revision control procedures, the same release procedures, the same rules, the same numbering system. We require that every shipped product will be exactly reproducible and maintainable for decades. Since disciplined hardware configuration management has been solidly successful for 60 years or so, and is mandatory when lives and gigabucks are involved, why not?
Thank goodness that most embedded product programs don't need an army of programmers and a VCS to keep them under control.
I don't like the Linux-style version numbering either, but it has the advantage of giving each of probably thousands of builds a unique identifier.
I use git for version control of everything. I standardized on it a couple of years ago--previously I used a set of scripts to generate zipfiles from file lists, which worked fine since almost all my stuff is done solo. I use it for code, schematics, docs, books, articles, patent disclosures, drawings, just about everything.
git is amazingly fast and easy to use, runs on most platforms (though you do need cygwin if you're running Windows), and--crucially--is easy to debug and back up.
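Along those lines, git's own `git describe --tags --dirty` output makes a decent unique build identifier. A sketch of splitting it into its parts (the format, per git's documentation, is <tag>-<commits-since-tag>-g<short-hash>, optionally suffixed with -dirty; the parse is only heuristic if tag names themselves contain that pattern):

```python
import re

# `git describe --tags --dirty` emits either a bare tag (when the checkout
# is exactly on a tag) or <tag>-<N>-g<hash>, optionally ending in -dirty.
DESCRIBE = re.compile(
    r"^(?P<tag>.+?)-(?P<ahead>\d+)-g(?P<hash>[0-9a-f]+)(?P<dirty>-dirty)?$")

def parse_describe(s):
    """Break a git-describe string into tag, commits-ahead, hash, dirty flag."""
    m = DESCRIBE.match(s)
    if m is None:  # exactly on a tag: no -N-g<hash> suffix present
        dirty = s.endswith("-dirty")
        return {"tag": s[:-6] if dirty else s, "ahead": 0,
                "hash": None, "dirty": dirty}
    return {"tag": m.group("tag"), "ahead": int(m.group("ahead")),
            "hash": m.group("hash"), "dirty": m.group("dirty") is not None}
```

A build script can embed the raw string in the binary and use a parser like this on the support side to work out exactly which sources produced a unit in the field.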
Backing up the tools is a bit more problematical, though--you often need the right OS revision as well, due to API and library changes. That's one of the wonders of self-contained, statically linked executables--no library worries. All my simulation apps are console-based for the same reason.