Dilbert strip

Scott Adams really understands the realities of engineering.


I'm designing a really hairy thing right now. My choices are to create a schematic and assume that people will reverse engineer it some day if they need to understand it, or I can really document the choices and the math. I don't want to clutter up the schematic (it would be ugly and wouldn't fit anyhow) so documenting will have to be a separate document, with references to outside things like data sheets and roughly 20 Spice sims and app notes.

(I sometimes see other people's schematics with random notes, like "remember to xxxx", which I think is tacky.)

I also need to send detailed architectural concepts to my FPGA folks (one person will do the ghastly SOC design, another will do a small/fast auxiliary FPGA) and to my embedded C guy, and possibly to a contract embedded web page designer (JavaScript programming mostly annoys my software guy) if we can find one who can understand the instrument.

Communicating with the code people could be done by email, or in a lot of whiteboard sessions, which I can at least photograph.

So the issue is, should I create (and keep accurate) a gigantic design notes document?

The VHDL folks are opposed to block diagrams, but I think I should do them anyhow, so future engineers don't have to decode thousands of lines of VHDL to understand what's going on.

What do you do to organize and remember a complex design that involves hardware and software, and several circuit/mechanical/software engineers? Will someone (even you) be able to make a change three years from now without a lot of detective work?

I certainly enjoy drawing schematics and exploding parts more than typing Word docs, but this product should last many years and needs to be maintained.

We often work with other, bigger companies on joint designs. They seem to have lots of meetings, go off and design things, and keep no record, and certainly no accurate record, of the design process or the higher-level architecture. After a few years of turnover, nobody around remembers the assumptions. New engineers typically don't even get the emails created by their predecessors. Sometimes we are their only available institutional memory!

--

John Larkin         Highland Technology, Inc 

lunatic fringe electronics
Reply to
John Larkin

An interesting question. The modern software approach is TDD, wherein the tests /are/ the documentation. That can have some advantages over previous softie practices, but the softies look blank when you mention "you can't test quality into a product". Spit.

I remember, gulp, 30 years ago, finding out how the Boeing 777 was developed. Basically they had one enormous hierarchical requirements specification, and a separate enormous implementation architecture specification. Naturally there was only limited hierarchical correspondence between the two hierarchies.

The only bit they chose to automate was the database relating where each requirement was implemented.

Obviously that's not an answer to your question, but it is an interesting (and perhaps surprising) perspective.

Reply to
Tom Gardner

Like the people who say "the code IS the comments." All 58,000 lines of it.

--

John Larkin         Highland Technology, Inc 

lunatic fringe electronics
Reply to
John Larkin

There's worse. I've seen people strip all comments out of the code on the basis that sooner or later the comments will be out-of-sync with the code. While that can be true, they did it because their conslutants (sic) told them that was The One True Way.

If they had bothered to read the comments, they would have seen they described /why/ the code was like that, as well as how to use and not use it.

Talk about throwing the baby out with the bathwater.

Reply to
Tom Gardner

That's like programmers calling the comment lines in their code "documentation". On the topic of hairy things, that makes my neck hair stand up.

What I found to be really useful is online conferencing where you can share documents and give mouse and keyboard control to everyone. It can often be restricted to the screen at hand so nobody can open your email while you get a coffee. I like GoToMeeting, which we use with a start-up. Much faster than back-and-forth email.

What's still missing is interconnected virtual whiteboards.

If it's a large or hairy design, by all means yes. It's not just for all the other folks. I often have to revisit a design from a few years ago because the client wants some new features. The original module spec greatly helps me get back into that particular design.

On any schematic more complex than postcard-size (and sometimes even then) I write a module spec whether the client wants it or not. That contains an executive summary, followed by the specifications (similar to the tables in datasheets), then a chapter with a block diagram and an architecture explanation. The next chapter is a big one with sub-chapters and detailed descriptions of every circuit in there along with schematic snippets, why I designed it this way and not the other way, and so on. Then come the software guidelines where register settings, code architecture and all that are explained. Next is the chapter with layout guidelines, outlining RF-critical stuff, high current traces, maybe an example layout of a really hairy section. Next up is the mechanical stuff. Last is a chapter about how the circuitry could be redesigned in the future, like when production volume goes up big time and EE time pales in comparison to saving $10. Depending on what it is there's also a chapter about regulatory.

In medical or aerospace that can get people close to a major lawsuit or into a pickle with the Federales.

I've had that, too. Having grown up in med tech where a design history file is mandatory I document a lot. One day the phone rang. "We have a major EMI issue and seem to have lost the documentation for the XYZ board" ... after several minutes it dawned on me that this was an acquirer of a client from when I was really young. Luckily they had not pulled the trigger on the doc destruction order after the design back then so I could piece it back together for them. And get the EMI fixed.

--
Regards, Joerg 

http://www.analogconsultants.com/
Reply to
Joerg

Welcome to the modern world where nobody gives a shit because they want something "new" yesterday. The throw-away "economy". No more old-world QUALITY counts mentality.

Reply to
Robert Baer

I have seen that happen. So you have a few hundred modules that have no comments, no statement of who wrote it when, no statement of what this thing is or does.

That's my current concern: how much should I document the current design, beyond the obvious schematic, BOM, and source code?

We have old designs where we ask "why the hell did he (or we) do that?"

--

John Larkin         Highland Technology, Inc 
picosecond timing   precision measurement  

jlarkin att highlandtechnology dott com 
http://www.highlandtechnology.com
Reply to
John Larkin

Oh, the source control carefully captures who makes what changes. /Why/ is a different issue, of course!

Source code control is easy with text; being able to have the differences highlighted & compared is the touchstone consideration. That's a serious problem with most schematics and other visual representations.

Large FPGA toolsets are also problematic, since it is damn near impossible to discover exactly which files are and aren't required to recreate V5.6.23 of a design.

If you can get a client to tell you, that punts the problem to them.

If not, then clearly it isn't (going to be) important to the client, so you are only documenting it for yourself. At that point it is a relatively standard question of personal engineering judgement. Part of the judgement can be based on the usual considerations:
- being run over by a bus,
- I'm away on holiday,
- I'm too busy or bored to revisit it, so I'll get someone else to do it.

I've found it sufficient to mentally put myself in the other person's shoes, and to write down what I would like to be told about the design.

Do I get it "right"? No, I do not. But then I don't get it very wrong either.

Reply to
Tom Gardner

I'm employed in a big company

We have design rules for all steps in the development process, and we cannot launch a product without a thorough review of a design journal, which must document all design considerations. The process is rigid, at least for HW and Mechanics. For SW, I think they have some of the documentation generated from the actual code.

Apart from the design journal, we have a lot of other production oriented documentation that must be completed. When the product is finally released for production, the continuous improvement department inherits the documentation and must keep it up to date.

I am more fond of a dynamic work environment, with less documentation. I hope that it will shift more to that, to focus on more actual electronics work, and less documentation.

Cheers

Klaus

Reply to
klaus.kragelund

I stopped using Subversion because I couldn't figure out a way to use it the way I learned version control. Each file should have its own line of revisions. When you wish to establish a baseline, a label is added to all files indicating this is version X. In Subversion, every time a file is checked in, the entire file set gets a new revision number. Obviously there is a way to use this, but it just seems alien to me.

But the point is a version control system should provide you the means of building version X... IF you also track your tools. With FPGAs there can be significant changes in the results if you use different versions of the tools to compile the same code.

What about looking at two year old code and thinking, "What the hell does this do"?

--

Rick C
Reply to
rickman

I worked on a large government project where the comments *were* the documentation. The programmers were required to document in their code to some specification. These comments were added in a manner that they could be automatically extracted and pulled into a document. Believe it or not, my huge contribution was adding the page numbers automatically by computer! That's right, we had been adding them by cutting and pasting paper!!! The guy in charge of the documentation was not a computer guy by any means and didn't want me to work on it. So I had to sneak the work in. This was all coded in VMS job control language too! lol
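A modern analogue of that extract-the-comments approach is Doxygen-style markup, where a tool pulls structured comments straight into a document. A minimal sketch in C; the routine and its parameters are invented for illustration:

/**
 * @brief  Convert a raw thermocouple reading to degrees Celsius.
 *
 * @param  raw_counts  24-bit ADC reading, straight off the converter
 * @param  cj_temp_c   cold-junction temperature in degrees C
 * @return             hot-junction temperature in degrees C
 *
 * The linearisation polynomial and its error budget are described in
 * the module spec, not here.
 */
double tc_to_celsius(long raw_counts, double cj_temp_c);

Running the extractor over the source tree then produces the document, page numbers and all, with no cutting and pasting of paper.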

Sounds like an invention waiting to happen.

--

Rick C
Reply to
rickman

So how was the software architecture explained at the top level?

Yikes ...

You probably were their hero for a while :-)

In my line of work, documenting via source code comments would not fly. The Federales would have us over the barrel. But the worst case would be a liability event followed by legal proceedings. "What are the interlock mechanisms preventing the wombombulator from coming on when widget loop B stalls?" ... "Ahm, ahem, well, let me research that and report back later".

I've suggested it many times. Once they even invited me as a paid adviser but I didn't bite because progress in that world is way too slow for my taste.

--
Regards, Joerg 

http://www.analogconsultants.com/
Reply to
Joerg

I've been involved in both extremes: design by a bunch of back-and-forth emails, and documenting every detail down to the torque value for the mounting screws. I have found the points in between to work well. My goal is to be able to hire someone and not have to spend days of my time bringing them up to speed by deciphering notes in a 3-ring binder.

I start with PowerPoint or something like it. Big picture bullet points - a story of what the product is. Block diagrams work well here. I find it also helps new hires understand the individual parts of the overall product. Don't delve into chips and Rs and Cs here. What are you building, why, performance goals, environmental requirements, etc. If you are doing custom enclosures this should be in the big picture. Maybe the requirement is a 2U 19in rack or VME chassis or ??.

You have "top" level entities anyway, so the blocks are there. The docs should not get into "how" they are implementing the function in HDL but the why and what the ins and outs are. Some modules may need more detail documentation, say a crypto function or some top level goals for a memory controller. PCIe must support so many lanes, etc. This is all part of the fpga document(s).

This is where you really need an interface spec that everyone agrees to beforehand, before any line of code (C or HDL) is written. Register definitions, DMA operations, memory configs, I/O operations, other functions (can SW reset the FPGA, and how). Those kinds of things. With this documented your SW guys should be able to come up with a sim environment where they can start writing code long before a working FPGA is ever in place. You mentioned JavaScript or a web interface. Your RESTful API needs to be in an interface spec. If one knows the embedded code will honor the interface spec, a person can rather quickly mock up a back end and start work on the JS UI.
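As a rough illustration of the register-definition part, the agreed map can be boiled down to a single shared C header that both the FPGA and embedded people code against. Every name, address, and bit below is invented for the example:

/* widget_regs.h - hypothetical shared register map, agreed before coding starts */
#include <stdint.h>

#define WIDGET_BASE         0x40010000u

/* Register offsets from WIDGET_BASE */
#define WIDGET_REG_CTRL     0x00u   /* control: run/stop, soft reset   */
#define WIDGET_REG_STATUS   0x04u   /* status: busy, error flags       */
#define WIDGET_REG_DMA_SRC  0x08u   /* DMA source address              */
#define WIDGET_REG_DMA_LEN  0x0Cu   /* DMA transfer length in bytes    */

/* CTRL bit fields */
#define WIDGET_CTRL_RUN     (1u << 0)   /* start acquisition           */
#define WIDGET_CTRL_SRST    (1u << 1)   /* soft reset of the FPGA core */

static inline void widget_write(uint32_t offset, uint32_t value)
{
    *(volatile uint32_t *)(WIDGET_BASE + offset) = value;
}

With that pinned down, the SW side can point widget_write() at a simulated register file and start coding long before there is a working bitstream.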

You also need the mechanical docs. Unless you are bending metal in-house you will have some type of requirements document anyway. My most recent was a directory with the 2-page design doc (size, openings, metal thickness, special fasteners) and a bunch of SolidWorks eDrawings (of course we also had the full file set).

I think notes on schematics are important, but not a note for notes' sake or a note that says "fix this someday". That goes into the bug tracking system. One product I did had an input from a companion device. The note said if the input was zero the other device was not present; if between 2V and 4V the other device was present but its main power was off line; if > 4 volts the device was present, its power was good, and it could power share if need be. That's a note block on the schematic. Makes it clear to the reader what is going on. I like to document cross-sheet signals if they are doing something special. You don't have to chase the schematic sheets to find the source. Design review should catch errors in the notes.
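To show how such a note carries straight through to the firmware, here is a hypothetical decode of that presence input; the function name and the way the voltage arrives are assumptions for the example, not the actual product:

/* Companion device presence, per the schematic note:
 *   ~0 V          -> not present
 *   2 V to 4 V    -> present, but its main power is off line
 *   above 4 V     -> present, power good, may power share
 */
typedef enum {
    COMPANION_ABSENT,
    COMPANION_PRESENT_POWER_OFF,
    COMPANION_PRESENT_POWER_GOOD
} companion_state_t;

companion_state_t companion_decode(double volts)
{
    if (volts > 4.0)
        return COMPANION_PRESENT_POWER_GOOD;
    if (volts >= 2.0)
        return COMPANION_PRESENT_POWER_OFF;
    return COMPANION_ABSENT;   /* below 2 V: treat as not present */
}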

This is where the bug tracking/CM system comes into play. It does not have to be some mega bucks system. Bugs and change requests are tracked in the "bug" tracker. When to pull the trigger on "everything must now go through the tracking system" is debatable. Some people like to establish a major milestone, say initial board layout is done or the mechanical is done. When layout sends the first board out to fab, from that point on any board changes go through CM. Mechanical would also go under CM at that point because a mech change may impact the layout. FPGA would probably be under CM now as a pin change would impact layout. The key is that the changes are tracked. FPGA folks need to swap a pin, they open a ticket for layout. Layout now has an assigned task and can start working with the FPGA folks. No missed emails or water cooler conversation. You also know and can go intervene if necessary.

A very important tracking ticket is a ticket that has sub-tasks that must be signed off for a release. Doc review, qa testing, emc testing, etc. Only when all the sub-tasks are signed off can the main release ticket be closed and the product shipped.

As an owner/manager there is a fine line between bogging down the design process and having good documentation. I look at it as: if I lose an engineer, lead or entry level, how much of a hit is my business going to take? How fast can we get the design process back on track with a new hire? Is it months or days? Only you can decide, and that directly influences the doc process.

--
Chisolm 
Republic of Texas
Reply to
Joe Chisolm

My condolences! :>

Ask one of your software guys to point to the *first* instruction that is executed in "product A" (should be easy, right? sort of like finding where power comes into a schematic... :> )

Reply to
Don Y

I require that a product release be real files, not some fuzzy stuff lurking in a version control system database. I have no confidence that the VCS will still be up and correct six years from now.

People can use VCS during development (I don't) but the deliverable is real files in real folders.

Programmers love to use fancy tools, and preferentially write those tools themselves, to automate a little documentation grunt work. The tools can't work well, because they don't know why things are done.

It's not so good with PC boards! We release a "Rev_A_to_B" readme file with every PCB release.

Or what mouse clicks were used to build it.

For standard products, the client is us.

--

John Larkin         Highland Technology, Inc 
picosecond timing   precision measurement  

jlarkin att highlandtechnology dott com 
http://www.highlandtechnology.com
Reply to
John Larkin

You don't think there was a top level program file? I don't know the details. This was many years ago and I was not in any sort of management capacity.

I was using it to make my work not only quicker, but more sane and he saw what I was doing. No looking back after that. Everyone shifted gears and we were off.

Once some pages were generated, we sometimes had to insert entire sections as page 127a, 127b, etc. Once we got to 127z I stepped it to 127aa. He thought next would be 127bb, 127cc, while I had coded it as 127ab, 127ac. But he saw the wisdom of that too.
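The suffix is really just a letters-only counter; a short sketch of how that stepping might be coded in C (the function and its buffer handling are assumptions, not what was in the VMS scripts):

#include <string.h>

/* Advance an alphabetic page suffix in place: "z" -> "aa", "aa" -> "ab",
 * "az" -> "ba".  The buffer must have room for one extra character. */
void next_suffix(char *s)
{
    int i = (int)strlen(s) - 1;
    while (i >= 0) {
        if (s[i] != 'z') {       /* no carry needed at this position */
            s[i]++;
            return;
        }
        s[i] = 'a';              /* carry into the next position left */
        i--;
    }
    /* carried off the left end: grow by one letter, e.g. "zz" -> "aaa" */
    memmove(s + 1, s, strlen(s) + 1);
    s[0] = 'a';
}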

Mostly we got along well because I had no ulterior motives and was not trying to screw anyone.

It wasn't so much that the comments served as documentation. The documentation was done in the comments of the files. I believe they may have created the comments first from the requirements as well. Then the routine was coded to support the comments. I never read them so I don't know much about how good they were. But they got a lot of work done and the project was on time and on budget (mostly, after all, the government won't stop modifying as you work).

??? I would love to work on an idea like that as a consultant.

--

Rick C
Reply to
rickman

That's roughly what I did. It kept on getting re-written as the design progressed, and the implicit requirements became obvious, sometimes generating contradictions that had to be resolved.

This bit usually got written by the specific engineer designing the module. Getting them to write it required persistence, and management always wanted them to be working on the next project rather than documenting the existing one.

Documentation could be measured, but never seems to be. You don't get money for having it or selling it, though really sophisticated customers could insist on it.

Lawsuits are driven by lawyers, who understand documentation as a general concept, but can't extract anything actually useful from technical documentation.

When EMI sued RCA about colour television, the lawyers found a very useful difference in the language used - one side talked about quadrature modulation and the other about sine- and cosine-modulation, and the judges couldn't be persuaded that they were the same thing ...

Cambridge Instruments threw a few instances like that my way. I had to piece together what the long-departed designers had had in mind, strip out what the ingenious final test engineers had put in to make their lives easier, and create a modern equivalent of the original design (which wasn't always as simple as the original).

--
Bill Sloman, Sydney
Reply to
bill.sloman


SysML

Bye Jack

Reply to
jack4747

Dataflow diagrams work for some general systems problems and you really only need to document the overall architecture and any really sneaky tricks used that won't be obvious to later engineers. Or to you when you come back to it in five years time to do a major revision.

One trick when being forced to supply the source code to an untrusted customer site is to remove all the comments beforehand.

One particularly nasty one I recall was in a 1980s Unix kernel for the 68k, where the reason for the exact choice of instructions was deliberately not commented by the author, with the intention of generating business when someone "optimised" it without understanding why it was like that.

The key to comments is that they should explain what the routine expects as input, what it does (sometimes how, with references), and what it yields as output. On anything significant the top of the source file includes, as comments, a revision history for bugfixes and improvements.
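A minimal sketch of that sort of comment in C; the routine, its arguments, and the history entries are invented for the example:

/*
 * resample_trace() - resample a captured waveform onto a uniform grid
 *
 * Inputs:  src    raw samples, not necessarily uniformly spaced
 *          src_n  number of raw samples
 *          dst_n  number of output points wanted
 * Output:  dst    dst_n linearly interpolated samples
 * Returns: 0 on success, -1 if src_n < 2
 *
 * Note:    linear interpolation only - adequate for display, not for
 *          the FFT path.
 *
 * History: 1.0  initial version
 *          1.1  fixed off-by-one at the last output point
 */
int resample_trace(const double *src, int src_n, double *dst, int dst_n);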

The sorts of comments that really drive me crazy are:

x++; // increment x

fft2d(x,n,1,0); // call fft2d

sooner or later you find something insidious like:

x = x+l; // add one to x (hint: *l*ook very carefully)

--
Regards, 
Martin Brown
Reply to
Martin Brown

I presume you use a non-proprietary source code control system. If you rely on a commercial tool, then you are hostage to the company going out of business, or being bought by Oracle (so the new licence costs 25% more than your budget), or the "Microsoft Plays-For-Sure syndrome".

In either case, VMs can help; see below.

For handing over to an external client, maybe, depending on what the client wants. Internally that's less clear, but I wouldn't argue against it.

When using a source code control system, it is possible to tag/branch (mechanism depends on the specific SCCS) the complete set of files used to configure/specify/define the release.

Recreating the release consists of pulling the set of files with that tag.

More interesting is the strategy you choose for what's in the mainline and branches. There are a couple of reasonable alternatives; which is better depends on the product's lifecycle.

Well, you have to carefully choose your tools, and then learn how to use them - no surprises there. For software, that isn't difficult and should avoid most of the NIH triumphant reinvention of a square wheel.

In that respect electronic engineers are far worse than softies, for the same reason that softies create "novel" hardware.

Yeah; that sucks. I don't know a better way, though.

Oh god, yes.

All serious software development is done with tools that are command-line scriptable, even if there is a GUI invoking those tools. That's vital for unattended compilation, automated tests, packaging, release and distribution.

"Software" includes FPGA source code - and that can be a problem with vendors' megatools that spew files everywhere; you need to be able to identify all the "root" files and the "dependent" files, and only keep the "root" files in the SCCS.

Keeping the entire toolchain together and operable is an issue. Fortunately that can be largely solved by having the toolset running inside a virtual machine. The entire virtual machine (i.e. the files in the host operating system) can then be archived to DVD etc, and can be retrieved and instantly re-run later, including on a different computer.

Reply to
Tom Gardner
