What Software Engineering Process is best suited for Embedded Projects

Let me zip up my flame suit... there, OK, much better.

First, you need a domain expert, and the engineers will need a very good understanding of what the project is trying to achieve. Second, I suggest you get QA, or someone whose full-time job is related to verification and validation, involved at the beginning. They need to understand the problem as well as the engineers do (how else will they write good tests?).

Demand that your engineers validate their code with unit testing before it gets released into a build. Don't let everyone dump their stuff into a single development branch. Reserve a "stable" branch for QA to use, and only add to it when QA is ready to test new features/fixes.

Don't expect anything great from tools that make UML diagrams. Keep the paperwork to a minimum, and only document something if after reading it, an engineer can say "Yeah, that is actually useful for me to know, and it'll help me in 6 months when I come back to this area of the code".

Get your technical writer involved immediately. They need to understand the problem too, otherwise they'll be nagging your engineers later when they're busy fixing bugs.

The most important thing is to push back as much scope and as many features as you possibly can. Don't volunteer to prototype or demo anything unless it's demanded. I can't stress enough how much these activities distract engineers from getting core functionality completed and tested. Demos tend to need bells and whistles, and they generate new demands that marketing suddenly seems to think are must-haves for the first release. Stay focused on the smallest subset of critical functions that you can get away with.

Now here's where I need the flame suit (and I am a developer, btw). Every developer should have the exact same environment. Everyone uses the same tools, the same build scripts, etc. Get everyone their own dedicated development box (separate from their R&D and general everyday tasks). Avoid proprietary methods from the start. There will be a lot of whining over IDEs and editors. Tell them to shut it and deal with it. Give them x number of days to decide which tools they need to use, but after that, everyone does it the same way. If you have to bring new people onto the project, it will be much easier to get them up to speed, and they can get help from any of the team members, not the same guy over and over.

I guess to summarize:

  1. Minimize scope
  2. Mandate a clear understanding, by all members of the team, of what is to be achieved
  3. Begin QA activities on day 1
  4. All developers use the same tools/compilers/etc
  5. Mandate unit testing before code is released to the QA branch
  6. Don't release until it's stable (requires yelling at your boss)
Reply to
microman

Only a couple of things I would debate but overall this seems like good advice born of real experience.

Ian

Reply to
Ian Bell

In my opinion you're totally on the wrong track from day 1. The model is irrelevant if the team doesn't gel, and the team is irrelevant if the project planning is poor.

  1. Make the interfaces between the different sub-systems as minimal as possible, and thoroughly define them first.

  2. Create a device-independent test fixture that can certify the interface of any sub-system, and create this first.

Create suitable test fixtures and procedures for each sub-system, then incrementally verify each significant stage of development.

  3. Plan and meticulously specify the function of each sub-system. The teams are then free to utilise whatever methodology suits them best. I assume that the 15-20 is split into smaller teams, one per sub-system or whatever. One large team working on everything seems asinine.

Al

Gerald Maher wrote:

Reply to
onestone

I have seen two companies choke and die as a result of overly dominant QA, with insufficient actual project development.

Documentation is a necessary part of any project, but it can stifle them as well when taken to extremes.

Al

Reply to
onestone

Spot on post from microman. In his post I'm assuming that there's a good workable Software QA Plan in place before one starts. What did you see as debatable?

Ken.

+====================================+ I hate junk email. Please direct any genuine email to: kenlee at hotpop.com
Reply to
Ken Lee

This is an interesting statement -- you may be delivering product faster, but how can you assess that your deliverable has attained any level of quality if you don't have any "quality" goals?

Ken.

Reply to
Ken Lee

a) 'Demand your engineers validate their code...' Demanding anything is not the way to create and maintain a healthy team spirit.

b) 'Get your technical writer involved immediately' - not until at least the spec is firmed up.

c) 'The most important thing is to push back as much scope and features as you possibly can.' This can only lead to a 'them and us' relationship with marketing. IMHO it is a knee-jerk reaction to the typical demands of a marketing dept, but not the solution to the problem.

Ian

Reply to
Ian Bell

I didn't say we did not have any quality goals, just that there is no formal quality system. The problem generally is not quality itself but rather the systems for achieving it.

Ian

Reply to
Ian Bell

The documentation update is supposed to be revisited every time you make a change to the product. It is just one more item on the checklist at the end-of-stage review.

Part of the end-of-stage review is to make sure it all matches. Otherwise the review has failed. Documents must be updated before product modification, as you need the documentation to inspect/test against. It is not permitted to just check the update; it requires a re-test of the part, assembly, unit, or whole system.

Hence my building in the need to be able to manage the changes that will arise out of the problem reports (my third point).

In my environment the client will sign off on it too. However, as that is at the high-integrity end of the market (some of it life-critical), it is rather understandable that they feel they should.

Interesting description of the "Configuration Manager". Yes, such a post holder does need to be a little bombastic, but he also needs to be polite enough about it, as he is also part of the team.

Of course, here we get into the realms of team make-up, and I always favour the "Surgical Team" approach (multi-disciplinary). A small number (no fewer than three and no more than seven) is best for the really high-integrity stuff.

--
********************************************************************
Paul E. Bennett ....................
Forth based HIDECS Consultancy .....
Mob: +44 (0)7811-639972 .........NOW AVAILABLE:- HIDECS COURSE......
Tel: +44 (0)1235-811095 .... see http://www.feabhas.com for details.
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
Reply to
Paul E. Bennett

Things that are "supposed" to get done, don't. It's a fact of life. Pragmatism versus idealism. I've never had a "stage review" and am not even sure what it really is.

I've often been in situations where documentation is supposed to be updated but it isn't. And I've seen this cause tremendous problems. Thus my advice - if a process involves onerous amounts of work, then chances are it won't get done properly, so make the process easy to follow.

Example: I add a new command to the command line interface, something used only by developers. I now add half a paragraph to a document that I wrote, and take it to the official documentation person (not an engineer). She says I need to fill out a documentation change request. The first line on the request wants a documentation number, so I ask how to get a documentation number. She says to request one. From whom, I ask, and find out it's from her. Can she just fill in a number when I hand her the form? Etc. At some point I petition my boss to just remove the convenience feature rather than have to update the documentation. (On the other hand, I actually _tried_, and I had one of the only software-related documents that was official.)

I'd like to work for an idealistic company, one with well thought out processes that actually work. But I've never been at such a place. Sometimes I wonder if they even exist anywhere outside of USENET.

We referred to these as "release managers". Sometimes they were good, sometimes mediocre. They're high visibility jobs, so sometimes people forget that getting the job done is more important than looking good (the bad managers are the ones that refer to "my release").

--
Darin Johnson
    Caution! Under no circumstances confuse the mesh with the
    interleave operator, except under confusing circumstances!
Reply to
Darin Johnson

Hi Ian, Addressing your points:

a) I think you're right - "demanding" anything is not the right approach. Actually, why is it necessary to "demand" that engineers "validate" their code? Isn't this really a requirement from the Software QA Plan (and V&V Plan)? If a certain level of "quality" in the deliverable is desired then this governs the extent to which the resultant code is "verified". It is possible that schedule timelines could blow out due to anal over-verification, especially when the verification has to be done manually and not through an automatic test harness. Don't get me wrong, engineers usually have a "personal process" by which they ensure the desired quality (which is commendable), but every engineer is different and so a quality plan is required to ensure the lowest common denominator is achieved. As I see it, once the quality plan has been established then mandatory work instructions can justifiably be put into place.

Incidentally, the software engineer may "verify" his code (not validate it), but possibly he could have his design validated (against requirements). In my area, the medical field (as dictated by FDA guidelines), "Software Validation" is NOT carried out by the software engineer. It is an activity which is formally carried out by an independent tester.

b) Basically specs don't really get firmed up until near the release date. This sadly is a fact of life. All one can do is have a process in place that can handle it & at least show accountability (so you don't get dragged over the coals in the post mortem).

c) As you have implied, a "softly softly" approach is required here. Basically everyone has to be aware that there are trade-offs between features, delivery date and measurable quality. I only mention "measurable quality" because it is possible to deliver software, say, one week in advance, but it may not have been subjected to the same degree of verification. If you're producing a product that has a lower customer-acceptability threshold, then this may be OK.

Ken.

Reply to
Ken Lee

That's a fair statement, but without a formal quality system I can't see how a claim of any "measurable" quality can be made about their deliverable or product. A product with "trust me, she'll be right" credentials, I'm sure, has limited appeal to many customers and is a difficult sell when trying to win new contracts.

Ken.

Reply to
Ken Lee

On Sat, 08 Nov 2003 00:23:22 GMT, snipped-for-privacy@noname.com (Ken Lee)

Given the complexity (impossibility?) of fully testing embedded real-time software, what level of confidence could be placed in any QC system?

Mike Harding

Reply to
Mike Harding

This is a classic case of the tail wagging the dog, and unfortunately an all too common result of introducing a 'quality system'. They should be controlled by you, not the other way round. So the system should be: you send her a memo which says "please add this paragraph to this document, taking care of all the necessary numbers in the process. Report back to me when this is done (and by the way, I would like it done in the next two days)."

Ian

Reply to
Ian Bell

I did not say we have no quality system, just that we have no 'formal' quality system. We have a quality manual, we write quality plans, we have software engineering guidelines, and in fact guidelines for all aspects of development including electronics and mechanics. However, none of it is mandatory, and because of the wide range of project types we undertake it would be quite wrong to impose a rigid system on every type of project. This means that the method of ensuring quality is decided and agreed with the client at the proposal stage.

On the contrary, as I mentioned in my very first post on this topic, the most important factor is the people and the culture in which they operate. This is a far better means of ensuring quality than a rigid system which simply frustrates people. So in some cases clients are quite willing to accept 'trust us, we have done this sort of thing before, our reputation rests on our success, it'll be alright' credentials. After all, you are only as good as your last job. If you screw up, word soon gets around. As far as winning new contracts with this approach is concerned, we would only take it if the timeframe was extremely short, in which case any competitor with a rigid QA system would simply be unable to meet the requirement.

Ian

Reply to
Ian Bell

And I would debate the 'measurability' of quality anyway.

Ian

Reply to
Ian Bell

In article , Mike Harding writes

Why is it impossible to fully test an embedded real time system?

Chris Hills
Staffs, England
snipped-for-privacy@phaedsys.org

Reply to
Chris Hills

"Chris Hills" wrote in message news:bqrUJYADmPr$ snipped-for-privacy@phaedsys.demon.co.uk...

Can't even fully test hardware. You can only test to the fullest of your abilities ;)

--
Thanks,
Frank Bemelman
(remove 'x' & .invalid when sending email)
Reply to
Frank Bemelman

Because the number of possibilities to test grows exponentially with each feature. For example: 2^4 = 16, 2^16 = 65536, ...

If you have a small number of features, it may be possible to test completely. However, it doesn't take a large number of features before complete testing is impossible.

Reply to
Arnold

We long ago recognised this, so we now agree a set of major and minor features with our clients. We test all major and minor features individually, and all combinations of major features. This limits the tests to a manageable subset of the impossible total and picks up all but the most obscure bugs, which the client agrees to pay to have fixed.

Ian

Reply to
Ian Bell
