Software Engineering: Art or Science?

Except for hard DSP applications or real-time closed-loop control, software design seldom has a mathematical (or even theoretical) basis, and no predictive theory is used in software system design. People mostly just write code based on experience and then try it out. That certainly isn't science, and it barely qualifies as engineering.

It would only be art if the programs were beautiful, but they're usually ugly.

John

Reply to
John Larkin

I think the word you are looking for is craft.

Robert

--
" 'Freedom' has no meaning of itself.  There are always restrictions,
be they legal, genetic, or physical.  If you don't believe me, try to
chew a radio signal. "

                        Kelvin Throop, III
Reply to
R Adsett

IMO, a very simple and probably wrong definition of art.

Like anything people create, programs may be craft, art, science or just crap. Most of the time, I create crap... but there are those few highlighted moments.

Harald

--
For spam do not replace spamtrap with my name
Reply to
Harald Kipp

The bastard son of Win 3.1 and VMS.

I just wrote a simulation program to track the timing and paths of ions moving through time-variant electric fields. I used PowerBasic for DOS, with full graphics. It works fine under XP. DOS apps that do serial i/o seem OK too. I've always thought that '98 and up were damned fine DOS platforms.

'98 allows i/o port access from a DOS app, and 2K/XP can be hacked with 'totalio' or similar add-ons.

John

Reply to
John Larkin

I don't know where you have experienced software development, but it certainly is not that way with me. I have a theory, ahead of writing code, of what I want the code to achieve and how it will sit on the hardware, plus a risk and reliability assessment down to the module level. I often write the definitive description of what is required of many of the sub-routines (certainly the upper abstraction layers and the hardware interface layers). From this attention to detail I can certify that the code does exactly what that definitive description (glossary text) requires.

If the code starts looking ugly you have taken a wrong direction somewhere and should go back and re-think. Robust code is most often simply elegant and beautiful to behold. I think that applies in most languages and is not just a Forth thing.

--
********************************************************************
Paul E. Bennett ....................
Forth based HIDECS Consultancy .....
Mob: +44 (0)7811-639972 .........NOW AVAILABLE:- HIDECS COURSE......
Tel: +44 (0)1235-811095 .... see http://www.feabhas.com for details.
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
Reply to
Paul E. Bennett

Speaking as a "code cowboy" who's done his engineering mostly as a lone-wolf programmer, or as one of a small team of programmers in a tightly-funded startup, with all the trimmings of impossible deadlines, zero funding, and moving market targets, I agree. Even those of us prone to a bit of Yee Hah! can approach software development scientifically - even if the bureaucratic approaches favored by many Big Organizations, with their vast paperwork and endless meetings, make my blood freeze - and, more importantly, just ain't gonna happen in a startup.

As someone who's often among the first engineers in a startup, my approach is to have a few processes rigorously adhered to. Even one- or two-person "teams" should have the following:

  1. Source code control (obviously). If it is "transactional" (i.e., permits multiple file submits in one ingress into the source code control system), so much the better - I've found plenty of hard-to-find bugs by rolling forward change-logs in source code control systems until a bug appears. Natch, everything should be there, including specs, notes, marketing collateral, manuals, build scripts, etc...
  2. Good design with API's graven in stone, including reasonably precise documentation. In a startup, good coding fences make for good neighbors, particularly when programmers have to work at warp speed - and, by necessity, are often domain experts in different fields - so peer strategies like code reviews are of limited utility.

API's should be designed before much code (if any) is written. API's should be stub-implemented and integration done early as well, so that gotchas in API design can be isolated and rooted out before they break the world. (Stub-implementation of API's has the additional useful property of permitting "real" demos.)
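The stub-first idea can be sketched in miniature as follows; the module and function names here are invented for illustration, not taken from any real project:

```python
# Hypothetical gateway API, designed and agreed first, then stub-implemented
# so integration and "real" demos can happen before the internals exist.

class NotImplementedYet(Exception):
    """Raised by stubs so unimplemented paths fail loudly, not silently."""

def store_record(key, value):
    """Agreed-upon API: persist a record. Stub until the real store lands."""
    raise NotImplementedYet("store_record: not implemented yet")

def fetch_record(key):
    """Agreed-upon API: fetch a record. Stub until the real store lands."""
    raise NotImplementedYet("fetch_record: not implemented yet")

# Callers can already be written and exercised against the stubs:
def demo():
    try:
        store_record("id-1", {"volts": 3.3})
    except NotImplementedYet as e:
        return str(e)  # the demo shows the call path end to end

print(demo())
```

Because the stubs raise rather than return junk, any path that accidentally depends on unimplemented behaviour surfaces immediately during early integration.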

  1. Don't be afraid to toss throwaway demo code that has to be cobbled up early to get funding. IMO, this causes tons of problems in startups: the fact that a demo appears to trivially function confuses management into thinking that there's more there than there really is, and the hacked-together throwaway demo source becomes the codebase for the Real Product. This must be avoided at all costs, even if it means getting into the face of management. My own take: if management isn't willing to allow at least some level of proper engineering to be done, the startup isn't going to fly anyway.
  2. API's should be testable in isolation. This one is hard, and takes precious time, but is worth the effort in having measurable progress of each piece of the system.
  3. Test early, test often. Early on, one should probably be putting as much engineering effort into the test infrastructure as one puts into product development. Frankly, you can't deliver to a deadline until you have a test environment worthy of the name, so you can see what's working, what's still broken, and what still needs to be implemented. Note that if performance matters in the product, one will need both "correct answer" testing as well as timed testing.
  4. The only Rational tools I'll plug are Purify (memory leaks and corruption) and PureCoverage (for coverage analysis). Having a well-covered testsuite pass and running purify-clean with good coverage means you have a solid product. I also like gprof for profiling - it's especially good for cleaning up performance issues.
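Point 3's pairing of "correct answer" tests with timed tests might look like this in miniature; the function under test is a made-up example, and the timing budget is an arbitrary placeholder:

```python
import time

def checksum(data):
    """Example function under test: simple additive checksum, modulo 256."""
    return sum(data) % 256

def test_correct_answer():
    # "Correct answer" testing: compare against known-good expected values.
    assert checksum(b"") == 0
    assert checksum(bytes([1, 2, 3])) == 6
    assert checksum(bytes([255, 1])) == 0

def test_timing(budget_seconds=1.0):
    # Timed testing: the same code must also meet its performance budget.
    payload = bytes(100_000)
    start = time.perf_counter()
    checksum(payload)
    elapsed = time.perf_counter() - start
    assert elapsed < budget_seconds, f"too slow: {elapsed:.3f}s"

test_correct_answer()
test_timing()
print("all tests passed")
```

Keeping both kinds of test in the same harness means a performance regression fails the build just as visibly as a wrong answer does.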

Greg Kemnitz, Programming Cowboy :) snipped-for-privacy@yahoo.com

Reply to
Greg Kemnitz

... snip solid advice ...

Much depends on what you mean by API. Of course you have to design the interface between modules before implementation (and as far as I am concerned any OS is just another module), and once done, changes in such interfaces are grave decisions. But the important thing IMO is to do the design top-down. That will do the initial partitioning, and the interfaces will develop naturally from the demands.

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@worldnet.att.net)
   Available for consulting/temporary embedded and systems.
     USE worldnet address!
Reply to
CBFalconer

If you're like me & are a druid, tone-deaf and totally colour un-coordinated, then I'd start to rely on software metric tools such as McCabe's Cyclomatic Complexity index. Also it's sometimes difficult (and possibly un-diplomatic) to criticize a team member's "beautiful" code --- it's more palatable to let a utility be the "art" critic. Best applied to team members who have "acceptance" issues and a large collection of handguns & "home protection" appliances ;-)

Ken.

+====================================+ I hate junk email. Please direct any genuine email to: kenlee at hotpop.com
Reply to
Ken Lee

"Ken Lee" wrote in message news: snipped-for-privacy@News.CIS.DFN.DE...

One of the easiest metrics is the number of warnings when compiling with -Wpedantic under gcc. I often wonder how people can write code where a warning appears on every second line or so...
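Tallying that metric can even be automated. A rough sketch that counts gcc/clang-style "warning:" diagnostics in build output (the sample log text below is invented for illustration):

```python
def count_warnings(build_log):
    """Count gcc/clang-style diagnostics: 'file:line:col: warning: ...'."""
    return sum(1 for line in build_log.splitlines() if ": warning:" in line)

sample_log = """\
main.c:12:5: warning: implicit declaration of function 'foo'
main.c:40:9: warning: comparison between signed and unsigned
main.c:55:1: error: expected ';' before '}' token
"""

print(count_warnings(sample_log))  # tallies the 2 warning lines, not the error
```

Tracking that number per build over time gives the warning-count metric a trend line instead of a one-off snapshot.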

We use quite a lot of 3rd-party software that falls into that category. At another company I had to fix a project where all compiler warnings had been switched off, because otherwise the errors could no longer be found in the output! I think you get the idea of the quality of that code.

Cheers

- Rene

Reply to
news.ip-plus.ch

As a metric, I guess one could monitor the number of C language non-conformances, say per week (or month). Most compilers, GCC included, do a moderate job of this at best; one should use a proper Lint tool to get more comprehensive coverage. Where I work we don't track Lint output, but we do assess it at code inspections. On a monthly basis we look at the McCabe index and the delta size of all components. The defects database is reviewed by the team leader basically every day and usually trended every month.
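For a sense of what a McCabe-style tool actually measures: cyclomatic complexity is the number of decision points plus one. A crude approximation for Python source using the standard `ast` module (a sketch only, no substitute for a real metrics tool, and the sample function is invented):

```python
import ast

def cyclomatic_complexity(source):
    """Approximate McCabe complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        # Branch constructs each add one path through the code.
        if isinstance(node, (ast.If, ast.For, ast.While,
                             ast.ExceptHandler, ast.BoolOp)):
            decisions += 1
    return 1 + decisions

code = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2:
            return "odd seen"
    return "done"
"""

print(cyclomatic_complexity(code))  # 1 + (if, for, if) = 4
```

A real tool would also handle `and`/`or` chains, `match` statements and comprehensions more carefully, but the principle - count branches, report paths - is the same.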

Reply to
Ken Lee

I agree. I usually follow what I call the "Christmas tree approach" and design the gateway API's to the main modules of the system initially, after figuring out the "galactic" questions of the high-level design. Then, the first piece of code I'll write is the error subsystem - which, of course, was carefully and completely designed :) (I'm not sure why error handling is still approached as an afterthought in so many designs...)

After this, the gateway API's are completely coded as stubs which report an error and somehow indicate "Hey, I'm not implemented yet!". Once this is done, the "ornaments" (i.e., working code replacing the stubs) are hung on the "tree".

The "Christmas tree approach" has the useful properties of good top-level partitioning and a quick shot at "something that appears to work" - useful both for progress demos and for getting integration with external subsystems done as early as possible, where relevant. It also gives something "real" to build the test fixture/infrastructure around, so that tests can be written and testing in general can begin as early as possible.
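The "error subsystem first, then stubs, then ornaments" ordering might look like this in miniature (all names and the placeholder reading are invented for illustration):

```python
# Error subsystem, written first so every later stub and ornament
# reports failures the same way.
ERROR_LOG = []

def report_error(subsystem, message):
    """Central error reporting: one place to log, trace, or escalate."""
    entry = f"{subsystem}: {message}"
    ERROR_LOG.append(entry)
    return entry

# A gateway stub ("the tree"): the call path exists and reports
# "not implemented" through the error subsystem.
def read_sensor(channel):
    return report_error("sensor", "read_sensor not implemented yet")

# An "ornament": the stub body is later replaced with working code,
# while callers and the error path stay unchanged.
def read_sensor_v2(channel):
    if channel < 0:
        return report_error("sensor", f"bad channel {channel}")
    return 42  # placeholder reading from the now-real implementation

print(read_sensor(0))
print(read_sensor_v2(0))
```

Because the stub and the ornament share one error path, swapping an ornament in changes nothing for callers or for the test fixture built around the tree.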

Greg Kemnitz snipped-for-privacy@yahoo.com

Reply to
Greg Kemnitz

Hi Greg,

I like the Christmas tree analogy. When you are waiting to get your hands on the hardware, it is a very useful technique that allows you to prepare the rest of the application. It would help to have the means to certify that the stubs' interfaces function correctly and report errors properly (minimising the chances that you are fooling yourself).

With an appropriate level of pre-specification and technical review, your Christmas tree approach could be considered part of a sound engineering process.

Reply to
Paul E. Bennett
