What's more important, optimisations or debugging?

I'm trying to get a feel for what people now consider more important: optimisations or debugging ability. In the past, with such tight memory limits, I would have said optimisations, but now that expanded-memory parts are becoming cheaper I would think that debugging ability is more important in a development tool. It's not exactly a black-and-white question, debug or optimised, but more a ratio, e.g. 50% debug/50% optimised or 70% debug/30% optimised, etc.

Any feedback would be greatly appreciated.

Reply to
rhapgood

I don't understand the question. Debugging features in a development tool or extra debugging information in executable code?

Programming is an art. Not only is it not a black and white question, but percentages don't make sense either. The style of a product depends on the circumstances and the arbitrary preferences of the a-holes involved in its development or its use. There is no right or wrong.

Reply to
BubbaGump

Whatever you do, you will be screwed.

VLV

Reply to
Vladimir Vassilevsky

The question refers to debugging features in a toolsuite. When choosing a set of tools for a particular project, would you place more emphasis on finding a compiler/IDE combination that makes for easy, accurate debugging, or on finding a compiler that can produce the most efficient code? Alternatively, you might go for tools that do neither exceptionally well but do both in an 'OK' manner.

I know it's more complicated than it seems, but sometimes you have to step away from the details and just look at the big picture; that's what I'm trying to do here.

Reply to
Ryan H

The rule is "Make it right, _then_ make it fast." Fast enough is fast enough. If the optimizer makes your code undebuggable, and you need the debugger, don't use the optimizer.

That said, I generally set my compiler to optimize for space. It hasn't really caused me any debugging troubles in at least 5 or 10 years. Of course, most of my debug activity resembles inserting printf statements rather than stepping through code in an emulator. YMMV.
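For what it's worth, the kind of thing I mean is a trivial trace macro along these lines (a sketch only - the TRACE name and the DEBUG switch are my own invention, and variadic macros need a C99-capable compiler):

  #include <stdio.h>

  #ifdef DEBUG
  #define TRACE(fmt, ...) \
      printf("%s:%d: " fmt "\n", __FILE__, __LINE__, __VA_ARGS__)
  #else
  #define TRACE(fmt, ...) ((void)0)  /* compiles away in release builds */
  #endif

  /* usage: TRACE("adc raw = %u", raw_value); */

On a target without stdio, the printf can be swapped for a write to a spare UART.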

Regards,

-=Dave

Reply to
Dave Hansen

Depends. What's more important, time-to-market or development cost?

If it's a totally new class of widget, then the most important thing might be getting first to market, in which case go for easier debugging and don't spare the engineering costs.

If all you have to offer is a cheaper version of the same old widget, then engineering and manufacturing costs matter, so write it in assembler for speed and size. On the other hand, figure that the cost of memory is going down too, so your competitors can reduce their memory costs simply by waiting.

--
	mac the naïf
Reply to
Alex Colvin

... snip ...

Besides which an application with bugs in it is unusable until the bugs are removed. You can always change the optimization level applied.

Reply to
CBFalconer

That's suggesting a strange trade-off. Compiler quality and debug quality are not on a trade-off see-saw.

Indeed, they can sometimes be chosen quite independently.

*Key Point* No point optimising that which does not work.

Certainly, code ceilings are less of a problem today than in the past.

Most modern uC have quite good on-chip debug support, so someone starting a 'white sheet' new design should look for devices with this level of debug support. [See a recent thread about 'live' access to memory during debug.]

As your project matures, and cash flow improves, you can also afford better compilers - if you find you really do need them.

-jg

Reply to
Jim Granville

Remember Knuth's golden rules about optimisation:

  1. Don't do it.
  2. (For experts only) Don't do it yet.

That applies to hand-tuning of the source code rather than to automatic optimisation by a compiler, but it's important to remember that the speed of the code is irrelevant if it does not work.

In my experience, it is often much easier to debug code when you have at least some optimisation enabled on the compiler. Code generated with all optimisations off is often hard to read (for example, local variables may end up on the stack, while register-based variables can be easier to follow).
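To put that in concrete terms, assuming a GCC-based cross-toolchain (the flags and the file name here are only illustrative; other compilers have equivalents):

  gcc -O1 -g -c timer.c   # mild optimisation plus debug info
  gcc -O0 -g -c timer.c   # no optimisation: bigger code, not always clearer

A mild optimisation level with debug information usually gives code that is both compact and still maps well onto the source.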

mvh.,

David

Reply to
David Brown

In news:snipped-for-privacy@i13g2000prf.googlegroups.com, timestamped 30 May 2007 17:38:39 -0700, Ryan H posted:

"On May 31, 9:21 am, BubbaGump wrote:
> I don't understand the question. Debugging features in a development
> tool or extra debugging information in executable code?

The question refers to debugging features in a toolsuite. [..] [..]"

Has anyone experience or impressions of debuggers which allow stepping backwards in time through program flow, such as apparently provided for desktops/workstations by UndoDB and by Java (virtual machine?) debuggers? If so, for which processors and with which tools? I imagine it would be possible to pay Undo Limited to port a version of its debugger to be compatible with any of the targets supported by the GNU Debugger (GDB), as UndoDB is a wrapper for GDB.

Curious, Colin Paul Gloster

Reply to
Colin Paul Gloster

Many debuggers can let you look back in time to some extent by examining the call stack - this lets you see what called your current function, and the state of local variables in the calling function (and its callers, and so on). But to get true backwards stepping, you need very sophisticated trace buffers. Some processors, combined with expensive hardware debuggers, can give you limited traces (such as indications of program flow), but a full trace requires capturing the data bus and all internal data flows. It's easy to do in a simulation, and possible in a full hardware emulator, but impossible with modern JTAG-type debugging.
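You can get quite far with just the stack, though. A sketch using standard GDB commands:

  (gdb) bt            -- print the chain of callers (the call stack)
  (gdb) frame 1       -- select the immediate caller's stack frame
  (gdb) info locals   -- show that caller's local variables
  (gdb) up            -- move one more level towards main()

That tells you how you got here and what the callers were doing, but not what the program did before the current call chain was entered - for that you need the trace hardware described above.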

Reply to
David Brown

I don't agree with this. For small programs it is easy to implement an efficient algorithm from the start rather than beginning with an inefficient one. And it's hard to improve badly written code, so rewriting it from scratch is often better than trying to fix it.

For large programs it is essential to select the right architecture and algorithms beforehand, as it is usually impossible to change them later. The bottlenecks are typically caused by badly designed interfaces adding too much overhead.

In my experience, well-designed code is both efficient and easy to understand, so it shouldn't need optimisation (apart from fine-tuning). In other words, if you *need* to optimise an application, you got it wrong.


Yes, a debugger is really only required if you have a nasty pointer bug overwriting memory etc.
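To make that concrete, a contrived example of the sort of bug I mean:

  #include <string.h>

  void oops(void)
  {
      char buf[8];
      /* Writes past the end of buf; whatever happens to sit next
         to it in memory is silently corrupted, and the symptom
         often shows up somewhere far away from this line. */
      strcpy(buf, "far too long for buf");
  }

A hardware watchpoint on the corrupted location is often the quickest way to catch the culprit.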


Indeed, turning off all optimization makes things impossible to debug on some compilers. I prefer leaving most optimizations on as well.

Wilco

Reply to
Wilco Dijkstra

What you are describing is what I would imagine all experienced software engineers do. But then what do you do if performance isn't good enough, i.e. with respect to the design, you got it wrong? I suspect that's when Knuth's golden rules kick in. I could of course be barking up the wrong tree - I haven't read Knuth.

Regards,

Paul.

Reply to
Paul Taylor

As with all such questions, the answer is that it depends. In this case, it depends on the particular project. Is it a quick-n-dirty hack on a platform you've never worked on before, but can assume is amply powerful for the job, so that optimal code makes no difference but getting the job done quickly does? Or is it a tight squeeze of a hard problem into a small controller you already know, where you need all the help you can possibly get to make it fast, deadlines be damned?

As any handyman could tell you, it's primarily the job that decides what tools you need, with personal preferences a distant second. The best hammer money can buy won't help you turn a screw.

And then of course, there's always the remote chance that you could get a toolchain that's damn near perfect both at debugging *and* at optimization.

The only big picture to be had here is that there is no such thing as a big picture. The world of engineering consists entirely of small pictures.

Reply to
Hans-Bernhard Bröker

I remember there being a third:

  3. Before you do it, measure.

While that latter statement applies rather widely, let's keep in mind that this is the embedded programming newsgroup after all, where real-time constraints are a regular old fact of life. That means speed and correctness may not be separable just like that. Sometimes, if code is slow, that alone means it does not work.
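In that spirit, the classic scope-on-a-pin deadline check (a sketch; PORTB and process_sample are hypothetical stand-ins for your part's real register and your real work):

  #define TEST_PIN_HIGH()  (PORTB |= 0x01)
  #define TEST_PIN_LOW()   (PORTB &= (unsigned char)~0x01)

  extern volatile unsigned char PORTB;  /* stand-in for a real SFR */
  extern void process_sample(void);     /* the time-critical work  */

  void isr_sample(void)
  {
      /* If the pulse on the pin ever exceeds the deadline, the code
         is broken even though it computes the right answer. */
      TEST_PIN_HIGH();
      process_sample();
      TEST_PIN_LOW();
  }

Watch the pulse width on an oscilloscope; correctness here includes the timing.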

Reply to
Hans-Bernhard Bröker

  1. Debug
  2. Debug
  3. Debug

If it's slow, you will have a chance to fix it. If it's broken, most customers will have moved on.

But the choice is yours.

gm

Reply to
GMM50

The above rules refer to premature optimization, which is allowing efficiency considerations to affect the design (in a presumed negative way). A similar quote is "premature optimization is the root of all evil". Neither is Knuth's: the golden rules are Jackson's, the other is Hoare's.

However, my point is precisely that you have to design for efficiency, as it is not something you can add at a later stage. And efficiency matters a lot in the embedded world.

Back to your question of what to do when things go wrong. At that point you've got no choice but to optimize in every possible way. I "optimized" a 600K-line application by compiling it with the best compiler money can buy and carefully choosing the optimal set of compilation options. There is not much else one can do once all the hotspots have been removed.
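Purely as an illustration of 'choosing the options carefully' (the file names are invented, and the right set is compiler- and target-specific):

  gcc -O2 -fomit-frame-pointer -c hot_loop.c   # favour speed where it matters
  gcc -Os -c everything_else.c                 # favour size everywhere else

Mixing per-file optimisation levels like this is often the cheapest 'optimisation' available.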

Wilco

Reply to
Wilco Dijkstra

What about compiler/assembler optimisations that the toolsuite performs? Some compiler optimisations can make it quite difficult for the debugger (variable watching, stack tracing, etc.); is it worth turning these optimisations on if debug accuracy and ability are compromised?
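A contrived example of the variable-watching problem (details vary by compiler):

  int scale(int x)
  {
      /* With optimisation on, 'tmp' may never exist in the generated
         code - the compiler can emit x*2+1 directly, so the debugger
         reports the variable as "optimised out" when you watch it.
         Temporarily declaring it volatile forces a real location. */
      int tmp = x * 2;
      return tmp + 1;
  }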

I'm noticing there almost seems to be a generational gap: some more experienced developers never had access to great debug tools and have learnt to live without them, while some newer developers expect flexible debugging facilities. This could be because they develop on a PC (features galore) before targeting more restrictive embedded devices. Interesting.

Reply to
Ryan H

Same answer as before: it depends entirely on what the critical aspect of the project is. If developing quickly matters more than the utmost speed of the generated code, by all means disable any optimizations that get in the way of debugging. If you need to get the code to run faster no matter what, screw elegance in debugging and suit up for some hard, dirty work. If all else fails, debug by inspecting the generated machine code, then freeze the result of that effort (i.e. replace the critical parts with a known-good assembler subroutine).

And that's before we consider coding rules such as NASA's standing rule: Debug what you fly, nothing else. Or the fact that the focus can differ between development speed and runtime speed even within a single project.

Of course there is. That generational gap is as old as time. Experienced hunters almost certainly felt the same way about those pampered youngsters who thought of themselves as "hunters" even though they never 'properly' learned to hunt with nothing but a sharpened stone, within years of the bow and arrow being invented.

The real problem is not what those newer developers expect. It's that some of them actually *rely* on such comfortable tools. That will bite them in the private parts rather badly if they ever have to work in a more restricted environment.

Reply to
Hans-Bernhard Bröker

IMHO, any embedded designer should be sufficiently fluent in the actual assembly language of the target system to be able to put breakpoints in the correct positions and to interpret the results.

Paul

Reply to
Paul Keinanen
