validity of ... reasons for preferring C over C++

Things like: what information does the compiler have at this point? What information would it need to do this transformation? What border cases does it have to handle? This leads to consequences such as: the compiler has to generate an exception handling frame here because it cannot prove that this trivial function never throws; it has to instantiate (and thus has to see) this template here because it cannot assume it is instantiated elsewhere; or 'std::string s = a+b+c+d;' creates an uncomfortable number of temporaries whereas 'std::string s = a; s += b; s += c; s += d;' does not.
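Spelled out, that last point looks like this (an illustrative sketch, nothing project-specific):

    #include <string>

    std::string join(const std::string& a, const std::string& b,
                     const std::string& c, const std::string& d) {
        // Chained operator+ materializes intermediate temporary strings
        // before the result is finally copied/moved into s.
        std::string s = a + b + c + d;
        return s;
    }

    std::string join_in_place(const std::string& a, const std::string& b,
                              const std::string& c, const std::string& d) {
        // Appending in place reuses one string object and avoids
        // those intermediate temporaries.
        std::string s = a;
        s += b;
        s += c;
        s += d;
        return s;
    }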

Call it "what happens under the hood" or "how does the compiler think", the idea is the same.

Why not? C++ at least makes writing my own allocator both a language feature and a library feature.

But if I cannot use 'new' and 'delete', the answer simply becomes: "I cannot use the language features 'new' and 'delete'" :-) I also cannot use most of the STL, but that's not a core language feature.

Because on some compilers this is a matter of just flipping a switch, so I can use my already-written classes; on others I'd have to write special exception-less code to keep the compiler from generating code I do not want.

What else are you expecting? Other memory saving techniques like using bitfields ("enum {...} foo : 8" to save 3 bytes) are not C++-specific, and apply to C as well.
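For illustration (names are made up; exact sizes depend on the compiler and ABI):

    enum State { IDLE, RUNNING, STOPPED };

    struct Unpacked {
        State state;            // typically 4 bytes
        unsigned char flags;    // 1 byte (+ 1 byte padding)
        unsigned short count;   // 2 bytes
    };                          // usually sizeof == 8

    struct Packed {
        State state : 8;        // 8-bit bit-field: 1 byte instead of 4
        unsigned char flags;
        unsigned short count;
    };                          // usually sizeof == 4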

In this particular case, the class went roughly like this (details omitted):

    template<typename T, size_t N>
    class Queue {
    public:
        void post(const T& t) { m_elems[m_head++ % N] = t; }
        T get()               { return m_elems[m_tail++ % N]; }
    private:
        T m_elems[N];
        size_t m_head, m_tail;
    };

where each function generates only a few machine instructions. Adding a void* and a size_t would only have made matters more complicated.
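Used roughly like this (illustrative; initialization of m_head and m_tail is among the omitted details):

    Queue<int, 16> q;   // fixed-capacity ring buffer, no heap involved
    q.post(42);         // with N a power of two, '% N' compiles to an AND
    int x = q.get();    // x == 42, assuming m_head/m_tail start out equal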

Well, I did that before using it in the next project, where we finally had infrastructure for making unit tests in the build system :)

Stefan

Reply to
Stefan Reuther

On 22.10.2014 at 01:39, Paul Rubin wrote:

Just in case, let me throw in a side note: you almost certainly don't have to go as far as compiling "on" a 32-bit system to achieve that. It will be quite sufficient to compile _for_ a 32-bit system. I.e. all you need is a cross-compiler for a 32-bit target platform (e.g. Win32 instead of Win64).

I don't really think that's true for C++ as a language. It may well be a true statement about people programming in C++, though.

There was a time when compiler technology didn't manage to keep up with the demands of the C++ language specification. Template instantiation created a whole lot of new and interesting problems, which took quite a while to sort out. Linkers in particular have to be a whole lot better than they used to be; otherwise the sizes of C++ programs that use templates liberally would have exploded beyond all reason.

Reply to
Hans-Bernhard Bröker

Even at O(log N), qsort chews up a lot of stack space for non-trivial N. heapsort is more appropriate in a resource-constrained environment...

Hope I don't find another one of those! Best Regards, Dave

Reply to
Dave Nadler

In an environment so resource-constrained that log N is consequential, you aren't going to fit a non-trivial N anyway. So you may as well use quicksort. ;-).

Reply to
Paul Rubin

Yes, but even better, write quicksort without using actual recursion. (The stack has to hold arguments and return addresses, which add up even for small log(N).) An array of log2(memory size) entries won't be all that big. You do have to be sure to do the partitions in the right order, though.
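A minimal sketch of that approach (illustrative only; the int element type and the Lomuto partition are assumptions):

    #include <cstddef>
    #include <utility>

    // Quicksort with an explicit stack of (lo, hi) index pairs instead of
    // recursion.  Always pushing the LARGER partition and looping on the
    // smaller one keeps the stack depth at about log2(n) entries.
    void quicksort(int* a, size_t n) {
        if (n < 2) return;

        std::pair<size_t, size_t> stack[64];   // plenty for any realistic n
        size_t top = 0;
        stack[top++] = std::make_pair(size_t(0), n - 1);

        while (top > 0) {
            --top;
            size_t lo = stack[top].first;
            size_t hi = stack[top].second;

            while (lo < hi) {
                // Lomuto partition around the last element.
                int pivot = a[hi];
                size_t i = lo;
                for (size_t j = lo; j < hi; ++j)
                    if (a[j] < pivot) std::swap(a[i++], a[j]);
                std::swap(a[i], a[hi]);

                size_t left_size  = i - lo;    // elements below the pivot
                size_t right_size = hi - i;    // elements above the pivot

                if (left_size < right_size) {
                    if (right_size > 1) stack[top++] = std::make_pair(i + 1, hi);
                    if (left_size  < 2) break; // nothing left to sort here
                    hi = i - 1;                // continue with the smaller side
                } else {
                    if (left_size  > 1) stack[top++] = std::make_pair(lo, i - 1);
                    if (right_size < 2) break;
                    lo = i + 1;                // continue with the smaller side
                }
            }
        }
    }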

-- glen

Reply to
glen herrmannsfeldt

Yes, that's normal: use an explicit stack in the code rather than recursion. It's still O(log N), but you don't have to save as much stuff on the stack as recursion would use.

Reply to
Paul Rubin

I agree as far as the *language* is concerned, but the standard *libraries* are almost useless for a small real-time system. The biggest problem is that they were created with the desktop mindset: "memory is cheap and plenty, average performance is what counts, and resource shortage must be met with graceful degradation". This leads to libraries that use the heap and exceptions, and have good average but bad worst-case performance. Exactly what you *don't* want on a small real-time embedded system.

My C++ blink-a-led built for the LPC1114 uses 232 bytes of code. OK, that is without heap, exceptions, and global objects, but I don't want those on a small uc anyway.

Wouter van Ooijen

Reply to
Wouter van Ooijen

Um, in this case N was ~900. Stack space is especially constrained in segmented architectures; in this case the data to be sorted was "far". Rather than chewing up stack (or static data) for qsort, use heapsort!
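Heapsort sorts in place with O(1) extra data and no recursion, so nothing beyond the array itself is needed. A minimal sketch, ignoring near/far pointer qualifiers and assuming int elements:

    #include <cstddef>
    #include <utility>

    // Restore the max-heap property for the subtree rooted at i,
    // considering only the first n elements.
    static void sift_down(int* a, size_t n, size_t i) {
        for (;;) {
            size_t largest = i;
            size_t l = 2 * i + 1;
            size_t r = 2 * i + 2;
            if (l < n && a[l] > a[largest]) largest = l;
            if (r < n && a[r] > a[largest]) largest = r;
            if (largest == i) return;
            std::swap(a[i], a[largest]);
            i = largest;
        }
    }

    // In-place heapsort: O(n log n) worst case, O(1) extra memory, no recursion.
    void heapsort(int* a, size_t n) {
        if (n < 2) return;
        // Build a max-heap bottom-up.
        for (size_t i = n / 2; i-- > 0; )
            sift_down(a, n, i);
        // Repeatedly move the maximum to the end and shrink the heap.
        for (size_t end = n - 1; end > 0; --end) {
            std::swap(a[0], a[end]);
            sift_down(a, end, 0);
        }
    }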

Hope that is helpful, Best Regards, Dave

Reply to
Dave Nadler

Well if you use an explicit stack rather than recursion, you could put it in far memory as well.

Reply to
Paul Rubin

Why would I do that, when heapsort requires NO additional memory??

Reply to
Dave Nadler

It's more implementation effort (if you write it yourself) to code a completely different algorithm than to tweak something already in the library; it may burn more code space in the device if something else in the system uses qsort; and heapsort is generally slower. Do you want to control the hardware, or do you want it to control you?

Reply to
Paul Rubin

There is no sorting algorithm that requires no additional memory. Maybe O(1) memory with a small coefficient. (On register machines, count the registers, too.)

In some cases, code space and data space are separate, and you might save data space at the expense of code space.

If you do the recursion in the right order, and use an explicit (array in memory) stack, instead of recursive calls, quicksort uses O(log N) for the stack. For embedded sized N's, that should be pretty small.

Next, compare code space to other algorithms, and see which one is less overall. (As above, except when code and data are separate, where the tradeoff might be different.)

-- glen

Reply to
glen herrmannsfeldt

Particularly if the constrained resource is time. Quicksort is O(n^2) in the worst case, while heapsort is still O(n.log(n)).

This has resulted in a number of C and C++ standard libraries using introsort, which initially uses quicksort but switches to heapsort if the size of the largest partition doesn't decay quickly enough.
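In outline the idea looks roughly like this (a sketch of the technique, not any particular library's std::sort; the depth budget of about 2*log2(n) is the usual choice):

    #include <algorithm>   // std::make_heap, std::sort_heap
    #include <cstddef>
    #include <utility>     // std::swap

    // Depth-limited quicksort that falls back to heapsort: the core of introsort.
    static void introsort_loop(int* a, size_t lo, size_t hi, int depth_budget) {
        while (hi - lo > 1) {
            if (depth_budget == 0) {
                // Partitions are not shrinking fast enough: finish this
                // sub-range with heapsort, which guarantees O(n log n).
                std::make_heap(a + lo, a + hi);
                std::sort_heap(a + lo, a + hi);
                return;
            }
            --depth_budget;
            // Lomuto partition around the last element.
            int pivot = a[hi - 1];
            size_t i = lo;
            for (size_t j = lo; j < hi - 1; ++j)
                if (a[j] < pivot) std::swap(a[i++], a[j]);
            std::swap(a[i], a[hi - 1]);
            // Recurse on the left part, loop on the right part.
            introsort_loop(a, lo, i, depth_budget);
            lo = i + 1;
        }
    }

    void introsort(int* a, size_t n) {
        int budget = 0;
        for (size_t m = n; m > 1; m >>= 1) ++budget;   // ~log2(n)
        introsort_loop(a, 0, n, 2 * budget);
    }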

Reply to
Nobody

In a resource-constrained system you are often very interested in the runtime performance. In particular, the worst-case performance is usually the most important - you have to be sure the operation finishes within a given time. Quicksort is useless in such cases - it has an O(n^2) worst case, which means your system can easily work fine in all your tests, but fail with an unlucky choice of data, unless you design it with far more processor resources than you generally need.

Heapsort is much more consistent and predictable at O(n.log(n)). Even though it takes longer on average than quicksort, that predictability makes it a good choice for embedded, resource-constrained and real-time systems.

Reply to
David Brown

The C library sorting routine is called qsort because it historically used an algorithm called "quicker sort", but these days the algorithm is no longer specified and I don't know what any particular implementation uses. Maybe heapsort for all I know.

The C++ std::sort routine before C++11 was specified to use O(n log n) comparisons on average, but since C++11 it's specified to have O(n log n) comparisons without "average", i.e. presumably worst case.

I just noticed this and it might be interesting to look at some of the committee minutes regarding this change.

Reply to
Paul Rubin

Really?

If you have only a few hundred or a few thousand bytes of RAM/core, even primitive sorting methods like bubble sort or insertion sort are quite adequate, even with a processor with a 1 us cycle time. Except for extremely low power applications, I would not bother with a more advanced sort algorithm.
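For scale, an insertion sort is only a handful of lines (a sketch, assuming int elements):

    #include <cstddef>

    // Straight insertion sort: O(n^2) worst case, but tiny code and
    // perfectly adequate for small n.
    void insertion_sort(int* a, size_t n) {
        for (size_t i = 1; i < n; ++i) {
            int key = a[i];
            size_t j = i;
            while (j > 0 && a[j - 1] > key) {
                a[j] = a[j - 1];
                --j;
            }
            a[j] = key;
        }
    }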

Reply to
upsidedown

You snipped the most important part "In particular, the worst-case performance is usually the most important - you have to be sure the operation finishes within a given time."

Sometimes a simple sort /is/ the best choice. I said runtime performance is important - but that is very different from saying you want the fastest possible performance. Real-time systems are about doing the job within the required time, not about doing it quickly.

Reply to
David Brown

My point was that with small data sets, even the worst-case performance is more than sufficient.

Reply to
upsidedown

Yes, absolutely. But you had taken half my comment and questioned its validity, so I felt I had to explain things a little more clearly.

Reply to
David Brown

Something like Shell sort is only slightly more complex than insertion sort, but has significantly better performance, even for as few as ten elements. If you know you'll never have more than about a dozen items, go ahead and use insertion sort, but the boundary where switching to a slightly more sophisticated scheme (particularly Shell sort) pays off is pretty low. And Shell sort is perfectly reasonable to use up to a few thousand elements (beyond which you should likely be heading for Heapsort).
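For comparison, a Shell sort sketch (the simple halving gap sequence and int elements are just illustrative choices; better gap sequences exist, e.g. Ciura's):

    #include <cstddef>

    // Shell sort: insertion sort over progressively smaller gaps.
    // Only slightly more code than plain insertion sort, but far fewer
    // element moves once n grows beyond a dozen or so.
    void shell_sort(int* a, size_t n) {
        for (size_t gap = n / 2; gap > 0; gap /= 2) {
            for (size_t i = gap; i < n; ++i) {
                int key = a[i];
                size_t j = i;
                while (j >= gap && a[j - gap] > key) {
                    a[j] = a[j - gap];
                    j -= gap;
                }
                a[j] = key;
            }
        }
    }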

Reply to
Robert Wessel
