Developing/compiling software

Not quite that ideal as yet. The timer pool deals in timer objects with handles, but the UART driver is not so ideal. At present, the programming interface consists of a pointer to a structure containing everything related to that UART, one item of which is a function code. It needs to be developed further into a more OO call interface.
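A minimal sketch of that kind of single-entry-point, function-code interface (all names here are illustrative, not from the actual driver):

```c
#include <stdint.h>

/* Hypothetical function codes for a single-entry-point driver call */
enum uart_func { UART_INIT, UART_TX_BYTE, UART_RX_BYTE };

/* Everything related to one UART travels in this structure */
struct uart_ctl {
    enum uart_func func;   /* requested operation */
    uint8_t data;          /* byte in or out */
    int status;            /* result of the last call */
};

/* One entry point dispatches on the function code */
void uart_request(struct uart_ctl *ctl) {
    switch (ctl->func) {
    case UART_INIT:    ctl->status = 0; break;                /* set up hardware */
    case UART_TX_BYTE: ctl->status = 0; break;                /* write ctl->data to TX */
    case UART_RX_BYTE: ctl->data = 0; ctl->status = 0; break; /* read RX into ctl->data */
    default:           ctl->status = -1; break;               /* unknown function code */
    }
}
```

The single dispatch function is compact, but every caller has to know the structure layout - which is exactly the "gory structure details" problem a more OO call interface would hide.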

The most recent project using this set of code had a 4-way RS485 link with a custom protocol, so there are more layers on top of the basic driver to handle line turnaround (port driver), protocol deframing and ACK/NACK or data responses. The gory structure details become hidden by these. There's also an aux port module used for the terminal interface, layered on top of the UART driver, which provides putchar, getchar, string I/O and conversion functions. It's called curses.c, as a nod elsewhere :-). There's no RTOS, just a simple state machine and multiple interrupt sources to schedule everything.

The OO stuff needs more development, but clients don't pay you for this sort of thing and it has to be done incrementally as time allows. I guess the aim is to have a universal embedded function library, eventually. The code may not be quite as efficient as individually hand-coded stuff, but could be very beneficial in many other ways...

Regards,

Chris

Reply to
ChrisQ

As I said (paraphrased) in another post, if you have a development method that works better for you to produce clear, maintainable and efficient code, then that trumps any individual rules or recommendations.

What you have here is a typical trade-off between writing re-usable code and writing specific code. The re-usable code takes longer to develop and test the first time, but will save overall if it is used on many projects. But it comes with a run-time and code space cost.

This all sounds like a good way to produce reusable libraries of code. But there is absolutely nothing here to suggest that global data is a problem. If I understand you correctly, you've just hidden it within your "call interface structures". The effect is really just a matter of making your access to global data a little more formalised and consistent, as well as tidying the data inside neat structs rather than in a looser pile. Application code has access to these structs - it #include's the headers that define the structs, and it has instantiations of them. It can therefore access the data directly if it wants.
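To illustrate the point (hypothetical names): when the struct type and its instances are visible in a public header, application code can still reach past any call interface and touch the fields directly.

```c
/* uart.h (hypothetical): both the type and the instance are public */
struct uart_state {
    unsigned baud;
    unsigned rx_count;
};

struct uart_state uart0 = { 9600, 0 };   /* instantiated by the driver */

/* Application code can bypass the formal interface entirely - the
 * data is still effectively global, just tidier. */
void app_reset_counter(void) {
    uart0.rx_count = 0;
}
```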

Reply to
David Brown

The overloading of the 'static' keyword in the C language to modify lexical scope in one case and lifetime in another was a huge mistake IMO.
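The two meanings side by side, as a minimal sketch:

```c
/* Meaning 1: at file scope, 'static' modifies lexical scope
 * (internal linkage) - the name is invisible to other files. */
static int module_private = 42;

/* Meaning 2: inside a function, 'static' modifies lifetime -
 * the variable persists across calls instead of living on the stack. */
int next_id(void) {
    static int id = 0;   /* initialised once, retained between calls */
    return ++id;
}
```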

--
Grant
Reply to
Grant Edwards

Unfortunately, I've seen quite a few projects which were started as "small" and then overgrew into "large". It was too late to change the interfaces, so they ended up duplicating the entire modules by copy/paste and renaming the global variables by find/replace. "Code efficiency" is a very common excuse for sloppy practices.

The notions of "slow" and "fast" are meaningless without respect to the particular application. A function either works or it doesn't. If it works, I would do it in the most clear, portable and modular way. If there are *real* constraints on size/speed, then I may have to resort to globals.

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant

formatting link

Reply to
Vladimir Vassilevsky

hehe. Okay. I don't remember the history of it well, but I seem to recall that C derived the use within function bodies from FORTRAN. In other words, the use of 'static' for lifetime came from there. I think that use started at the very beginning. At least, it seems to me it did (and I've been using it since 1978.) However, the use of 'static' for scoping seems to me to have been a little later. (I really could be wrong, here.) And if so, it may have been used for reasons similar to those Stroustrup gave for some choices made in C++

-- the desire to keep the keyword list changes to a minimum, in order to avoid breaking code to the extent possible. In other words, practical considerations.

It is what it is.

Jon

Reply to
Jon Kirwan

Since Vladimir mentioned overloaded member functions I expect he means something like:

class UART {
public:
    void Timeout(int amount);   // Set UART timeout
    int  Timeout();             // Get UART timeout
};

Reply to
Dombo

Projects that start small and then grow large is a bad sign in the first place. A common cause of this sort of problem is when people take "test" or "prototype" code and designs, and try to turn it into a finished product.

It is always important to understand what you are aiming for in your code. Are you trying to write something specific for one application, or do you want it for general use? Are you trying to write something small and fast, or is that low importance for this particular piece of code? Does it need to be very portable? Does it need to be easily understood by others? You should not write code if you don't know /what/ you are targeting, and /why/ you are doing it in a particular way. Then you can (mostly!) avoid overgrowth issues.

And of course, the use of global variables is a small issue in such cases - if you make a duplicate module with find/replace, you have the same issues with global functions or structures as with global variables.

"Premature optimisation is the root of all evil". However, too many abstractions is just as bad. Pick a happy medium, suited to the code in question.

I use global variables when they are appropriate - the choice can often be because they make the code clearer (and perhaps also more portable and modular). Building fine oo interfaces and other abstractions /may/ make the code better (in the sense of making it clear, portable and modular), but it can also make it worse. If it is done as part of a consistent design pattern, it will help. If it is done simply because someone said that global variables were bad, it will make it worse.

Of course, there is no doubt that code correctness is much more important than code speed, and also that clarity of code (which heavily influences its correctness, especially during later maintenance) is generally more important than raw speed. But given the choice of writing clear, efficient code or clear, inefficient code, I know which I choose.

Reply to
David Brown

I like to think of "static" as making the object in question (function or data) a fixed global object for the lifetime of the program, with a name derived from its closest scope. Thus "static int x" local to function "foo" in file "bar.c" acts exactly as though it were a file-scope global variable called "bar_foo_x".
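In that mental model (using the hypothetical names from above), these two are equivalent apart from the scope of the name:

```c
/* bar.c (hypothetical file and names) */

/* A static local in foo()... */
int foo(void) {
    static int x = 0;   /* fixed lifetime, name scoped to foo() */
    return ++x;
}

/* ...behaves just like a file-scope static with a scope-derived name */
static int bar_foo_x = 0;

int foo_equivalent(void) {
    return ++bar_foo_x;
}
```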

Of course, that's a little simplification - "static" lets the compiler optimise much better, and you can get yourself in a real muddle trying to figure out the meaning of a static local variable in a "static inline" function included in a header.

The big issue with "static" and C is that file-scope objects should be static by default, and only made public with an explicit keyword (at the very least, an "extern" declaration).

Reply to
David Brown

I agree. Making file-scope things global by default was a mistake. The odd thing is that all of the assemblers I've used did things the "right" way and required a "global" declaration and by default things were file-scope.

--
Grant
Reply to
Grant Edwards

[...]

OK, of course, that makes more sense. I was thinking he might have some neat C way.

--

John Devereux
Reply to
John Devereux

Well, there is, in that you can put function pointers into a structure and dereference through a pointer to the structure.

From what I remember, that's one of the methods used by early C++-to-C translators...
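A minimal sketch of that technique (illustrative names, not from any real translator's output): a struct of function pointers plays the role of a C++ vtable, and each "object" carries a pointer to it.

```c
struct uart;   /* forward declaration so the ops can take a self pointer */

/* The "vtable": one function pointer per virtual method */
struct uart_ops {
    void (*put_char)(struct uart *self, char c);
    int  (*count)(struct uart *self);
};

struct uart {
    const struct uart_ops *ops;  /* vtable pointer */
    int chars_sent;              /* instance state */
};

static void hw_put_char(struct uart *self, char c) {
    (void)c;                     /* real code would write the TX register */
    self->chars_sent++;
}

static int hw_count(struct uart *self) {
    return self->chars_sent;
}

static const struct uart_ops hw_ops = { hw_put_char, hw_count };

void uart_init(struct uart *u) {
    u->ops = &hw_ops;            /* calls dispatch through the vtable */
    u->chars_sent = 0;
}
```

A call then looks like `u->ops->put_char(u, 'x');` - essentially what a virtual method call compiles down to.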

Regards,

Chris

Reply to
ChrisQ

I think the same thing applies to most languages. I guess we can blame this idiosyncrasy (or rather, idiocy) of C on the slow DECwriter keyboards - K&R were more concerned with avoiding keystrokes than with making a good structured modular programming language. The default "int" is in the same category.

Reply to
David Brown

If all you have is an int, everything looks like a hammer, etc. - or have I mixed that up a bit?

C may not be perfect, but it's kept me in lunch for more years than I care to remember. I still enjoy the challenge and am still learning new stuff every day. What more could you ask from a way of earning a living, with half the world in employment slavery?

Everything is a compromise and C was originally designed in the days before computing as it is today. Code generated from it drives a large proportion of the world's engineering, apps, medicine, leisure and more, so perhaps they didn't do such a bad job...

Regards,

Chris

Reply to
ChrisQ

There is a difference between saying the one compiler is better than the other, and saying that the second rate compiler is a poor choice. Statistically, the avr-gcc compiler is used by more than 50% of the AVR developers, and for many, "free of charge" is a much more important parameter than efficient code generation.

Others would like to have a compiler which is not dongle protected due to bad experience with the vendor, and that is another reason to go avr-gcc, instead of IAR.

BR Ulf Samuelsson

Reply to
Ulf Samuelsson


To paraphrase Churchill - C is the worst of all possible programming languages, except for all the others.

When C was designed, there were plenty of other languages that were far safer, better structured, and in many ways more powerful - Algol and Pascal being the obvious examples. There are a number of points where C could have been much better, for very little cost - avoiding implicit ints and making file-scope objects static are clear examples. The lack of a proper interface-implementation separation is perhaps the biggest failing - people /still/ can't agree on a sensible style of how to name headers and C files, and what should go in each file. I suppose C++ shows that C is not as bad as it is possible to get.

It's fair enough to say that C's limitations and design faults are because K&R were writing it for a specific use, and it worked fine for that job. And it's also fair enough to say that any design is a compromise. But when C was designed, other current languages were significantly more "modern" in their structure and safety - C was a big step back in those areas.

Reply to
David Brown

ElectronDepot website is not affiliated with any of the manufacturers or service providers discussed here. All logos and trade names are the property of their respective owners.