C++ hardware library for (small) 32-bit micro-controllers

(posted to comp.lang.c++, comp.arch.embedded, yahoo-groups/lpc2000)

I am working on a portable C++ hardware library for real-time applications on small (but still 32-bit) micro-controllers. My typical (and currently only) target is the Cortex M0 LPC1114FN28 (4k RAM, 32K Flash) using GCC. Efficiency is very important: in run-time, RAM use, and ROM use. The typical restrictions for small microcontrollers apply: no RTTI, no exceptions, and no heap (or at most just an allocate-only heap).

So far I have found nice and efficient abstractions for I/O pins and I/O ports (using static class templates), implemented those on the LPC1114, and used these abstractions to implement protocols (I2C, SPI) and interfaces for some external chips like 74HC595 and MCP23017.

Like any would-be author of a serious piece of work, I am looking for an audience to read, test, criticize (ridicule if necessary), maybe even contribute to, and eventually use my work.

To get an idea of what I want to make, a very initial version can be found at

formatting link

Any takers?

If this is all too abstract, I have a more concrete question, mainly for C++ architects. For I/O pins and ports I use class templates with (only) static functions. This matches well with reality (the hardware circuit is generally fixed), and is much more efficient than using inheritance and virtual functions.
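The static-class-template style described here can be sketched roughly as follows. Everything below is illustrative (register layout, names, and the simulated port are invented so the sketch can run on a host PC); it is not the library's actual interface:

```cpp
#include <cstdint>

// Registers of a hypothetical GPIO block. On real hardware these would be
// volatile and obtained by casting the peripheral base address, e.g.
// *reinterpret_cast< gpio_registers * >( 0x50000000 ); here it is an
// ordinary global so the sketch can run on a host PC.
struct gpio_registers {
    std::uint32_t dir;   // direction bits: 1 = output
    std::uint32_t data;  // pin level bits
};

gpio_registers fake_port{ 0, 0 };

// A pin is a *type* with only static functions: calls compile down to
// direct register accesses, with no vtable, no 'this', no RAM footprint.
template< gpio_registers &regs, unsigned pin >
struct gpio_pin {
    static void direction_set_output() { regs.dir |= ( 1u << pin ); }
    static void set( bool v ) {
        if( v ) regs.data |=  ( 1u << pin );
        else    regs.data &= ~( 1u << pin );
    }
    static bool get() { return ( regs.data >> pin ) & 1u; }
};

// The circuit is fixed, so the binding of pin to port is a compile-time
// type alias rather than a constructor argument.
using led = gpio_pin< fake_port, 7 >;
```

Code written against `led` pays nothing for the abstraction: the compiler sees the exact register and bit at every call site.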

However, I am now working on graphics for small LCD screens. At the lowest level (configuring an LCD driver for the size of the LCD and the pins used) the static-class-template approach is fine. But at some level the higher abstractions using the screen (for instance a subscreen, button, etc.) must be more dynamic and created ad-hoc. So somewhere (probably at multiple levels in the abstraction tree) I must make a transition from static class templates to the classic objects in an inheritance tree with virtual functions. Does anyone have any experience with such a dual hierarchy? One very stupid but basic problem is naming: a template and a class can't have the same name, so when the natural name would for instance be 'subframe', who gets that name: the template, the class, or neither?
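One known way out of the naming clash (a sketch only, with invented names; whether it fits this library is exactly the open question) is to give the static world and the dynamic world separate namespaces, so both layers can use the natural name:

```cpp
namespace hwcpp {              // compile-time, static world
    template< int w, int h >
    struct subframe {
        static constexpr int width  = w;
        static constexpr int height = h;
        // ... static write_pixel( x, y, v ), clear(), etc.
    };
}

namespace gui {                // run-time, polymorphic world
    struct frame {             // abstract drawing target
        virtual void write_pixel( int x, int y, bool v ) = 0;
        virtual ~frame() = default;
    };

    struct subframe : frame {  // an ad-hoc window into another frame
        frame &parent;
        int x0, y0;
        subframe( frame &p, int x, int y ) : parent( p ), x0( x ), y0( y ) {}
        void write_pixel( int x, int y, bool v ) override {
            parent.write_pixel( x0 + x, y0 + y, v );
        }
    };

    // a concrete frame for illustration: records the last pixel written
    struct recorder : frame {
        int last_x = 0, last_y = 0;
        void write_pixel( int x, int y, bool ) override { last_x = x; last_y = y; }
    };
}
```

The transition point can then be one class (on the gui side) templated on the static driver type and implementing the virtual interface.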

Wouter van Ooijen

Reply to
Wouter van Ooijen

Is it the case that the same /compiled/ software has to be used with different hardware?

If not then simply using separate compilation (sort of "indirection at compile time") should suffice for the necessary decoupling, at the cost of needing a specific build for each supported hardware mix.

You then link with the compiled classes that are specific to the relevant hardware items.

The Java approach, where the compiled code is used with various hardware, is to have an object factory and then use an object with virtual member functions. That can still be done without a heap. But (1) it implies needlessly having available all the "drivers" (whatever the name) that will not be used for this HW, and (2) chances are that you can avoid this, as outlined above.
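A minimal sketch of that factory-without-a-heap idea (the names are invented for illustration): the factory hands out statically allocated instances, so virtual dispatch is available without any dynamic allocation.

```cpp
// abstract interface with virtual member functions
struct driver {
    virtual int id() const = 0;
    virtual ~driver() = default;
};

// one concrete "driver" (illustrative only)
struct lpc1114_driver : driver {
    int id() const override { return 1114; }
};

// the factory: static storage duration instead of new/delete
driver &make_driver() {
    static lpc1114_driver d;   // constructed once, lives forever, no heap
    return d;
}
```

The cost Alf notes still applies: every driver the factory can return must be linked into the image, used or not.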

Use namespaces?

But again, you can probably avoid all that.

Cheers & hth.,

- Alf

Reply to
Alf P. Steinbach

  1. Portable hardware is a myth.
  2. Universal solutions and modular designs don't work.
  3. Trying to cover everything instead of doing a particular task is a waste of time and effort.

Useless. Abstract not the I/O operation but the function it performs.

#define RED_LED_ON SETBIT(PORTA, 7)

No problem. Everyone has been there and done that. It is just the need for discipline and formality in low-level code that you have realized.

How about life without C++ ?

Qt 5.x has a ~100 MB runtime. And it is slow, too. This is the price of GUI portability.

Accept whatever style and stick to it. It doesn't matter as long as you are consistently following your design rules.

Reply to
Vladimir Vassilevsky

In the late 80's I designed OpenUI, a cross-platform UI toolkit with an interpreter for an embedded language, and asynchronous rich messaging IO both internally and across the Internet. It ran the new international trading system of NASDAQ for ten years, initially on 486/66 PCs with 8 MB RAM running Windows 3.11. Equivalent versions ran the exact same apps on character terminals, X11/Motif, Macs, OS/2, Windows NT, etc., at large enterprises around the world, including some of the first Internet (pre web-browser) banking and share trading apps.

The entire engine fit in 1MB, even though it was written in C++.

Not looking so smart now, are you Vladimir?

A tool is only as good as the people who use it. Especially a sharp tool.

Reply to
Clifford Heath

Good for you. So what?

[...boast snipped...]

What are you arguing to? What point are you trying to make?

VLV

Reply to
Vladimir Vassilevsky

I have a lot of hardware that I can carry if I want to :)

Seriously (I am not sure you are, but I'll be), a lot of hardware-related code can be portable, except for the frustrating aspect of accessing the I/O pins (and dealing with timing).

I don't think any comment is needed.

The first part is true, the second part is nonsense; otherwise no libraries would exist or be used. Library design is the art of balancing between doing everything and doing a specific task well.

I've been there, check for instance

formatting link
It works up to a point. It runs into problems:

- when setting a pin involves more than a simple operation (BTW, PORTA hints that you use a PIC, in which case this is plain wrong due to the RMW problem!)

- when you need more than one instance of your library (like interfacing to two identical radio modules)

- it uses macros, which are evil (according to some they are THE evil)
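The RMW problem mentioned above can be demonstrated on a host with a crude model of a PIC-style port (names and behaviour invented for illustration): reading PORTA returns the *pin* levels, not the output latch, so a read-modify-write of one bit can silently rewrite a neighbouring pin whose level is dragged away from its latch value.

```cpp
#include <cstdint>

// crude host-side model of a PIC-style port
struct sim_port {
    std::uint8_t latch = 0;          // what was last written (the output latch)
    std::uint8_t pin_override = 0;   // bits forced low by an external load
    // reading the port returns the pin levels, not the latch contents
    std::uint8_t read() const {
        return latch & static_cast< std::uint8_t >( ~pin_override );
    }
    void write( std::uint8_t v ) { latch = v; }
};

// the SETBIT( PORTA, n ) pattern: read the port, OR in a bit, write it back
void setbit_rmw( sim_port &p, unsigned n ) {
    p.write( p.read() | ( 1u << n ) );
}
```

On chips with a separate latch register (LAT on later PICs) the fix is to read-modify-write the latch, never the port.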

Been there, ran up against the limitations of an assembler, even wrote a compiler. Happy with C++ now. A pity concepts did not make it (yet).

That's not the kind of GUI I am targeting. I'm mainly into microcontrollers, the thingies that count their memory in kilobytes, not megabytes.

The two styles I mention (static class templates and (traditional) classes with virtual functions) are both needed to get a balance between (code and data) size and run-time flexibility.

(to the rest of the world: sorry if I am feeding a troll)

Wouter

Reply to
Wouter van Ooijen


Presume you complete that proposal to your satisfaction.

When I start new designs, my thought processes are along the lines of:

1) what needs to be done at the hardware level, e.g. turn LED on/off, sleep, write text, draw rectangle, PWM an output to control a motor, read an ADC
2) what is the level of abstraction that I want to use in my application, e.g. setMotorSpeed, illuminateTarget, measureTemperature, displayTemperature
3) what hardware do I have to use, both individual devices and /combinations/ of devices
4) what documentation, code examples and libraries are available that I can more or less cut-and-paste into my code
5) does the library contain complex algorithms or is it merely indirect access to the peripherals

And then, most critically:

6) what has the shortest learning+implementation curve
7) when something doesn't work as expected, how easy will it be to debug

In my experience, 6&7 mean:

- for specific peripherals that have a complex interaction with the main code, or where I expect to repeat the task, /and/ there is a specific library available for that peripheral, then it is worth my spending time learning how to use it
- more frequently I'll just cut the code myself if /any/ of these are true:
  - another project would use different peripherals, or
  - the interaction with the peripheral is simple, or
  - the library doesn't support that specific peripheral, or
  - the library documentation is worse than the peripheral documentation, or
  - the library is merely an "abstract framework" in which I would have to write the low-level guff for the specific peripheral

It is /very/ difficult to pass that filter for devices that have kilobytes rather than gigabytes of memory. It can be done, e.g. a networking stack, but it isn't easy.

Reply to
Tom Gardner

Well, the obvious question is how well the same sources would compile today, using today's compilers/libraries etc.

A few orders of magnitude larger would be my guess (I don't use these compilers, so my guess is based only on the results I see in new software etc.). Would it be practical to run your sources through something like that, just for the sake of some fun?

The tool user, the tool itself, or both can be the limiting factor. I have yet to encounter someone who can match the efficiency I achieve using my own tools, for example (because of my tools, not because I am that much better, obviously).

Dimiter

------------------------------------------------------ Dimiter Popoff, TGI

formatting link

------------------------------------------------------

formatting link

Reply to
dp

Thanks; though nothing concrete, it reminds me of how a potential user probably looks at my library :)

This is a point where a common interface might be an advantage. (Simple example: an A/D output can have the same interface for an on-chip A/D, an off-chip SPI or I2C A/D, or even a PWM.)

:)

If it were easy, I guess it would already have been done a hundred times.

Wouter

Reply to
Wouter van Ooijen

Hi Tom (and Wouter),

[snips throughout]


Let's assume we actually mean "real-time" and not "real fast"...

I'm a hardware person who ends up spending the majority of my time writing code -- because I can quickly craft minimalist hardware designs (minimizing DM+DL) that often require considerable contortions to coax into working in the software! (firmware)

So, I look at a project from the top down, initially (what's likely to be *in* here?) and then jump to the bottom and start looking back *up* (how do I get to *there*?) Then, I choose the hardware that gives me the best bang for the least buck.

Often, this means using "devices" in atypical ways. E.g., using a PWM output and comparator input to build a tracking ADC when a *real* ADC isn't available (or, doesn't have sufficient resolution). So, there's usually very little AT THE HARDWARE LEVEL that I can pilfer from a library or even some other project (other than symbolic names for configuration "bits")

+42

With most embedded devices, functionality is *known* at design time. And, if resource constrained, you don't want to *waste* resources (e.g., memory, CPU) on features that you don't need.

[In some markets, "dead code" is actually a significant liability]

I rewrite my network stacks for each project. There's too much generality in a typical stack to "just port it" to another application (given the constraints above). Especially when you consider the other "support services" that can go along with it (do you really *need* a DNS client? DHCP client? etc.)

Do you need TCP? Or, just UDP? How many connections? Ah, I guess we'll need some timers for those... Do you need to support packet reassembly? Or, can your environment guarantee no fragmentation? Do you really need to support ICMP? Or, can we just ignore those niggling details?

If you want to support a variety of YET TO BE DETERMINED applications atop the network stack, then your hands are tied. OTOH, if you already *know* what's sitting up there, you can selectively whittle away the superfluous cruft and/or tweak performance accordingly.

The same mentality applies to other hardware devices. E.g., I may opt to reprogram some hardware device *in* an ISR that services that device. Or, perhaps some *other* device. Given your RT goal, I care more about how *predictable* your implementation will be than how *fast*. "Determinism". If I start talking directly to the hardware *around* your library, can I *break* it?

Imagine a hardware device with a write-only configuration register. E.g., you write configuration and read status. So, you may not be able to *read* the current configuration! (this is common)

No problem! You keep a static for each such device that tracks any changes *you* (your library) make to the configuration. So, if you have a routine that allows you to change some portion of the configuration independently, that function can peek at the "most recent configuration value written" and determine how to update it to reflect the desired changes WITHOUT DISTURBING OTHER THINGS CONTROLLED BY THAT CONFIGURATION VALUE.
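A sketch of that shadow-value technique (the register, the bit assignment, and all names are invented; the "hardware" write is recorded in a global so the sketch can run on a host):

```cpp
#include <cstdint>

// stand-in for the real, write-only hardware register
std::uint8_t last_hw_write = 0;
void hw_write( std::uint8_t v ) { last_hw_write = v; }

class uart_config {
    static inline std::uint8_t shadow = 0;   // last value written (C++17 inline)
public:
    static void write_raw( std::uint8_t v ) {
        shadow = v;        // remember it, since we can never read it back
        hw_write( v );
    }
    // change only one (invented) field, bit 2, preserving everything
    // else that was ever written
    static void set_parity_even( bool on ) {
        write_raw( on ? ( shadow | 0x04 ) : ( shadow & ~0x04 ) );
    }
};
```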

Now, assume a developer has need to *violate* your contract with him and manipulate the hardware directly. Will he *know* that you've encapsulated this "static"? Will he know to update it so that *your* functions will remain consistent with his modifications? Have you exported a method by which he *can* modify this static? Will this interface remain part of the contract forward-going?

Don't get me wrong (OP) -- everything you can encapsulate is a win! Just don't be surprised if you find yourself (or others) "unwrapping" significant parts of your work in their applications. If they can't easily work around your library when they *need* to (because you didn't anticipate some particular need of theirs), then your library will be a hindrance instead of a help.

HTH,

--don

Reply to
Don Y

Sorry, forgot to ask. The above reads as a small part of a wish list. Those items are among the things I do offer (or plan to do, or am writing). If you can think of more items, please share them! And probably the hardest part: I am especially looking for good interfaces.

For instance, my A/D interface is (omitting a few details)

template< int n_bits > struct pin_ad {

   static constexpr int ad_bits = n_bits;

   static constexpr int ad_maximum = ( 1 << n_bits ) - 1;

   static int ad_get();
};

Reply to
Wouter van Ooijen

There are at least two ways to look at this problem.

- As the library author, the big question is why would this user need to bypass the abstraction? That points to a problem in the design, or maybe in the documentation.

- For the user, using only part of an abstraction is always risky. It means that the abstraction does not fit your needs, yet you still want to use a part of it? Maybe better throw it away entirely (and tell the author why!).

IMO one way to work around this problem is to use many small, interacting abstractions with clear and simple interfaces. This enables the user to throw away the one or few he does not like, but still use the others. LEGO (at least the old style) and Meccano are to be preferred over Playmobil.
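As a tiny sketch of such a small, freely combinable abstraction (all names invented for illustration): a decorator that inverts any pin-like type can be adopted or discarded independently of everything else in the library.

```cpp
// host-side stand-in for a real output pin (illustrative only)
struct fake_pin {
    static inline bool value = false;        // C++17 inline static
    static void set( bool v ) { value = v; }
};

// a small, self-contained building block: invert whatever pin it wraps
template< typename pin >
struct invert {
    static void set( bool v ) { pin::set( !v ); }
};

// an active-low LED becomes just a composition of two pieces
using active_low_led = invert< fake_pin >;
```

A user who dislikes the decorator simply doesn't instantiate it; nothing else changes.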

Wouter

Reply to
Wouter van Ooijen

(attributions lost, not my fault)

Yes, but it's a difficult art, and too many people do it badly. I hope that was what Vladimir(?) tried to say.

I used to do it -- badly -- but nowadays I try to fit my code to the design I'm working on, in an elegant way if possible. When I've done similar things in two or three different projects, I stop to see if it makes sense to split it out into a library. At that point I have real world experience.

How this applies to you I cannot tell. Perhaps you've seen enough different hardware already so you can tell what's the common metaphor for most of it.

/Jorgen

--
  // Jorgen Grahn    O  o   .
Reply to
Jorgen Grahn

What if n_bits doesn't fit in an "int" (i.e., the data type returned by your ad_get() method)?

What if n_bits *varies* during the course of execution? Or, if "ad_maximum" is less than your computed value (presumably, you are using it for "something"?)

E.g., I frequently use integrating converters. Their advantage is that I can dynamically trade resolution for speed. It takes longer to get a "more precise" reading. But, I can do so with little extra product cost. And, get *coarse* readings in shorter times.

What if my converter technology takes a long time to come up with a result? Does your ad_get() *block* while waiting for the converter to yield its result? How do I (the developer) ensure my application isn't penalized by using your ADC interface (i.e., do I have to rewrite it so that it suspends the invoking task until the ADC result is available, thereby allowing other tasks to continue executing "while waiting")? What if I don't run an MTOS/RTOS?

I.e., has the presence of your library -- and the framework it imposes/suggests -- forced me to compromise how I would otherwise approach a problem?

Note, I'm not saying it does -- or doesn't. Rather, trying to point out how hardware variations and exploits can complicate trying to cram those abstractions into a generic wrapper. (An ADC is an ADC, right?)
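For what it's worth, one common shape for a non-blocking answer to the questions above is to split "start" from "poll" and leave the waiting policy to the caller. Everything below is invented for illustration (it is not the library's interface); the fake converter finishes after three "ticks" so the sketch runs on a host:

```cpp
// host-side stand-in for a slow converter
struct fake_adc {
    static inline int ticks = 0;                 // C++17 inline statics
    static void start_conversion() { ticks = 0; }
    static void tick() { ++ticks; }              // pretend time passes
    static bool done() { return ticks >= 3; }
    static unsigned read() { return 42; }
};

// split-phase wrapper: the caller decides whether to spin, sleep, or yield
template< typename adc >
struct adc_nonblocking {
    static void start() { adc::start_conversion(); }
    // returns true and fills 'result' only once the conversion has finished
    static bool poll( unsigned &result ) {
        if( !adc::done() ) return false;
        result = adc::read();
        return true;
    }
};
```

A blocking ad_get() can then be one trivial loop on top, while RTOS users can yield between polls instead.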

Reply to
Don Y

Most often, this would be because of efficiency. Will your library make it *easy* for the developer to know the *costs* associated with each method invocation? (remember, you spoke of portability... whose target processor? whose compiler??) OTOH, he can *write* a specific value to a specific device address and be pretty sure as to what that's going to "cost" him at run time.

Do you choose to expose the data member that holds the configuration value (in my previous example)? Or, wrap that in an accessor method?

How do you generalize that operation? set_ad_mode() to allow all the bits to be updated? alter_ad_mode() to allow some *virtualized* subset to be manipulated???

Maybe you started to use it and, later, discovered this shortcoming OF THE IMPLEMENTATION. Do you now go back and retract it from your design? Do you patch it? Or, do you perhaps not understand it well enough (hence the reason for adopting the library in the first place!) and *incorrectly* work-around its limitations??

But that assumes you have nice, cleanly partitioned bits of hardware that don't *share* any resources! I.e., each ADC has its own independent control, status and data registers. Each DMA channel, counter/timer, "MMU", etc.

In practice, bits get packed wherever the CPU designer can find some unused space in an "I/O" register. Pad and power restrictions will dictate which *combinations* of "I/O devices" and their respective capabilities can be employed at any given time ("Sorry, the ADC input is not available as that package pin is being used for outgoing serial port data").

Hardware is just too "messy" to expect it to fall into neat little, *orthogonal* arrangements -- esp as you go *down* in resource availability.

I've faced this same problem trying to "virtualize" I/O devices so "applications" can control them directly (without "privilege"). It only works on very specific processors and with very specific constraints. Each of these issues limits the applicability of a library such as yours.

Please don't get me wrong -- I am not trying to discourage or dissuade you. Rather, trying to point out that there are lots of permutations of hardware out there and trying to force them into a nice, cleanly partitioned view is likely to be disappointing.

Reply to
Don Y

Hi Dimiter,

> A tool is only as good as the people who use it. Especially a sharp tool.

I think this is one of the reasons why so many software people are also "tool designers/builders". Often, off-the-shelf tools are ineffective (or, inefficient) at addressing a particular problem.

Or, are a poor fit for how the user (developer) *wants* to apply them (this is particularly true of tools that *impose* a certain style of usage: do this, *then* do this, and, finally, do *that*).

When I was in school, I earned pin money repairing machines at an arcade (predated the "video game" revolution). One day, I was troubleshooting an old EM pin table (delightful kludges!). An "old timer" (pinball mechanic) came by and started looking over my shoulder.

At the time, I was using a VOM to check coils, relay contacts (they often get highly pitted), bulb filaments, etc. The guy asked me what the meter was, how to use it, etc. So, I showed him:

"For example, to check if this coil is 'open', I can put the leads across it and verify continuity, the approximate nominal resistance for a coil of this size, etc. (most coil problems being obvious opens)."

He reached into his pocket and pulled out a ~18 inch length of wire that looked like it was the first wire ever manufactured in the history of time! Knotted, insulation hardened and flaking off, etc.

He touched one end to the coil I had been discussing, the other end to a nearby *energized* coil (i.e., so he knew V+ was present, there), watched the coil in question pull in and pronounced, "This one's good..."

I.e., for him, the wire was a suitable tool for that job. OTOH, had he encountered a coil that was only partially pulling in, he'd be hard pressed to see the high-ohmic path *feeding* the coil (bad set of contacts upstream) or a supply that was otherwise dragged down.

[There are ways to do this with "just a wire". But, it takes more steps and a different diagnostic approach]

Fitting the tool to its user is the key. If you intend a tool to be used in a different way than the user will *want* to use it, you've got an impedance mismatch :>

Reply to
Don Y

Problems reading your own words?

You wrote: "This is the price of GUI portability". You were wrong. Your example showed the price of using Qt, a price which, incidentally, is incredibly damaging and may explain why Nokia is in decline - so I agree.

It is however not the fault of either C++ or of GUI portability, but of bad design.

Reply to
Clifford Heath

We got switched from a mild autumn into a harsh winter overnight by the end of November... It is not a worst case winter yet (-10 to -20) but way harsher than last year.

Yes, this is a major part of it. Then, when the tool/author combination gets to be around 20 years old, more effects can be observed, too. The most obvious one is that you keep developing the tools/language to suit what you need; I have been lucky enough not to have had to throw away much, if anything, written so far, so things do pile up. A perhaps less obvious one is that having to maintain/remember all the stuff you have written over the last 20 years (a few tens of megabytes of sources, about 1.5M lines (not multi-lined as C is :-) in my case) tends to keep you alert and in good shape; I am not sure this is any less important than anything else, really.

Dimiter

------------------------------------------------------ Dimiter Popoff, TGI

formatting link

------------------------------------------------------

formatting link

Reply to
dp

Question for a library implementer developing a library that will work with more than one device: "does your library expose the union or the intersection of all the devices' capabilities?"

Too many libraries don't have documentation saying which set of advantages/disadvantages they have chosen :(

Reply to
Tom Gardner
