Would the interface allow me to easily create, say, a "soft oscilloscope", i.e. the equivalent of this "hard oscilloscope":
formatting link
That includes:
- drawing a graticule
- drawing textual annotations
- drawing individual "random" pixels of different colour and brightness
- drawing soft buttons (plus notifying my code when a button is clicked)
    pio_out(RED_LED, 1);
    pio_dd(SERIAL_DATA, 0); /* bidirectional data pin */
...And so forth.
When it involves more than a simple operation, I find you end up doing things in an application-specific way anyway. To take your example of an SPI-driven I/O expander: usually I would update it periodically rather than every time the application writes a bit. Yes, your mileage may vary, but that is exactly the point; in reality you are likely to end up rewriting it each time.
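To make the "update periodically" idea concrete, here is a minimal sketch (all names hypothetical, not from any real library): the application toggles bits in a shadow byte, and a timer-driven tick flushes the whole byte to the expander in one SPI transaction, instead of one transaction per bit write.

```cpp
#include <cstdint>

// Hypothetical shadow-register scheme for an SPI-driven I/O expander.
struct SpiExpander {
    std::uint8_t shadow = 0;      // desired pin states, written by the application
    std::uint8_t last_sent = 0;   // last value actually shifted out
    int transactions = 0;         // counts SPI transfers (stand-in for real I/O)

    void write_bit(int pin, bool level) {      // cheap: no SPI traffic here
        if (level) shadow |=  static_cast<std::uint8_t>(1u << pin);
        else       shadow &= static_cast<std::uint8_t>(~(1u << pin));
    }
    void periodic_tick() {                     // called from a timer, say
        if (shadow != last_sent) {             // only touch the bus on change
            last_sent = shadow;                // would be spi_transfer(shadow) in reality
            ++transactions;
        }
    }
};
```

Three bit writes then cost one SPI transaction instead of three, which is precisely the kind of application-specific policy a generic per-bit abstraction cannot decide for you.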
[...]
I found it an interesting idea, and thanks for posting it.
But it is a bit daunting to comprehend unless one is very well versed in C++ template metaprogramming, which I am not. I am reading the latest "The C++ Programming Language" (C++11 based), so I will look again after that.
An A/D converter that returns more than 31 bits? IMHO a merger between Google and Microsoft next year is more realistic!
That aside, I think that could be solved with template specialization.
That falls under 'no abstraction can cover 100%'. A separate (upwards-compatible) abstraction could cover that, but IMO it is not worth troubling the users of the most frequent case with that option.
Note that using just a few resolutions is covered by the abstraction if the A/D converter offers the template itself instead of one instantiation.
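A sketch of what "offering the template itself" might look like (the names and selection rule are my assumptions, not the actual library): the driver is parameterised on resolution, and each instantiation picks a result type wide enough for that many bits.

```cpp
#include <cstdint>
#include <type_traits>

// Hypothetical resolution-parameterised A/D driver.
template<int Bits>
struct adc {
    static_assert(Bits >= 1 && Bits <= 32, "resolution out of range");

    // Each instantiation selects a result type wide enough for Bits bits.
    using result_type =
        std::conditional_t<(Bits <= 16), std::uint_fast16_t, std::uint_fast32_t>;

    static result_type get() {
        // Placeholder for a real conversion; returns full scale here.
        return static_cast<result_type>((std::uint64_t{1} << Bits) - 1);
    }
};
```

The user then writes `adc<12>` or `adc<24>` and never pays for a width they did not ask for.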
Yes, this one does.
No, that is part of the details I left out.
Yes, always. That cannot be avoided. But it does not force you to use the abstraction for the A/D just because you use the abstraction of, let's say, the LCD. When you want to use the hardware to its limit you will nearly always have to go 'native'. But it is not often that a design uses all its hardware to the limit.
Anyway: thanks for a few new ideas - this is the sort of discussion I was hoping for.
Forgot to add: if I use another library/API, can I predict whether your library will/won't destructively interfere with the other libraries?
Last time I used C++, which was a /long/ time ago, that was a serious concern.
Another point: will your library work with my C++ compiler? Given the evident difficulty of producing /complete/ (and hopefully correct!) C++ compilers, that's not a theoretical concern.
That's not what your code says!

    static int ad_get();

I assumed your choice of "int" -- instead of "long" -- was deliberate: to allow the library to use the preferred word size of the machine instead of forcing a 32-bit result from ad_get().
BTW, 24-bit ADCs are widely available; 32-bit devices are also available. There's a fair bit of hand-waving there, but you *can* get a 32-bit result!
I guess Google *and* Microsoft ARE merging!! :> Poor google...
And with integration, there's no theoretical limit (though there are practical ones).
But you're going to find lots of cases where your abstractions don't fit. Serial ports that have different modem control signals available (from one port to the next), serial ports that are send-only, PWM's with gating functions -- or without, dividers that have minimum and maximum values that aren't '0' and INTMAX, etc.
What if the LCD and ADC use some common piece of hardware? E.g., a simplex LCD where the backglass frequency is derived from a timer that also runs the converter?
I beg to differ. Especially on resource starved designs, the available hardware resources find themselves exploited heavily. "I can use this spare counter as an edge triggered IRQ input" "I can use this spare DIO as a flow control signal for the UART" etc.
Yeah, but you *expect* The Cold! (I actually miss it -- when I am prepared for it! -- especially the fruits)
I tend not to reuse code as much as *designs*. "How did I do this the last time" instead of "where is the *code* I wrote last time" And, more importantly, what did I *learn* the last time so that I can make improvements *this* time.
"Inertia". Keeps you from straying too far, too fast! :>
[I actually *may*, soon, have the workspaces cleared off enough to send you some pix. Of course, "cleared off" is not a stable state -- I expect things to change almost instantaneously thereafter! ]
There is very little metaprogramming involved in the basics. My interface for an open-collector/drain input/output pin is (leaving out initialization and type identification) just:
A rational decision, provided either:
1. I only need to use generic functions, not those that make full use of my peripheral's capabilities, or
2. I can "go around" or ignore your library for some subset of the peripherals (can be difficult where several peripherals are accessed via a single register), or
3. your library's algorithms are sufficiently complex that it is simple for me to ignore the library, or
4. I need to "nearly repeat" an earlier design's functions in a later design (so that I can amortise the learning curve).
To me, (3) seems the most likely to be of interest.
Drawing lines is definitely /not/ trivial, except in special circumstances (e.g. there is an n:1 mapping (n an integer) between the pixels in my application's space and the screen, and the lines are horizontal or vertical). In many cases there are unpleasant "gotchas" associated with the "corner cases" (no pun intended); ask any numerical analyst!
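One reason it is not trivial: even the classic Bresenham algorithm needs care with octants, signs, and including each endpoint exactly once. A minimal integer-only sketch (not claimed to be anyone's library code):

```cpp
#include <cstdlib>
#include <utility>
#include <vector>

// Bresenham line rasterisation, all octants, integer arithmetic only.
std::vector<std::pair<int,int>> line_points(int x0, int y0, int x1, int y1) {
    std::vector<std::pair<int,int>> pts;
    int dx =  std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;                     // error term combines both axes
    for (;;) {
        pts.emplace_back(x0, y0);
        if (x0 == x1 && y0 == y1) break;   // endpoint emitted exactly once
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
    return pts;
}
```

Note how much of the code is sign and termination handling rather than the "draw the obvious pixels" part; those are exactly the corner cases that bite.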
I didn't mean to imply that one pixel in my user space maps to one pixel on the display. In one current application I'm working on, the user-space pixels map to many screen pixels, and the user-space pixels may or may not appear square depending on the magnification. Indeed, in some cases they appear as a rectangle with a 10:1 aspect ratio :)
Um, the G in GUI means "graphics", so I'm not sure I understand you :)
Agreed, but you asked about drawing a graticule, which is (if I understand correctly) the raster of lines and then some more that make up the 'silkscreen' on the O'scope's tube.
Drawing a line should certainly be part of the interface.
Graphics = output to a screen. When you add the UI you somehow use it as input too, directly via e.g. touch, or indirectly via e.g. a mouse.
Correct, but it is not the general case since those lines are also horizontal/vertical.
The only interesting part is how to merge the colour of the graticule with the colour of the user-space pixel "behind" the graticule. Not difficult, unless the API makes it difficult.
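One simple way to do that merge is a per-channel alpha blend of the graticule colour over the user-space pixel. The 8-bit RGB format and the names here are my assumptions, purely to illustrate:

```cpp
#include <cstdint>

struct Rgb { std::uint8_t r, g, b; };

// Blend 'graticule' over 'behind' with opacity 'alpha' (0 = invisible,
// 255 = fully opaque), per channel, in integer arithmetic.
inline Rgb blend_over(Rgb graticule, Rgb behind, std::uint8_t alpha) {
    auto mix = [alpha](int top, int bot) {
        return static_cast<std::uint8_t>((top * alpha + bot * (255 - alpha)) / 255);
    };
    return { mix(graticule.r, behind.r),
             mix(graticule.g, behind.g),
             mix(graticule.b, behind.b) };
}
```

The API only has to let the drawing code *read* the pixel behind the graticule; if it forces write-only access, even this trivial blend becomes awkward.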
Agreed, but I thought you mentioned GUI. But who cares; that is a boring point and neither of us can be bothered to excavate the archaeology of this thread!
I've been using an implementation using classes with virtual functions for that, in programs from bootloaders to application programs (well, actually I did a little template magic to implement vtbls "by hand", so I can control when and how things are constructed). But effectively, I have classes

    class InputPin  { virtual int  get() = 0; };
    class OutputPin { virtual void set(int) = 0; };

and their descendants.
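For illustration, here is what one such descendant might look like (restating the two interfaces from the post; the concrete class, with its simulated memory-mapped register, is my own hypothetical example):

```cpp
#include <cstdint>

// The abstract interfaces from the post, with virtual destructors added.
class InputPin  { public: virtual int  get() = 0;    virtual ~InputPin()  = default; };
class OutputPin { public: virtual void set(int) = 0; virtual ~OutputPin() = default; };

// A hypothetical descendant reading one bit of a memory-mapped GPIO
// input register (here the register is just simulated by a pointer).
class GpioInputPin : public InputPin {
    volatile std::uint32_t* reg;  // would point at real hardware
    int bit;
public:
    GpioInputPin(volatile std::uint32_t* r, int b) : reg(r), bit(b) {}
    int get() override { return static_cast<int>((*reg >> bit) & 1u); }
};
```

Code that takes an `InputPin&` then works unchanged whether the pin is a local GPIO bit, an expander pin, or (as mentioned below) a remote procedure call into a driver; that flexibility is exactly what the per-access virtual dispatch buys.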
Your fully template-based approach looks neat and appropriate for things with as little "meat" as an I/O pin, but for more complicated things like "SPI transaction", "parallel NOR flash", "NAND flash" I'd like to know where in the object files my code ends up.
Plus, an I/O pin may end up to be more than just read-bit-from-register: for applications that occasionally read a pin, I've got an implementation of the InputPin interface that performs a remote procedure call into the driver, saving the application from having to map physical memory.
"The hardware circuit is generally fixed" is one of the biggest lies of embedded software development :-)
At least I didn't yet encounter a project where hardware assignments didn't change over time. Pins get moved, get inverted, new flash chip, etc. So it's good I'm able to adapt by changing a (runtime) initialisation.
I'm paying one virtual dispatch per access, so I wouldn't want to do bit-banged SPI or I2C with my drivers. Thank god I don't have to :-) It's probably not appropriate for 8-bitters, but it's efficient enough to be useful in production bootloaders with a few k of code.
Oh, expecting it does not make it any nicer. When we see the apples go reddish in August we are reminded that "winter is coming", you know. We both hate it; how can you possibly miss the cold :-) .
Well yes, of course the lessons learned are in one's head, so they come first; then comes the archive. Above I was referring not to recycling code I have written (I rarely if ever do that); it is more of a pileup thing. DPS has grown a lot - it has had all these windows, file systems at low latency etc. for a long time, but it keeps getting better [for example, at some point the mouse being moved over things started to have an effect (e.g. a text rectangle with the meaning of the hieroglyph you are over, shown for a few seconds); most recently those long-file-name directories I told you about, coexisting with the old ones, etc. etc.].
Hmm, I guess this is useful only up to a point :D . Once inertia becomes too dominant in what we do it gets obvious why we have the lifespan timer designed to tick inside us :D :D .