I can hack my way around AVRs OK - but I realise my limitations... I normally program in perl on larger computers, so wasting cycles has never been something I've worried about(!). Nor do I normally do timing-sensitive stuff where timings are under 0.1s.
Can anyone recommend a good book that would cover things like:
1) Compare-nybble (ie is there a cleverer, more efficient way than AND-mask and compare-word?)
2) Bitstream input techniques - eg efficient sampling of either a self clocking serial stream or a timing critical one (eg 1-wire) where the timings are so small (10's of uS) that one cannot really afford to waste instructions on a 20MIPS device. Use of interrupts and timers...
3) Cool ways to semi-virtualise a timer - eg ideally you'd like 10 hardware timers but you have 2.
4) Bomb proof "boot sector" and live firmware (flash) update tricks.
5) Efficient algorithms for certain maths operations, eg integer square root (as has been mentioned recently, if not here, on a USENET group I'm subscribed to).
6) Keypad debounce.
7) Serial comms (SPI, I2C, 1-wire, RS485, roll-yer-own-with-a-GPIO-pin)
8) Working with an OS, eg FreeRTOS.
And lots more in the same vein...
Although I'm most likely to use AVRs or maybe PIC24s, the book doesn't have to be that specific - just "how to hack around in 8 bits". Pretty much a "Knuth for uControllers".
Many thanks in advance :)
Managers, politicians and environmentalists: Nature's carbon buffer.
You don't need a book for this. First do what is the clearest and easiest to understand and maintain. Then, if and only if that approach really is time constrained does it become appropriate to look for alternate algorithms, micro-optimization, or dropping into assembler.
The usual way is with a peripheral, ideally one that's integrated into the silicon. Serial UART (and variations on that theme), SPI, I2C, and CAN are common. The Dallas 1-Wire might be; I've never needed to look into it. If you need, e.g., just one more serial input then it's not that hard to whiz one up with a timer that has a capture pin. Details differ but most microcontrollers will have app notes for things like this.
volatile unsigned long Tick;
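Fleshed out slightly, that one declaration can back any number of soft timers by storing deadlines and comparing them against it. This is only a sketch; everything beyond Tick itself (soft_timer, timer_isr, and so on) is a made-up name for illustration:

```c
#include <stdbool.h>

volatile unsigned long Tick;          /* incremented once per jiffy by the timer ISR */

/* Called from the periodic hardware timer interrupt. */
void timer_isr(void) { Tick++; }

/* A "virtual" timer is merely a deadline expressed in ticks. */
typedef struct { unsigned long deadline; } soft_timer;

void soft_timer_start(soft_timer *t, unsigned long ticks) {
    t->deadline = Tick + ticks;
}

/* The signed subtraction makes the test immune to Tick wrapping around. */
bool soft_timer_expired(const soft_timer *t) {
    return (long)(Tick - t->deadline) >= 0;
}
```

Polling soft_timer_expired() from the main loop gives you as many timers as you have deadline variables, all off one hardware timer.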
Vendor-specific. See their app notes.
"Math Toolkit for Real Time programming" Jack Crenshaw, ISBN 1929629095.
Rich Webb wibbled on Friday 05 February 2010 14:15
Yes - I'm happy to use what comes with the chip (usually RS232 and SPI), but as this is hobby-grade stuff, I'm trying to avoid too many special purpose chips, especially if they aren't available in DIL format. As perverse as it sounds, it would be preferable for me to dedicate another 8 pin ATTiny to an awkward bit banging job than to use a special purpose peripheral, if only because the domain of datasheets I have to read remains smaller :)
 Cheap as chips and I know how it works.
But ideally, if I get a good mental model of handling timers and servicing multiple tasks efficiently, there's probably no good reason why I can't do 1-wire and RF comms on the same CPU.
 OK, I'll concede I will probably use a more intelligent chip for that (eg CC2500) rather than bit wibbling a dumb transceiver.
OK - thanks for all that.
I'll look at that on Amazon.
Absolutely brilliant - that's the sort of stuff I'm after. Brilliant algorithm near the end - totally non intuitive but simple implementation.
Many thanks for your thoughts.
The list of micro-optimizations is limitless. :> Some idioms are easily expressed in HLL's while others are best left to ASM implementations (e.g., compare nybble's could exploit a "swap nybble" instruction on some processors).
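In portable C the nybble-compare idiom is just masking; whether the compiler turns the high-nybble test into a SWAP-plus-compare is up to it. A small sketch (function names invented for illustration):

```c
#include <stdint.h>

/* Compare only the low nybble of 'byte' against the low nybble of 'value'. */
static inline int low_nybble_is(uint8_t byte, uint8_t value) {
    return (byte & 0x0F) == (value & 0x0F);
}

/* Compare only the high nybble of 'byte' against the low nybble of 'value'.
 * On a processor with a swap-nybble instruction, a good compiler may emit
 * exactly that for the shift. */
static inline int high_nybble_is(uint8_t byte, uint8_t value) {
    return (byte & 0xF0) == (uint8_t)(value << 4);
}
```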
Usually, these optimizations are only essential in ISR's or in very tight loops where they can materially contribute to overall performance. E.g., "find rightmost set bit".
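E.g., "find rightmost set bit" has a classic two's-complement one-liner:

```c
#include <stdint.h>

/* x & -x isolates the lowest set bit of x (returns 0 for x == 0). */
static inline uint8_t rightmost_set_bit(uint8_t x) {
    return (uint8_t)(x & (uint8_t)(-x));
}
```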
Look for changes in choice of *algorithm* to give you the most joy.
If events are 10's of usec's (uS are micro Siemens :> ) apart, chances are, you won't be using an ISR to look at them (unless you have a very fast context switch *and* not much else happening in the processor :> ). I had an (ancient) barcode reader application with IRQ's at 75usec and it would visibly slow the system down when you swiped a barcode (though this was on a CPU with 1/10th the horsepower you're mentioning)
Most vendors of these sorts of devices give pointers to how they expect you to implement the interfaces to their devices. Some with dedicated silicon (that they will sell you) or with just an app note illustrating a particular approach.
Ah, perhaps you are misunderstanding how (hardware) timers are typically used! :>
Most "soft" timing requirements are handled by what you are calling "virtual timers". E.g., if you want to "wait" for 5 seconds, you don't "waste" a hardware timer on that. Or, even 0.5 seconds.
Instead, you typically implement a system timer (based on *some* sort of "reliable" timebase -- most often a hardware timer, though not necessarily) that maintains a "system time" and "timing service". "System time" can also drive "time of day" time (i.e., "calendar time") but it need not. E.g., your washing machine doesn't care (well, *need* not care!) what time of day it is; though it does need to know how much time is *passing* as it performs its wash cycle (e.g., agitate for 5 minutes, then spin for 3, etc.).
A timing service allows your code to reference absolute and relative times (wrt its own "sense of time"; e.g., absolute time 12345 means something only in the sense of when your device considers "time 0" to have occurred).
This is usually closely related to the type of OS, if any, that you implement. E.g., a single threaded system has a lot easier time dealing with things since it *knows* it is the only active object and can be aware of everything that *it* has done. OTOH, a single threaded system has to do everything for itself!
In multithreaded systems, one typically sets aside a thread (which might run completely in an ISR) that manages "time". "Tasks" (processes, threads, etc. -- depends on the distinction your "OS" places on "active objects"; e.g., classic UNIX had process = thread. More modern systems treat processes as "resource containers" and threads as "active objects") issue requests for timing services (pause(), wait_until(), etc.) which are then hooked to the timing service for completion.
One of the most trivial ways to implement such a service is to set aside a "variable" to contain the "time remaining" on that particular timer (e.g., you can implement an array of timers or scatter timers throughout memory -- as long as the timing service knows how to get to them). Then, each time the jiffy (periodic timer interrupt) comes around, "decrement" *each* timer, clamping the result to "0".
This appears wasteful but can actually be very efficient (depending on the number of timers vs threads, etc.).
One advantage to this approach is that the responsibility for resuming the thread at the end of the time interval (*or*, WHATEVER OTHER ACTIVITY is associated with timer expiration!) can be "outsourced" instead of requiring the timing service to perform this activity on the threads' behalf. E.g., a thread can sit and *watch* its timer. So, pause(time) can be:

set timerX = time
while (timerX > 0) yield()
Looks pretty lame :> But, in reality, the cost of such a spin-wait falls through the cracks as you are just testing a group of contiguous RAM locations for a nonzero value. E.g., even an 8 bit processor can do this quickly by or-ing all of the bytes together and checking for non-zero result.
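The decrement-and-clamp jiffy, plus the OR-the-bytes "anything still running?" check, might look something like this (the array size and names are arbitrary, just for illustration):

```c
#include <stdint.h>

#define NUM_TIMERS 6

/* One byte per soft timer; the jiffy ISR decrements each, clamping at 0. */
volatile uint8_t timers[NUM_TIMERS];

/* Called from the periodic timer interrupt ("jiffy"). */
void jiffy_isr(void) {
    for (uint8_t i = 0; i < NUM_TIMERS; i++)
        if (timers[i] != 0)
            timers[i]--;
}

/* OR all the bytes together: a nonzero result means at least one timer
 * is still running -- cheap even on an 8-bit machine. */
uint8_t any_timer_running(void) {
    uint8_t acc = 0;
    for (uint8_t i = 0; i < NUM_TIMERS; i++)
        acc |= timers[i];
    return acc;
}
```

A waiting thread just spins on its own timers[] slot (or on any_timer_running() for a group).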
What isn't readily apparent is how versatile this can be! E.g., the same mechanism can just as easily pace something like an LED blinker as resume a waiting thread.
Google is your friend. Start with Knuth just because he teaches you a good way of thinking about costs and how to look for places that can benefit from optimization (vs. wasting your time saving a clock cycle "here" and three more "there"). My single most favorite class in school was "Introduction to Algorithms (6.033?)" (Saltzer?). I don't think any of the algorithms covered would be considered "introductory" material. But, it showed just how many clever ways there are of looking at a problem differently and how big the gains can be! I am amazed at how often I go looking for my class notes as I will vaguely recall some clever trick that I can apply to a problem at hand -- *if* I convince myself not to look at it the "obvious" way.
Argh! There are myriad ways of doing this (software and hardware). And, many different ways to "signal" the "key" event. E.g., do you signal the key *after* it has been debounced? Or, on initial closure and debounce thereafter? Do you signal on the downstroke or the release? Are you dealing with a key matrix or a single contact closure? What does your hardware interface to the key look like?
Google manufacturers for app notes on all of the above. Note that there are subtle variations on several of these ("SPI", EIA485, etc.). Even EIA232 has many bastardized forms (each with costs, benefits and risks).
That will depend on the OS itself. Note that people tend to play fast-and-loose with OS terms -- especially "RTOS". If you aren't familiar with the subtleties of what distinguishes an RTOS from an MTOS from a "big loop of code", then you are best served doing the academic research before wasting time on the details of a particular "xxOS" (e.g., some so-called RTOS's don't have truly deterministic behavior for all of the services they provide; some only provide simple scheduling options; some fail to implement hooks for priority inversion; etc.).
And, of course, beware the naive assumption that "real-time" means "real fast"! :>
If you can find a copy of the (large) black applications handbook that motogorilla issued for the 6800 (1980-ish?), you would probably get a much better feel for how to think about microcontrollers. In that era, you were dealing with a fraction of a (VAX) MIPS so you truly saw the cost of each instruction you executed.
(sigh) I have tutorials that I have written re: several of these subjects but they are on a machine that has been disassembled (migrating from one machine to another has got to be the geek equivalent of "giving dry birth" :< ) so they aren't easily available. But, I can offer answers to specific questions you might have. I know I have a note on timing services, another describing switch debounce techniques (hardware and software), another on "resource starved" multitasking, one on barcode decoding, etc.
Maybe, when I retire, I will have time to post them someplace! :>
Despite your comments above, your questions below tell you and us that you _do_ have to worry about cycles. It's just that you want to have someone tell you how to not worry about them, once again. In practice, there are often parts of an application where it is important and parts where it is far, far less important. You are encountering this reality.
Maybe we should say you are entering "The Twilight Zone." ;)
Um.. Not likely all in one book.
In terms of c, sounds like all you want are some idioms to use. But why worry about this? Either you are doing this for a LOT of short fields and need the extra boost in performance, in which case it may be better to write a routine in assembly, or else this is a one-time affair and your earlier comment about not worrying about wasting cycles seems to enter back in. So I am not sure why you need to care that much about this. Some c compilers will do better than others, here. Normally, you let the c compiler authors worry about these details. They generally do it well enough for most one-time type uses -- especially in the general situation you earlier described. Which makes me think you are trying to do this fast, perhaps to handle a serial stream of data that is arriving at a fair clip?
You didn't mention the shift operator or the % operator or using bit fields and unions. Have you played with those, yet? Are you familiar enough with looking at the assembly output of your c compiler to know which works well and which does not? (Bit fields aren't always portable, but I assume you are using a single c compiler and stuck on the AVRs, so it may not be an issue for you.)
Almost by your own definition here, it's likely you won't be using c for this for the "timings are so small" part of this question, right? Assembly, yes? Regards self-clocking streams, it is application specific and the best place to go for the answer here will be manuals/whitepapers that talk about the particular method in detail. If the stream is slow enough for c, you just need to follow the description well in writing your code. It's not about some special idiomatic c expression that a general book could fairly discuss.
I assume by this you'd like software routines to execute at certain times you can set and that these times are slow enough that virtualizing the timer works well for you in terms of latency and variability in that latency.
On this point, I could talk at length. However, I'll recommend one very good, old book which has an excellent chapter on the topic of delta queues which solve this problem with very little code and very repeatable, reliable performance. Douglas Comer's first book on XINU.
And if you'd like, I can send a file or two that show one style of implementation details to supplement what you might find in that book.
I think it is "cool."
Normally, I think you need to provide some care in defining what is meant by "bomb proof" and "firmware update." For example, a firmware update might be to a specific driver placed in a specific location. Making that bomb proof might mean that it must be movable. Or not.
I guess you are looking for a book that provides a variety of techniques that might be used, with descriptions of how they are implemented, things to watch out for in doing so, what benefits and downsides they each may have, etc., so that you can get a comprehensive view and make choices here and there, from time to time. If that is the case, I don't know where to go and would ask that if _you_ find such a book to let me know about it, too! ;)
One of the best places to go for things like this are integer DSPs. For example, Analog Devices printed an excellent book on the general topic regarding their ADSP-21xx processors. I think some of it (or all of it) may be available online, too.
The pair of books I'm thinking of here are: "DIGITAL SIGNAL PROCESSING IN VLSI" and "DIGITAL SIGNAL PROCESSING APPLICATIONS: Using the ADSP-2100 Family." The latter one, for example, covers a host of fixed point and floating point arithmetic operations, function approximations, and so on. In enough detail to implement on other processor families. I've ported the ideas from time to time, so I know this is quite true.
Another excellent reference, of another nature, is one of the incarnations of "Numerical Recipes." If you don't have it, get that, too.
If you aren't fully sharpened on your own algebra skills, get a good algebra book and work on that. If you don't have at least a 1st year's understanding of calculus, focus on that, as well, and get a good book on calculus. My own preference would be that you go further and complete at least some of a 1st/2nd year's diff-eq. If you have a community college available, that would be an excellent resource to take advantage of sooner than later.
It goes a LONG way to be able to actually _read_ with _understanding_ what you see so that you can modify/tailor it to your specific needs. Otherwise, you are just blind.
Just to provide an example that you bring up, the integer square root question was answered about the way I might have by Hans-Bernhard Bröker. When I first faced the need for one of these on my own, I didn't go to a book at all. Instead, I just remembered the hand-method I'd been trained to use in high school (or was it slightly earlier?) and implemented it with an algorithm. Worked perfectly well, once I nailed the rounding issue correctly. (That was the only detail that didn't get nailed down on the first try because I didn't think about it before writing the first code sample.) So training helps you when all else fails you.
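For illustration, one C rendering of that hand method (the binary digit-by-digit version, not the exact code I wrote back then) might be:

```c
#include <stdint.h>

/* Binary digit-by-digit square root -- essentially the school "hand"
 * long-division method carried out in base 2.  Returns floor(sqrt(x)). */
uint16_t isqrt32(uint32_t x) {
    uint32_t root = 0;
    uint32_t bit = 1UL << 30;     /* highest power of four that fits in 32 bits */

    while (bit > x)               /* find the first candidate digit */
        bit >>= 2;

    while (bit != 0) {
        if (x >= root + bit) {    /* does this digit belong in the result? */
            x -= root + bit;
            root = (root >> 1) + bit;
        } else {
            root >>= 1;
        }
        bit >>= 2;
    }
    return (uint16_t)root;
}
```

Note this truncates; rounding to nearest (the detail I fumbled) needs one extra comparison against the remainder.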
You _must_ be able to find unending sources on this topic. For example, most everyone points to Jack Ganssle's "A Guide to Debouncing" which is often named "debouncing.pdf" as a place to start on the topic. He provides a nice survey.
Once you land on something you like to use, you will probably stick with it for most uses because you'll understand it well. And besides, I don't see people caring that much to gaining a comprehensive view on it, anyway. Many find "something that works" and stop dead in their tracks after that. I think there is more to learn and it is fun to continue that education. But most seem to disagree with me and stop as soon as they find something that works for them and they feel they understand.
I use the following logic, applied at an interrupt interval of 8ms:

IF current <> previous THEN
    state = 1
ELSEIF state = 1 THEN
    state = 0
ELSE
    debounced = current

In other words, the above logic executes once each 8ms.
With a little thought to the above logic, you should be able to see that the resulting state is always the value that results from XORing the current and previous values. An XOR operation is usually pretty efficient. Also, note that the debounced value is changed only when the prior state is 0 and the current and previous values are the same. This results in the following sequential steps:
1: previous = current
2: current = READ PORT PIN(S)
3: temp = (previous XOR current) OR state
4: debounced = (debounced AND temp) OR (current AND NOT temp)
5: state = previous XOR current
This logic then requires the current condition of a switch to remain at the same level for three observations before the debounced value is updated.
It's not just a suggestion that this method can be applied to 8 switches at a time as easily as it is to one, when the switches are lined up in a single port byte. If you want this to be efficient and usable for more than one switch it helps to place them on a single port.
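A sketch of those five steps in C, one switch per bit of a port byte; the struct and function names are mine, and the raw sample is passed in rather than read from a real port:

```c
#include <stdint.h>

typedef struct {
    uint8_t previous;   /* previous raw sample */
    uint8_t state;      /* XOR of the last two samples */
    uint8_t debounced;  /* debounced output bits */
} debounce_t;

/* Call once per sample interval (e.g. every 8 ms) with the raw pin byte.
 * Each output bit changes only after three consistent observations. */
void debounce_step(debounce_t *d, uint8_t current) {
    uint8_t temp;

    temp = (uint8_t)((d->previous ^ current) | d->state);       /* step 3 */
    d->debounced = (uint8_t)((d->debounced & temp) |
                             (current & (uint8_t)~temp));        /* step 4 */
    d->state = (uint8_t)(d->previous ^ current);                 /* step 5 */
    d->previous = current;                        /* steps 1-2, for next call */
}
```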
??? I tend to go to the standards docs for some of these, or the datasheets for others. What are you really wanting here? A "complete skillset" stuffed into your brain?
3rd year CS classes get into general ideas and force you to write test programs to analyze _some_ of them. I write my own and don't use any commercial or free system others write because my needs are specific enough to require my having a general skill that is deduced to specific cases. I gather you want to simply use something as a drop-in and use it reasonably well. For that, I'd probably go to (and hope for) the good documentation often provided by those writing (and using) what you are using.
Oh. I see. If you find Knuth for Micros, let me know. Actually, I was kind of thinking of recommending you read Knuth until you said this. I guess you already have that much and want to see such a tour de force done anew.
If you think about what happened with Knuth, he took on a HUGE project and stated that there would be (if I recall) four more books. In fact, I think he titled them back at the start and listed what he expected to produce. However, in doing those first three, which took years, he just stopped. It was that huge. Then he decided he needed to take on typesetting so that he could get back to writing. And that one he expected to see last only a few years. Instead, he took a decade and wrote a nice 5-volume set on typesetting and then went on to develop a toolset (TeX.) I know he is working on some of what he'd intended 30 years ago, now, but I've no idea if he will ever finish!!
And you want someone to take that on, now??? Most sane people would look at Knuth and say... "Uh, no. I have a life. I think."
But yes, if you find the new Knuth please let me know!!
Yep. The "NR in C" code _must_ be the daftest collection of C code anyone ever dared to put in print, with the possible exception of H. Schildt's oeuvre.
Maybe the only responsible way of recommending the NR books is to tell people to get an edition of it for a programming language they're _not_ going to be using --- and this being c.a.embedded, FORTRAN should fit that bill nicely ;-)
That way they won't feel tempted to actually use the code, and just apply the knowledge instead.
Hehe. Well, the very first edition _was_ in FORTRAN. So I never did have the problem of deciding whether or not to copy out the code into a c compiler's input! I did buy the c version later, so I have two editions on the shelf. One with a yellow dust jacket and the other with a red dust jacket. But I don't use them for their source code. :)
Jon Kirwan wibbled on Friday 05 February 2010 21:08
Sorry - badly phrased on my part. I meant, "I never had to care before", "now I do" :)
Well, it was worth the question.
Bursty is probably nearer the mark. I would like to experiment with RF links - to this end I have a choice of Xbee (20 quid and does all the hard stuff) or (eg) CC2500 which does all of the Layer 2 stuff I want, but I would implement some additional layers on top. The point in question is whether it is better to pack certain data (eg sub-protocol type) into a few bits for bandwidth efficiency, or just use a full 8 bits wastefully because it's cheap to process.
Good point about C. I use GCC for AVR work and I have made a point of looking at the assembler listings so I can see what it's doing at various optimisation levels.
Yes. I was tossing up whether to go for AVR or PIC24. In the end, the choice of opensource linux hosted compilers for PIC seems rather less ubiquitous than AVR and some of the nice devices don't seem to have any support. So I'll stick with AVRs - nice fairly regular little machines and GCC seems to work well with them. Don't have any problems popping some inline assembler in where needed.
Most likely. There are some very useful 1-wire temperature sensors I've tried (using someone else's example code). Some of the timings are horrendously short (order of 15-60 uS IIRC). Though again, it's bursty - I generally don't want to hold a continuous conversation with one.
Then I must read that. I'd been considering delta queues (but realising there may be other solutions). The general idea I had was to run the 16 bit timer at something like CLK/16; task would be inserted into the queue with a delta-tick until execution and if the task was at the head of the ordered queue, then that would set the timer's comparator to trip at the time required, firing an interrupt. The ISR and queue popping and task calling should be O(1) so that tasks being executed have reasonably constant latency prior to execution of the "useful" code, but task insertion (done at the end if the task needs to reschedule itself) need not be O(1) (accepting we will need to shuffle the queue or some other technique for possible mid queue insertion).
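Roughly, the insert/tick pair I have in mind would look something like the following (names invented, and locking around the list omitted; each node stores ticks *relative to the node ahead of it*, so the ISR only ever touches the head):

```c
#include <stddef.h>

typedef struct dq_node {
    unsigned delta;               /* ticks after the previous node fires */
    void (*task)(void);           /* what to run on expiry */
    struct dq_node *next;
} dq_node;

static dq_node *dq_head = NULL;

/* Demo tasks, purely for illustration: they just count invocations. */
static int fired_a, fired_b;
static void task_a(void) { fired_a++; }
static void task_b(void) { fired_b++; }

/* Insert a task to fire 'ticks' from now; O(n), done outside the ISR. */
void dq_insert(dq_node *node, unsigned ticks, void (*task)(void)) {
    dq_node **pp = &dq_head;
    node->task = task;
    while (*pp != NULL && (*pp)->delta <= ticks) {
        ticks -= (*pp)->delta;    /* walk past earlier deadlines */
        pp = &(*pp)->next;
    }
    node->delta = ticks;
    node->next = *pp;
    if (*pp != NULL)
        (*pp)->delta -= ticks;    /* keep the follower's delta relative */
    *pp = node;
}

/* Called on each timer event: O(1) unless several tasks expire together. */
void dq_tick(void) {
    if (dq_head == NULL)
        return;
    if (dq_head->delta > 0)
        dq_head->delta--;
    while (dq_head != NULL && dq_head->delta == 0) {
        dq_node *n = dq_head;
        dq_head = n->next;
        n->task();
    }
}
```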
FreeRTOS looks great as a way of managing heavy tasks and coroutines, but tasks are expensive (lots of stack operations for the context switch) and the timing control is milliseconds not microseconds granularity - great for human interface stuff, less useful for frantic bitbanging.
Something like 1-wire really just wants to schedule a task for executing in dT=few-tens of uS, but the task is trivial - check a port pin and push the result somewhere, then add a task for checking the pin again at a variable time in the future depending on protocol state. I think this would be analogous to a linux bottom-end driver? Interpreting the data collected would be more suited to a non ISR task and could happen non critically after many bytes had been stored (top end driver).
OK - I could use a timer directly, but there are other tasks I may need a timer for so I thought it an interesting exercise to see if I could come up with a general solution that was efficient enough to run perhaps half a dozen different tasks in ISR space using only one hardware timer.
Having had a look at the FreeRTOS code, I could probably write a driver for it that used a "virtualised" timer rather than a real one. The two subsystems would complement each other quite nicely.
That would be very kind - if you have the time. My email addy above is valid.
Yes. AVRs have (at the higher end) loads of flash and a fair amount of RAM, though not lots. So probably the most obvious solution is an invariant boot block and 2 flash sections - that's certainly a technique I've seen used by Extreme for their network switches. But again, I wonder if I'm missing ideas due to a lack of general knowledge.
If only they did that in school now! I know what you mean though - back in 1988, I did an exercise on a 68000 to produce a sine output on a DAC. The expected solution was a lookup table. I did it with Newton-Raphson in integer arithmetic and it worked really well :)
No, not really. It was more of an understanding of sampling and time methods that were efficient and reliable should I need to implement those in software. OK - there's usually always a UART, so 232/485 are solved. I2C isn't so bad because there's an explicit clock that can be tied to an IO interrupt. I guess the real problems are quite a small set of self clocking or fully time based signals.
Sounds like Russell and Whitehead when they wrote Principia Mathematica - with the idea being to formalise all of maths and logic starting with proper axioms and solid derivations. "1+1=2" featured somewhere in Volume 2! By the end of Volume 3, they'd got into calculus. Unfortunately, (I believe the story goes) they were so fried, they didn't get to realise their dream of continuing the process and deriving as yet unknown mathematical methods.
Thanks for all your extensive comments - I'll take up those leads as well as the others put forward.
D Yuniskis wibbled on Friday 05 February 2010 16:53
ISRs - yes, that's where I'm looking to optimise. I'd noticed the swap-nybble instruction. Couldn't see a use for it. Given it's there, it must be useful: therefore I'm ignorant ;-> Hence the desire to do some general reading.
Good point. GCC and ISRs on AVRs *seem* to have a fairly efficient context switch but I'll have to verify that in this case.
Thanks for all that - It makes sense.
I think I do :) AVRs (Megas anyway) are pretty flash heavy and RAM light - unless one needs a lot of stored const data, which can eat flash fast.
I'll look out for that :)
Thanks again for all the good info.
If you find yourself doing "painful" things, ask yourself if there is a *reason* that you must do those things "that way". Most often, you can change how things work so that they better fit "what's easier".
My ISRs are like greased lightning. :> But, I design the hardware with the software in mind. And, push a lot of work out of the ISR onto higher layers where "time" isn't as important. E.g., I aim to reenable interrupts real soon *inside* (certain) ISRs and just burden the code with the task of determining if there was an "overrun" (rather than risk missing a masked ISR).
When you are writing the ISR, ask yourself, "Do I *have to* do this here?" Usually, you can preprocess data for an ISR (or post process it *from* an ISR) to make the ISR much leaner.
There are *lots* of ways to handle time in a processor. My point was not to be suckered into a "classic" approach with its attendant costs unless you *know* that solution is right for you. Look at the sorts of times you need to measure (mostly delays and timeouts). If they are short, then why burden yourself with some heavy timer notion (32 bit timers where the top 20 bits are almost always "0"). Likewise, think about the frequency of your jiffy and see if you can decrease it (longer period). This cuts down on interrupt overhead (proportionately) as well as makes timers -- and any other time_t-ish things -- derived from it "narrower" (smaller).
You might be able to move down to a smaller device. Or, pick up some extra integrated peripherals as that portion of the die that would otherwise be used for FLASH can now be used for something "more productive".
I *think* it is called "M6800 Microprocessor Applications Manual". It's an ~8.5x11" format, about 2 inches thick. Black cover with orange (red?) and white on it. (sorry, I can't recall what's white or orange but those are the colors that stick in my mind). If pushed hard, I could go dig through boxes out in the store room but I would *really* hate doing that :>
I've been trying to scan most documents that I want to preserve just to cut down on decades of databooks and appnotes but it's really hard to bring myself to destroy (which is essentially what has to be done in order to effectively scan such a title) classic books like that until I absolutely must. :< Too bad their original authors didn't opt to preserve the original "sources" from which the titles were created!
When dealing with bursty data like that, the usual answer is to use buffers and separate the code into two parts -- the interrupt code that handles the hardware interface and stuffs (or retrieves from) a buffer and then a high level side that allows your regular code to call it to fetch (or put) data. A full duplex serial port driver, for example, would include two interrupt handling fragments and two high level fragments, with two buffers to mediate between them. It isn't hard to write the high level side in c code, most of the time. (Since it doesn't have hard timing issues to cope with, except possibly with "volatile" flags or pointers that may also be modified in the interrupt code.) The interrupt code may very well be written in assembly. Or assembly with a c wrapper around it, I suppose.
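A bare-bones sketch of that split with a single-producer/single-consumer ring buffer in between (the size and names are arbitrary, and the actual interrupt entry/vector plumbing is elided):

```c
#include <stdint.h>
#include <stdbool.h>

#define BUF_SIZE 32   /* power of two so the index wrap is a cheap mask */

/* The receive ISR is the only writer of 'head' and the mainline code the
 * only writer of 'tail', so no locking is needed beyond 'volatile'. */
static volatile uint8_t buf[BUF_SIZE];
static volatile uint8_t head, tail;

/* Interrupt side: called with the freshly read byte from the hardware. */
void rx_isr(uint8_t byte) {
    uint8_t next = (uint8_t)((head + 1) & (BUF_SIZE - 1));
    if (next != tail) {           /* drop the byte if the buffer is full */
        buf[head] = byte;
        head = next;
    }
}

/* High-level side: fetch one byte if available. */
bool rx_get(uint8_t *out) {
    if (tail == head)
        return false;             /* empty */
    *out = buf[tail];
    tail = (uint8_t)((tail + 1) & (BUF_SIZE - 1));
    return true;
}
```

The transmit direction is the mirror image: mainline stuffs the buffer, the transmit-ready interrupt drains it.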
This is _very_ good practice.
Well, take a shot at them then. See what results.
Actually, I agree with you on this point. AVRs do seem to get more _readily_ available gcc support. I believe that some of the Microchip c compilers are now based upon GNU's gcc tools, but that the libraries are Microchip IP. Or something like that. Which makes it a bit more complex to siphon over and cobble up a working system, as others may not work quite as hard to smooth over the path and document it for you as they might with an AVR.
That can be handled with bit banging in an assembly routine you call. Or, if you can wrangle the timings with a timer you have available, you _could_ consider a state machine approach driven off of the timer interrupt and otherwise basically hidden from the rest of your code. Again, if you do the state machine approach, you will likely have an upper level routine to stuff/fetch pieces in/out of buffers and just let the state machine flip around its states as it goes. The interrupt driven state machine may be written in assembly if your timing requires it or in c if things are more flexible.
The state machine is actually very easy to design as you go. Just look at the timing diagram and write states up. It's not totally cut and dried, as you need to think in terms of organizing just a little bit. But it is largely cookie cutter and I usually get them working first time out from a timing diagram, if it is well documented. The state machine will be somewhat harder to document inside the code, unless you take the time to provide an ASCII layout of the timing diagram to help explain it. Without the timing diagram, others may have trouble following whatever you write.
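As a concrete (if simplified) illustration: a timer-driven receive state machine that assembles one byte from a start-bit / 8-data-bits / stop-bit stream, assuming the timer has already been arranged to fire once per bit, roughly mid-bit. The names and the stream format are mine, just to show the shape:

```c
#include <stdint.h>
#include <stdbool.h>

typedef enum { RX_IDLE, RX_DATA, RX_STOP } rx_state_t;

static rx_state_t state = RX_IDLE;
static uint8_t shift, bit_count, rx_byte;
static bool rx_ready;

/* Call from the timer interrupt with the current level of the input pin. */
void rx_tick(bool pin) {
    switch (state) {
    case RX_IDLE:
        if (!pin) {               /* start bit: line pulled low */
            shift = 0;
            bit_count = 0;
            state = RX_DATA;
        }
        break;
    case RX_DATA:
        shift >>= 1;              /* LSB arrives first */
        if (pin)
            shift |= 0x80;
        if (++bit_count == 8)
            state = RX_STOP;
        break;
    case RX_STOP:
        if (pin) {                /* valid stop bit: publish the byte */
            rx_byte = shift;
            rx_ready = true;
        }
        state = RX_IDLE;          /* a framing error just drops the byte */
        break;
    }
}
```

The upper-level code just watches rx_ready and pulls rx_byte; the states track the timing diagram one transition at a time.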
Insertion done at the end of some code that doesn't have a fixed timing to its execution does not result in repeatable timing. Insertion done as the first bit of code, especially if that code executes completely before the next timer event happens, does yield repeatable results. Pay the re-insertion cost between timer events, if possible. And since the firing of the code should be nearly synchronous with the last timer event, the first few lines are a good place to struggle for that.
If you don't need all that stuff, don't burden your project. It may leave you having to study stuff you don't care about merely because you have to be aware of it to turn it off or otherwise avoid some issue there. If all you need is a delta queue, they aren't hard to code.
Something like all that makes sense, if I'm reading correctly.
A question you need to answer is what is the longest timer interval you can live with, consistent with the resolution you also may need. Also, when implementing delta queues in the past and where I had a dedicated timer for them, I shut them off when the last task is removed and turn them back on when the first task is inserted. You don't want to burden the cpu with aimless timer events that serve no purpose, even if the test for "nothing to do" is quickly performed.
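For what it's worth, a bare-bones delta queue sketch (the naming is mine, and the timer on/off calls would go where the comments indicate). Each entry stores only its distance from the entry in front of it, which is what keeps the per-tick work tiny:

```c
#include <stddef.h>
#include <stdint.h>

/* Each entry stores ticks *relative to its predecessor*, so the timer
   interrupt only ever decrements the head's count. */
struct dq_task {
    struct dq_task *next;
    uint16_t delta;               /* ticks after the entry in front */
    void (*fn)(void);             /* what to run when it expires */
};

static struct dq_task *dq_head;

/* Insert a task to expire 'ticks' from now.  (When the queue was empty,
   this is also where you'd turn the dedicated timer back on.) */
void dq_insert(struct dq_task *t, uint16_t ticks)
{
    struct dq_task **pp = &dq_head;
    while (*pp && (*pp)->delta <= ticks) {
        ticks -= (*pp)->delta;    /* walk, spending ticks on predecessors */
        pp = &(*pp)->next;
    }
    t->delta = ticks;
    t->next = *pp;
    if (t->next)
        t->next->delta -= ticks;  /* successor is now relative to t */
    *pp = t;
}

/* Call once per tick from the timer interrupt...  (and if it leaves the
   queue empty, shut the timer off rather than take aimless events). */
void dq_tick(void)
{
    if (dq_head && dq_head->delta)
        dq_head->delta--;
}

/* ...then pop expired tasks until NULL, running each one's fn. */
struct dq_task *dq_pop_expired(void)
{
    if (dq_head && dq_head->delta == 0) {
        struct dq_task *t = dq_head;
        dq_head = t->next;
        return t;
    }
    return NULL;
}
```

The interrupt body is then just dq_tick() followed by a loop popping expired tasks, so the "nothing to do" case really is one compare.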
We talked about debouncing, but the timing is, say, 8 ms per sample. I like rock-solid keyboard sampling, but since sampling edges may be usefully allowed to be a little sloppy without folks noticing, you could write it something like this. (Assume your basic timer interval is 20 microseconds, just to pick a number.)
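Roughly the shape I mean, with made-up names and the raw port read stubbed behind a variable so it can be tried on a host. The fast tick interrupt nearly always just bumps a counter; only every 400th time (8 ms at 20 us per tick) does it actually sample and debounce:

```c
#include <stdint.h>

#define TICKS_PER_SAMPLE 400   /* 400 * 20 us = one sample every 8 ms */
#define STABLE_SAMPLES   4     /* 4 samples * 8 ms = 32 ms to accept */

volatile uint8_t debounced;    /* last accepted key state */

uint8_t raw_key;               /* stand-in for the raw port read */

static uint16_t tick_count;
static uint8_t  candidate;
static uint8_t  stable_count;

/* Call from the 20 us timer interrupt. */
void key_tick(void)
{
    if (++tick_count < TICKS_PER_SAMPLE)
        return;                        /* not time to sample yet */
    tick_count = 0;

    if (raw_key != candidate) {
        candidate = raw_key;           /* input moved: restart the count */
        stable_count = 0;
    } else if (stable_count < STABLE_SAMPLES) {
        if (++stable_count == STABLE_SAMPLES)
            debounced = candidate;     /* held long enough: latch it */
    }
}
```

The sampling-edge slop shows up as at most one 8 ms sample of jitter on when a press is recognized, which nobody at a keypad will ever feel.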
Something like that. You could wrap up 'debounced' with some code to access it, instead of putting the variable itself in the program-global namespace.
I'll do that. Give me a little time to isolate the pieces.
I don't have comprehensive knowledge in this area, mostly just some random ideas here and there. I'm probably missing ideas, too. Which is what makes doing something like that fun.
It's my belief that not enough is required of CS students in the math area. The EE and CE folks get perhaps just a little too much math for a CS student. But the CS departments I'm aware of don't require enough, in my opinion.
Well, it works and latches states pretty reliably.
Now on this score, I don't mind just sitting down and working through solutions as I find them. That will mean doing some searches to help ensure I'm not forgetting something important or to provide me with robust ideas. But this is an area where I don't focus on acquiring broad knowledge without a specific application in mind. Other areas I do push for developing my skill sets. In this area, I'd let the application push them.
The one exception to that is open collector/drain shared lines where _I_ get to decide what I like doing. I wrote myself a paper a while back and use that when I need to rethink a new design. So working out a general approach in this area does help.
And I think Dedekind and Weierstrass's calculus basis is Rube Goldberg compared to the simplicity of accepting infinitesimals. Abraham Robinson's hyperreals have helped bring 'sense' back where it was sorely lacking, a century later, but not without some mental cost and incompleteness.
No problem. And I'll get some code shoveled off to you.
You maybe should look into the event system in the AVR XMEGA series. Toggling an I/O pin could cause an event; the event could trigger a capture, which causes the DMA to write the capture value to SRAM.
An AVR32, with its capability to write to I/O once per CPU cycle, has some interesting capability here.
Low-cycle interrupt: 6-12 cycles of overhead. The interrupt routine would read one prepared dword from SRAM, write it to the I/O port in 1 cycle (forcing a pin to toggle wherever the corresponding bit is "1"), prepare the next dword, and return from interrupt in a few cycles. Should give you at least us resolution on 32 outputs.
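A host-side model of that interrupt body, with register names omitted: on the real part the write would go to the port's toggle register, which flips each pin whose mask bit is 1, so the xor below stands in for what the hardware does.

```c
#include <stdint.h>

#define NPATTERNS 8

static uint32_t pattern[NPATTERNS];   /* toggle masks prepared in SRAM */
static uint32_t next_idx;             /* index of the next prepared dword */
static uint32_t port_out;             /* models the 32 output pins */

/* The whole per-interrupt job: one read, one write, bump the index.
   On hardware the xor is done by the toggle register itself, so the
   routine stays a handful of cycles. */
void toggle_isr(void)
{
    port_out ^= pattern[next_idx];    /* bit set to 1 => that pin flips */
    next_idx = (next_idx + 1) % NPATTERNS;
}
```

All the real work (deciding which pins flip when) happens outside the interrupt, when the pattern table is filled in.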
Ulf Samuelsson wibbled on Saturday 06 February 2010 23:29
The Xmega certainly looks interesting - just skimmed one of the datasheets. The DMA and the event system do look cute.
I'd ignored those devices due to being SMT only but maybe I should bite the bullet, get a new iron and "upgrade" - most of the hand soldering howtos imply the coarser pitched leaded devices aren't too bad to do.
Even the finer pitched footprints, like a 0.5 mm TQFP, are quite achievable by hand, as long as they have legs. It does take some magnification to inspect for bridges, and some (lots of!) flux and wick to de-bridge. The bastards like LGA or BGA with the pads only underneath the package are the ones that get tricky.