books for embedded software development

[much elided]

[more elided]

Level-sensitive interrupts could be the answer here, if you are not already using them. From memory (and it is quite a long time since I designed one in), the ADSP-21k series allow(ed) you to specify interrupts as either edge- or level-sensitive.

Reply to
RCIngham

(I realize this is just a sidebar in a much more interesting post.) The motivation would have been: you should reset the watchdog in some very high level process that truly reflects that the processing is being done. Resetting the watchdog on, say, a timer interrupt keeps the watchdog happy even if the only facilities working are the timer and the interrupts. The rest of the processing could have completely fallen off the rails, and the watchdog wouldn't reset the system.

You can see this happening on a desktop sometimes when the OS has become wedged, nothing is being processed, but the little arrow on the screen still moves when you move the mouse.

Mel.

Reply to
Mel Wilson

True. If one is stuck in an endless loop in the main process but the watchdog kick occurs in a timer interrupt then one can get hung without the watchdog reset occurring.

One way to avoid that is to have the main loop set a permissive flag and the timer interrupt test for and reset the flag and then kick the dog.

Of course, if the bit of code in the main loop that sets the flag is included in the stuck endless loop, one still ends up with a broken system. A defense against that is to use multiple flags or, perhaps, a multi-valued flag: set to 1 at the top of the main loop; somewhere inside a must-run portion, if flag is 1 then flag is 2; possibly additional if/then levels; and finally have a periodic interrupt test for the terminal value. Kick the dog only if all intermediate steps have occurred.
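A minimal sketch of that multi-stage scheme in C (kick_watchdog(), do_inputs() and do_control() are placeholder names, not from any particular part):

    #include <stdint.h>

    extern void kick_watchdog(void);   /* hypothetical hardware hook        */
    extern void do_inputs(void);       /* placeholder must-run work items   */
    extern void do_control(void);

    static volatile uint8_t wd_progress;   /* written by main loop, read in ISR */

    void main_loop(void)
    {
        for (;;) {
            wd_progress = 1;                /* top of the main loop          */
            do_inputs();
            if (wd_progress == 1)
                wd_progress = 2;            /* first must-run section passed */
            do_control();
            if (wd_progress == 2)
                wd_progress = 3;            /* terminal value: full pass     */
        }
    }

    /* Periodic timer interrupt: kick the dog only if every stage ran, in order. */
    void timer_isr(void)
    {
        if (wd_progress == 3) {
            wd_progress = 0;                /* main loop must earn the next kick */
            kick_watchdog();
        }
    }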

--
Rich Webb     Norfolk, VA
Reply to
Rich Webb

I strongly agree about the unsigned int issue. _Every_ integer I declare in C is unsigned unless I actually need a signed integer.

I find that the number of unsigned integers in my code is _vastly_ greater overall than the number of signed integers.

Personally, I think C should have made unsigned integers the default.

BTW, on a related data representation note, another thing I like to do when I build something like a state machine is to start the numbers assigned to the state symbols at value 1 instead of value 0, so that I stand a greater chance of catching uninitialised state variables.
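For example (a sketch; the state names are made up):

    /* Valid states start at 1, so a zero-initialised (or .bss resident)
       state variable is recognisably "never set up". */
    typedef enum {
        NODE_STATE_UNSET = 0,     /* catches uninitialised state variables */
        NODE_STATE_IDLE  = 1,
        NODE_STATE_BUSY,
        NODE_STATE_DONE
    } node_state_t;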

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

Hell NO.

Having been burned many times by mixed signed/unsigned arithmetic, I consider all integers signed unless they are explicitly meant to be unsigned.

Bad style. State variables should be a special class or an enumerated type.

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant

Reply to
Vladimir Vassilevsky

I'm quite certain you got that wrong. 'ar' doesn't even care about object file formats at all unless you try to use the 's' modifier. And I've been using ar with COFF format files for about a decade --- it's what the DOS port of GNU tools has been using forever.

You may need to use a target-specific build of 'ar', though. I.e. g21-whatever-ld and g21-whatever-gcc are meant to go with g21-whatever-ar.

Reply to
Hans-Bernhard Bröker

Is that more to do with how C handles signed/unsigned type conversions, or some issue around signed type conversions in general?

I think I know where you are coming from; I've seen reports of various type conversion issues in C which surprised me, but given the type of things I use C for (mainly low level work), I have not yet been caught by them.

Still, I know this is an issue that people have differing opinions on for various reasons, and I realise that not everyone prefers unsigned integers.

Sorry, bad wording. They are in an enumerated type; it's just that I set the first state symbol in the type to start at one instead of zero.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

I start at 0, and make that the initial state.

Reply to
Arlet Ottens

I try to declare types for damn near every (ahem) "type" of datum I use. I never use bare "int"s -- "signed" or "unsigned" is explicit in each typedef.

There are times when I need a signed datum. But, there are also lots of times when I *don't*. The problem lies in using a data type that makes "troublesome" values possible.

I also try to use control structures that make certain aspects of the data "more obvious". E.g., if "count" is expected to be a nonzero value, you are MORE likely to find:

ASSERT(count > 0);
do {
    // some stuff
} while (--count);

instead of an equivalent for()/while() loop. (i.e., I've just disciplined myself to see this mechanism as "you ARE going to do this, at least ONCE", whereas a for/while loop forces me to think about whether count *can* be zero going into the loop.)

I think C should have required the signed/unsigned syntax. Why are ints different than chars (also an integer data type)?

I tend to build state machines by tabulating the state tables:

state_t states[] =
{
    ...
    &some_state,
    &someother_state,
    &yet_another_state,
    ...
};

#define NUMBER_OF_STATES (sizeof(states)/sizeof(states[0]))

Then, explicitly setting the initial state to correspond with . state_t is then more of an enumeration. I do this because I support "special next_state encodings" like SAME_STATE, "POP_STATE", etc.

Reply to
Don Y

High level but low(er) priority. I.e., you want that process to run when you *know* the higher level process that is "really running the machine" has been able to get its work done. I.e., having the highest priority process perform this duty just substitutes a jiffy-driven task for "the jiffy ISR"!

:-/

(figuring out where to stroke the watchdog in any particular system can be tricky -- you want it to indicate that the system *appears* to be working without forcing it to have a special role)

Reply to
Don Y

Hi Alessandro,

[I think I may have to split this into two replies due to its length :< ]

Bootstrap resides in ROM? And, copies the loader into RAM for execution? (Or, is loader also executed from ROM?) And, once the desired executable image is "loaded", the space consumed by the loader can be released for other use?

(i.e., why is the loader still resident after the main application has started?)

But, is it in the loader *acting* like it should when normally entering the loader? Or, does it look like it "jumped" into the loader at some random spot? I.e., is the loader waiting to be commanded to load an image?

Could the code have executed past the end of physical memory? Could a return address on the stack have been corrupted? (Or, stack protocol violated so you're "returning" to a *data* value instead of an address)

Can you modify the loader so that it verifies certain things that it *expects* to be true when it is initially CORRECTLY invoked and then see if the bogus loader invocation can detect that these "things" aren't as they should be? I.e., let the loader notice that it is executing when it shouldn't be and take some remedial action.

I don't know how critical your timing is -- how tolerant you would be of ISRs. When it comes to serial ports, I usually implement an Rx interrupt -- so that I don't have to worry about getting around to polling the receiver often enough to avoid losing a character. (this depends on what sort of incoming traffic you have to accommodate and how quickly you can get around the jobstack).

The Rx ISR pulls the data out of the receiver AND the "status" (error flags) and pushes this tuple onto a FIFO.

[This assumes I need an 8-bit clean interface and can't unilaterally decide how to handle errors *in* the ISR. E.g., if the application layer wants to see the errors and possibly reconfigure the interface like autobauding. If I only need a 7 bit channel, I can cheat and pass 0x00-0x7F values to the FIFO whenever there are NO errors. And, a 0x00-0x7F value followed by a 0x80-0xFF value that represents the error codes associated with the previous datum. This allows the FIFO to appear larger when there are no errors.]

If the FIFO ever fills, I push a flag into the FIFO that indicates "FIFO overrun" -- much like the "receiver overrun" error flag. This lets the application determine where gaps are in the data stream. It also lets me see if I've got my FIFO sized well.

In other words, I have preserved all the information in the FIFO so it just gives me some temporal leeway in *when* I "process" the incoming data.
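A sketch of that Rx side, assuming hypothetical register and FIFO helper names (UART_DATA, UART_STATUS, fifo_put()); the point is only the (data, status) tuple and the synthetic overrun flag:

    #include <stdint.h>
    #include <stdbool.h>

    extern volatile uint8_t UART_DATA;     /* hypothetical receiver data register    */
    extern volatile uint8_t UART_STATUS;   /* framing/parity/overrun flags           */
    extern bool fifo_put(uint8_t data, uint8_t status);  /* returns false when full  */

    #define STATUS_SW_OVERRUN 0x80         /* synthetic "the FIFO itself overflowed" */

    static bool overrun_pending;

    void uart_rx_isr(void)
    {
        uint8_t status = UART_STATUS;      /* grab error flags with the datum...     */
        uint8_t data   = UART_DATA;        /* ...reading data usually clears the IRQ */

        if (overrun_pending && fifo_put(0, STATUS_SW_OVERRUN))
            overrun_pending = false;       /* record where the gap in the stream is  */

        if (overrun_pending || !fifo_put(data, status))
            overrun_pending = true;        /* no room: this datum is lost, flag it   */
    }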

If you have to support pacing, this gets marginally more complicated. But, the point is to avoid doing anything other than keeping the *link* running. Any processing (analysis) of the data is done elsewhere.

If I have lightweight event support, I signal a "data received" event so any tasks waiting on incoming data are awakened (made ready).

[You can also put a tiny FSM in the Rx ISR so that the event is only signalled if it *needs* to be signalled -- because the ISR is in the "event not yet signalled" state. This lets you trim that system call from the ISR]

There is more flexibility in how you handle the Tx side. At one extreme, you can poll the transmitter and pass one character at a time out whenever you have time to check it.

I usually have an FSM in the Tx ISR. When there is nothing to transmit, the Tx ISR sits idle -- with the Tx interrupt disabled. Whenever anyone wants to send data, the data is appended to an outgoing FIFO. Again, if I need an 8-bit clean channel *and* I need support for BREAK generation, pacing, etc., then I pass a tuple that tells the ISR what needs to be done.

A task monitors this FIFO and, if anything is present in it, "primes" the transmitter with the first character and enables the ISR. Thereafter, each IRQ pulls a character from the FIFO and passes it to the transmitter. When the Tx FIFO is found to be empty, the ISR disables itself and signals an event to alert the "primer" task.

This allows the transmitter to run "flat out" when necessary. And, it is easy to throttle the transmitter without making big changes in the code (e.g., force the ISR to shut itself off after each character so that the primer has to restart it).
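A matching Tx sketch (again with made-up helper names): the "primer" runs at task level, the ISR drains the FIFO and switches itself off when it runs dry:

    #include <stdint.h>
    #include <stdbool.h>

    extern volatile uint8_t UART_TXDATA;   /* hypothetical transmit data register  */
    extern void tx_irq_enable(void);
    extern void tx_irq_disable(void);
    extern bool txfifo_get(uint8_t *c);    /* returns false when the FIFO is empty */
    extern void signal_event(int event);   /* wakes the "primer" task              */

    #define EVT_TX_IDLE 1

    static volatile bool tx_running;

    /* Task level: call whenever data has been appended to the Tx FIFO. */
    void tx_prime(void)
    {
        uint8_t c;
        if (!tx_running && txfifo_get(&c)) {
            tx_running  = true;
            UART_TXDATA = c;               /* prime the transmitter...     */
            tx_irq_enable();               /* ...and let the ISR take over */
        }
    }

    /* Tx-empty interrupt: one character per IRQ until the FIFO drains. */
    void uart_tx_isr(void)
    {
        uint8_t c;
        if (txfifo_get(&c)) {
            UART_TXDATA = c;
        } else {
            tx_irq_disable();              /* nothing left: go idle        */
            tx_running = false;
            signal_event(EVT_TX_IDLE);     /* tell the primer we stopped   */
        }
    }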

Coming from a (discrete) hardware background, I am a *huge* fan of FSM's (and synchronous logic)! You can implement FSMs in a variety of ways with varying resource consequences.

I like FSMs to be driven from above. I.e., there is a state machine *mechanism* which receives "events" and uses "transition tables" to determine how each event is handled, the state into which the machine should progress when processing that event *and* how to process it (i.e., "transition/action routine").

For a user interface, these state tables might look like:

State NumericEntry {
    On '0' thru '9'  go to NumericEntry  using AccumulateDigit()
    On BACKSPACE     go to NumericEntry  using ClearAccumulator()
    On ENTER         go to PopState      using AcceptValue()
    Otherwise        go to SameState     using DiscardEvent()
}

AccumulateDigit(event_t *digit)
{
    accumulator = (10 * accumulator) + *(int *)digit;
    acknowledge_event(digit);
}

etc.

[This should be fairly obvious. The only tricks being the SameState keyword (which says "stay in whatever state you are in currently") and PopState keyword (which pulls a state identifier off of the "FSM stack" and uses that as the next state -- this allows a set of state tables to be used as a subroutine)]

This sort of table can reduce to as few as 4 bytes per entry. If you eliminate support for the "thru" feature (i.e., force an entry for *every* possible event), then you can cut that back even further.

And, since the processing is trivial, all of the entries can be examined very quickly (you would put the most frequently encountered events in the earlier entries). In ASM, this can be as fast as greased lightning (i.e., *in* an ISR).
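One possible C rendering of such a table-driven interpreter (a sketch only, not necessarily the representation described above; every name here is invented):

    #include <stdint.h>

    #define SAME_STATE 0xFF                /* stay in the current state          */
    #define POP_STATE  0xFE                /* next state comes off the FSM stack */

    typedef struct {
        uint8_t first, last;               /* event range: implements "thru"     */
        uint8_t next_state;
        void  (*action)(void *event);      /* transition/action routine          */
    } transition_t;

    typedef struct {
        const transition_t *entry;         /* most frequent events listed first  */
        uint8_t             count;
    } state_table_t;

    extern const state_table_t state_table[];   /* one table per state           */

    static uint8_t current_state;
    static uint8_t state_stack[8];
    static uint8_t state_sp;

    /* The FSM "mechanism": look the event up in the current state's table,
       run its action routine, then step to the next state. */
    void fsm_dispatch(uint8_t event_code, void *event)
    {
        const state_table_t *t = &state_table[current_state];

        for (uint8_t i = 0; i < t->count; i++) {
            const transition_t *tr = &t->entry[i];
            if (event_code >= tr->first && event_code <= tr->last) {
                tr->action(event);
                if (tr->next_state == POP_STATE)
                    current_state = state_stack[--state_sp];
                else if (tr->next_state != SAME_STATE)
                    current_state = tr->next_state;
                return;
            }
        }
        /* No entry matched: the "Otherwise ... DiscardEvent()" case. */
    }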

Note that this example is treating keystrokes (conveniently named with the character codes they generate!) as "events". I could modify this:

State NumericEntry {
    On '0' thru '9'  go to NumericEntry  using AccumulateDigit()
    On BACKSPACE     go to NumericEntry  using ClearAccumulator()
    On ENTER         go to PopState      using AcceptValue()
    On BarcodeRead   go to PopState      using ReadBarcode()
    Otherwise        go to SameState     using DiscardEvent()
}

ReadBarcode(event_t *event)
{
    accumulator = get_barcode();   // overwrite any keystrokes
    acknowledge_event(event);
}

etc.

Note that each transition/action routine (function) explicitly acknowledges each "event" that it "processes". This allows a transition routine to *propagate* an event - by not acknowledging it so that it remains pending.

But, more importantly, it provides a means by which the FSM can signal the "event GENERATOR" that it has been processed. This gives you flexibility in how you implement those events. E.g., an event can be as simple as:

    event_t ThisHoldsTheEventsForTheFSM;

Anything wanting to signal an event can spin on this variable and, once verified to be "NO_EVENT", feel free to stuff *their* event code into it. Then, keep spinning waiting for the value to *change* (back to NO_EVENT *or* some OTHER event!) which signals that the event has been recognized and acted upon. (e.g., when BarcodeRead has been acknowledged, the barcode reader could re-enable *itself* -- so the application doesn't have to be aware of this detail/requirement)

[you can also prepare lists of event_t's that the FSM scans so that each event generator has its own event_t through which it signals the FSM]
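A sketch of that handshake (assuming event_t is a small integer and NO_EVENT is zero; the names are illustrative, not a fixed API):

    typedef volatile unsigned char event_t;   /* one "mailbox" per event source */

    #define NO_EVENT     0
    #define BARCODE_READ 3                    /* example event code (made up)   */

    event_t ThisHoldsTheEventsForTheFSM = NO_EVENT;

    /* Generator side: post an event, then wait until the FSM has acted on it
       (the slot changes back to NO_EVENT -- or to some OTHER event). */
    void post_event(event_t *slot, unsigned char code)
    {
        while (*slot != NO_EVENT)
            ;                                 /* a previous event is still pending */
        *slot = code;
        while (*slot == code)
            ;                                 /* recognised and acted upon         */
    }

    /* FSM side: acknowledging an event is just clearing the slot. */
    void acknowledge_event(event_t *slot)
    {
        *slot = NO_EVENT;
    }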

Since an event can come from *anywhere*, you can provide:

UART_Receiver.c     event_t UART_Receive;   // fed by Rx ISR
UART_Transmitter.c  event_t UART_Transmit;  // fed by Tx ISR
CCD_Handler.c       event_t CCD_Ready;      // fed by CCD driver
etc.

and these can feed one big FSM -- or three different ones (all "driven" by the same FSM interpreter).

Of course, you can also have:

    event_t UserInterfaceBusy;   // signalled by UI FSM

through which one FSM (i.e., the one running the UserInterface) can pass events to another FSM! (Similarly, you can also use an event_t to allow a transition/action routine to pass "information" back to the FSM that invoked it! This allows you to keep decision making in the "state tables" while moving the mechanisms for detecting criteria into the action routines.) E.g.:

    event_t feedback = numeric_value_entered_was_zero;

The advantage of this sort of approach is that the action routines are simple functions. They don't have to worry about changing states, etc. The "machine" is very obvious in the representation of the transition tables!

OK. But that should be something you can grep from the sources.

I tend to like skinny interfaces. Give me a few *flexible* commands that I can use to convey what I want -- even if it requires the issuing of several commands to get things done. This makes it easier to test the interface completely.

So, the FPGA is a sequencer, of sorts, that "drives" the CCD. And the register governs the operation/options for the FPGA. (?)

I don't understand where the 1920 limit comes from? That suggests a 19200 baud rate. Where does the 256 byte limit come into play?

I'm not sure I understand your explanation. But, it seems to still boil down to bandwidth?

Yes. This is how my black boxes work. Hint: either provide a streamlined "bbprintf()" (Black Box printf) *or* discipline yourselves to only use inexpensive conversion operators. E.g., use octal or hex instead of decimal. Likewise, dump floats as hex values. You can write a post processor to examine those values at your leisure. The logging can then incur very little RUN-TIME processing overhead.
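For example, a float can be logged as its raw bit pattern and decoded later on the host (a sketch; bbprintf() stands in for whatever the black box logger actually provides):

    #include <stdint.h>
    #include <string.h>

    extern void bbprintf(const char *fmt, ...);   /* hypothetical black-box logger */

    void bb_log_float(const char *tag, float f)
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);           /* grab the raw IEEE-754 bits    */
        bbprintf("%s %08lx\n", tag, (unsigned long)bits);  /* cheap hex, no FP formatting */
    }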

I like to instrument my tasks with a stderr wired to a communication channel in my development library. So, each task can spit out progress messages that are automatically prefaced by the name of the emitting task:

motorstart:  building acceleration profile
uarthandler: initializing UART
uarthandler: allocating input FIFO
motorstart:  initializing runtime statistics
uarthandler: allocating output FIFO
motorstart:  ready to command

During development, I route these messages to my console so I can see where my code "is" without having to break and inspect it. This is especially useful as I can tell when all the initialization cruft is out of the way and the code should be "active".

Yikes!

Bad specs are worse than no specs.

Why invest the time in writing them if you aren't going to write *good*/effective ones? It's like folk who purchase "backup devices" (tape, disk, etc.) but never USE them (cause they are too slow, tedious, etc.)

Sit them around a table and tell them they are responsible for debugging the code written by the person to their LEFT! :>

Suddenly they have an interest in how *that* person writes their code!

I never fret "spacing" issues -- indentation, where braces are placed, etc. -- as those can be handled by a pretty printer. But, things like how identifiers are chosen should be done by some sort of consensus. SpeedMax vs MaxSpeed vs max_speed vs. ...

What is usually most important is how data flows through a program. Communication often defines performance. Needless copies, extra synchronization, etc. can quickly exceed the processing costs. I let the data paths define the structure of my tasks, etc.

You can implement VERY lightweight executives that still give you a basic framework to drape your code over. The problem with trying to do things as one procedure is that you, then, have to make and carry around mechanisms to give you any concurrency that you might need. That's why things end up migrating into ISRs -- because it represents an existing mechanism that can be exploited "alongside" your "regular code".

Depending on how "rich" your environment is, you can probably implement a non-preemptive multitasking framework with very little effort. This also has the advantage of allowing you to carefully sidestep a lot of synchronization issues: you have control over all of your critical regions! If you don't want to deal with the possibility of another task "interrupting" what should be an atomic operation then don't *let* it!
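A sketch of how small such an executive can be (the task names are placeholders): each "task" is a run-to-completion function, so nothing at task level can interrupt anything else at task level:

    typedef void (*task_fn)(void);

    extern void uart_task(void);     /* hypothetical run-to-completion tasks */
    extern void motor_task(void);
    extern void ui_task(void);

    static const task_fn tasks[] = { uart_task, motor_task, ui_task };

    /* Non-preemptive round-robin: critical regions against other tasks
       come for free, because a task only yields by returning. */
    void executive(void)
    {
        for (;;) {
            for (unsigned i = 0; i < sizeof tasks / sizeof tasks[0]; i++)
                tasks[i]();
        }
    }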

Coding in a HLL complicates this from the resource perspective. Mainly because of the pushdown stack and knowing how much is committed at any given point in time (where a reschedule() might happen).

This is because the code can crash -- yet the ISR will (usually) still run! So, your application has died (needing the watchdog reset) but your ISR is happily stroking it to keep it from doing its job. I.e., code is stuck in an idiot loop executing NOOPs... ISR comes along, watchdog is stroked... code resumes its idiot loop...

Whether or not you *fix* it, I think it is important that you identify the issue(s) that are preventing it from operating as intended. Those issues can reappear elsewhere, later, and cause you much pain.

If you also make the data dependencies obvious, it can give you an idea of where you can speed things up. E.g., if you are pulling data off of the CCD at , you might be able to simultaneously process some previously extracted data and cut down the total time required (by utilizing "waiting time" as "processing time")

You share when there *isn't* enough available! :> I've worked on resource starved designs where individual *bits* were reused based on what the device was doing at the time. (it is not fun)

In the event of a crash (or something "unintended"). E.g., if the watchdog ever kicks in, you can examine the contents of those logs *before* overwriting them (like a crash dump) to determine why the watchdog wasn't stroked properly.

Emulators are cheap (relatively speaking). Of course, you might not be able to FIND an emulator for a particular processor! I started my career burning 1702's, plugging them into a prototype and *hoping*. Even with multiple sets of "blank spares", this limited me to ~4 turns of the crank in a typical day. Emulation, symbolic debugging, profiling, etc. are well worth the equipment costs!

Even without an emulator, you can often make small changes to the hardware to greatly increase your productivity. Being able to quickly update "ROM images" saves lots of time. Whether it is FLASH or static/BB RAM... as long as it can be erased/overwritten quickly and reliably in situ (add a write protect switch).

You can also write a simple debugger to reside in the image. This can allow you to examine and modify the state of the processor and its processes. If you have a multitasking environment, that debugger can allow you to debug a process while others continue executing normally. (if processes have interlocking semaphores/etc., then stalling the one process will cause the dependent processes to stall accordingly)

Even "crude" tools like this can give you significant advantages. E.g., look into supporting the remote stub for gdb if that fits your envirnoment.

Reply to
Don Y

They aren't in my view (at least in C). As chars are assumed, like ints, to be signed unless declared otherwise, my mental model of a C bare char declaration is an 8-bit signed integer, so my comments also apply to char declarations.

BTW, I like the idea of requiring signed/unsigned. It would reduce the chance of a lazy programmer just declaring everything as implicitly signed because they could not be bothered to type "unsigned". :-)

OTOH, I'm the kind of person who likes Ada's strict type system so I accept I'm probably not the typical C programmer. :-)

Interesting.

The state machine was just one example. What I was trying to say is that in anything in which I need to maintain state in C, whether it be a main loop state machine or a tree-like data structure with nodes in various states of processing, the variable I use to maintain state information is an enum with its first valid state symbol set to one.

The idea is that since the state variable is likely to start out as zero (.bss or zeroed allocated memory), I'm more likely to catch coding errors (I don't assume I'm a perfect programmer :-)), rather than just assume some node has been correctly set up in its initial state.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

The signedness of bare chars is implementation-defined, presumably to allow the compiler to pick whatever is easier to implement on the target.

Reply to
Arlet Ottens

Hi Simon,

Ah, but the signedness of chars is defined by implementation -- hence my comment. In other words, if you are going to let the default vary for chars -- thereby requiring me to explicitly indicate a char's signedness -- then why doesn't it, also, vary for *real* ints??

IMO, if I discipline myself to type "(un)signed", then I have to *think* about that for at least a moment: "What do I really want this to be?". Ditto short/long/long long/etc.

Understood.

Note that some implementations fail to clear that memory on startup! I.e., "0x27" is just as likely to be an uninitialized value.

I like to initialize values when they are declared. Sometimes, even deliberately picking a value that I *know* is incorrect (0 in your example).

Reply to
Don Y

Interesting.

That's something I'd forgotten about as I always declare them in full. Many years ago, before I started doing that, it seemed that every compiler I used treated a bare char declaration as signed.

Is there literature somewhere online which compares what the current major compilers on various architectures/operating systems do about implementation-defined issues like this? I had a look before posting, but I couldn't find anything.

BTW, that's a major argument for the position that the programmer should be required to explicitly declare the variable as signed or unsigned.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

For most folks, signedness of chars isn't an issue -- *if* you are using them as "characters". E.g., 7b ASCII doesn't care. Even 8b encodings still let you do many simple arithmetic operations on chars without concern for sign. For example, '2' + 1 is '3', regardless of encoding.

However, the ordering of arbitrary characters can vary based on implementation.

Where signedness is a real issue is when you try to use chars as "really small ints" -- short shorts! There, it is best to define a type to make this usage more visible (e.g., "small_counter") and, in that typedef, you can make the signedness explicit.
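E.g. (a sketch; only small_counter is named above, small_offset is invented here for contrast):

    typedef unsigned char small_counter;   /* explicitly unsigned, 0..255     */
    typedef signed char   small_offset;    /* explicitly signed, -128..+127   */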

Reply to
Don Y

And, since I declare all variables in full, I had forgotten about that little implementation defined feature as it's something I don't have to tackle these days. :-)

Yes, I'm aware it's a possibility. On bare board embedded code, I now write my own startup code (I don't use vendor code any more) which does do this.

In normal operating systems, I make sure any dynamically allocated memory is zeroed in my code.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980s technology to a 21st century world
Reply to
Simon Clubley

Such implementations would have to be classified as spectacularly broken. You would need a pretty strong excuse for using any such toolchain despite such failures.

No. It's possible, but nowhere near as likely.

You can make some implementations not initialize the .bss region (or meddle with the startup to that end), but the ones where that happens by accident are certainly _far_ outnumbered by the standard-conforming ones.

Reply to
Hans-Bernhard Bröker

I *deliberately* remove that activity from crt0.s. I want to see an explicit initialization for each variable.

(this is c.a.e not desktop.applications.runonce)

Consider what happens when you restart a task. Those variables don't get reinitialized. So, the second time a process runs, it uses whatever bit patterns happen to reside in those memory locations.

"Gee, it worked LAST TIME; why is it broken, now?"

(of course, if you have a really flush OS that can load processes and initialize their environments (like a desktop) then you don't face this problem)

So, if you are going to explicitly initialize all variables and structs, it's wasted effort to also do it on startup.

(I like being able to examine memory contents on startup "before" initialization in post-mortem analysis... it doesn't do me much good to have memory cleared before any useful code can examine it!)

Reply to
Don Y

In that case it's not "the implementation" that fails to clear that memory. It's you. There's a difference.

I consider that remark deliberately misleading. Embedded vs. desktop has nothing to do with this.

Since when did embedded become synonymous with "wasteful"? Static variable initialization at startup, and particularly the part of it commonly known as "bss", is an optimization strategy. You disable that at a cost to code size and start-up speed which can be significant. You'll be calling gazillions of individual memcpy() and memset() equivalents, where a single one of each might have sufficed.

Consider what happens if you don't.

And if you initialize them all explicitly, even though re-initialization wasn't actually needed, you'll generate considerably more code for that particular job than if you had allowed the linker to help you with it.
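The linker-assisted version boils down to one bulk copy and one bulk clear (a sketch; the section symbol names vary between toolchains and are assumptions here):

    #include <string.h>

    /* Symbols typically provided by the linker script (names differ per toolchain). */
    extern char __data_load[];                   /* initial values, in ROM/flash */
    extern char __data_start[], __data_end[];    /* .data, in RAM                */
    extern char __bss_start[],  __bss_end[];     /* .bss,  in RAM                */

    extern int main(void);

    void crt0_init(void)
    {
        memcpy(__data_start, __data_load, (size_t)(__data_end - __data_start));  /* one copy  */
        memset(__bss_start, 0, (size_t)(__bss_end - __bss_start));               /* one clear */
        main();
    }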

Reply to
Hans-Bernhard Bröker
