Mealy vs. Moore FSM

That is exactly true for software. From the perspective of the algorithm, there cannot be a change in inputs while the code is processing. The inputs have been read and handed to the code. At that point they are all stable, as if they were registered.

I think you are abstracting too much. Most of the code I write doesn't have "events". The inputs are typically sampled and captured for the code to process much like registered logic.

They are only bad if your interrupt response time is not adequate. I've seen code written that locks out interrupts for many milliseconds. It worked fine because other interrupts could wait that long.

You seem to be stuck in a single model of processing. Not all software uses events or queues. Often the software periodically polls the I/O and processes what it sees, much like clocked sequential logic.
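To sketch the idea (purely an illustration with made-up names, not any particular product's code), the shape of that polling loop is roughly:

    // Minimal sketch of the "sample the I/O once per pass" model. The stub
    // functions stand in for real pin access.
    #include <cstdint>
    #include <cstdio>

    enum class State : std::uint8_t { Idle, Armed, Fired };

    struct Inputs { bool trigger; bool reset; };

    static Inputs read_inputs()            { return {true, false}; }          // stand-in for sampling pins
    static void   write_output(bool fired) { std::printf("fired=%d\n", fired); }

    int main() {
        State state = State::Idle;
        for (int pass = 0; pass < 3; ++pass) {     // a forever-loop in real code
            const Inputs in = read_inputs();       // sampled once; stable for the rest
                                                   // of the pass, as if registered
            switch (state) {                       // next-state logic, like clocked logic
                case State::Idle:
                    if (in.trigger) state = State::Armed;
                    break;
                case State::Armed:
                    if (in.reset)        state = State::Idle;
                    else if (in.trigger) state = State::Fired;
                    break;
                case State::Fired:
                    if (in.reset) state = State::Idle;
                    break;
            }
            write_output(state == State::Fired);   // output decoded from the state
        }
    }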

If you want to work with overly complex, large code, fine. I try to work with simpler code that I can verify works correctly by inspection if at all possible.

Reply to
Rick C

You are the only person I have heard who thinks that way.

That's not what I was taught at university. I've just checked a textbook from 1980 which states "The basic distinction of a Mealy machine is that the outputs to the outside world are a function of two sets of variables: (1) the present input condition and (2) the present state of the machine"

My university course and that textbook predated Wackypedia :)

Everybody else disagrees with your definition.

Shrug.

Async hardware FSM synthesis explicitly excludes multiple inputs changing simultaneously, because the trajectory through the state space becomes impossible to predict and analyse.

Well, I ought to exclude the very simple async hardware FSMs, i.e. the component parts that are used to construct sync hardware FSMs.

Agreed.

Your definition contradicts all definitions I've heard over the decades.

So, have you designed a processor that performed as I mentioned above?

I'm not interested in single purpose problems and processors.

I'm interested in generic processors and techniques that can be used for many problems.

In this case I expect that 8 (to pick a number >>1) inputs can be recognised and processing started within a single clock cycle, and processing for all inputs proceeds simultaneously. Thus if, to pick simple numbers, processing each input takes 1us and the processor clock is 10ns, then I expect all processing to be complete 1010ns after all 8 inputs changed.

Obviously hardware has no problems with that, but traditional processors and software simply cannot achieve it.

Reply to
Tom Gardner

You mean other than Mealy??? So there's at least two of us. But then I expect Mealy is dead so I guess I'm the only one left. lol

That does not contradict anything I've said. The original paper by Mealy addresses logic that only looks at the input once per clock. When it does look at the input it is the "current" or "present" input. Extrapolating that to FSM where the input can change at times other than on the clock is the error everyone seems to make.

You only have to consider the state transition diagrams to see I am right. Mealy's state diagrams show outputs associated with the transitions. As I have stated before, if an input changes which could alter the outputs, but there is no state transition, there is no way to notate that in his state diagrams.

Since Mealy's FSM hardware was not capable of inputs changing at any time that would not be a "clock edge", and since his notation was not capable of notating output changes that were not associated with state changes, it is clear that he never intended to describe such FSMs that altered their outputs at any time other than state transitions.

The important point is that the output will reflect the manner in which a state was entered as well as the state itself. So the outputs have "memory" just like the state. Let an input that changes horses in midstream into the picture and you have lost the memory of how you got there.

Except for Mealy. Hey, it's his name on the FSM type. I think we can listen to what he has to say about it. It wouldn't be the first time people mess up an idea.

Shrug? Ok, I think we have enough history to know that if you don't want to believe something, it doesn't matter what the facts are. You just demonstrated that very clearly.

You keep saying that as if I'd said the sky was green and grass was blue. Read Mealy's paper. Actually consider what his state diagram says. What Mealy presented was ideas on how to design state machines methodically using his state diagram. Anything else people attribute to his paper is not really there.

I don't know. Your problem is not actually defined. Of course a processor can process all 8 inputs. They do that all the time with bytes received from UARTs for example.

Ok, you like your processor. So what? Those processors aren't very widely used. Obviously there are other choices that work just as well.

The way you just described the problem, my processor would easily do this job. I said it would respond in 1 clock cycle to any interrupt. It takes one clock cycle to execute what amounts to a subroutine call and saves both the IP and the PSW (the entire CPU state).

I've worked on an idea for a processor that combines a stack with addressing like registers which saves a lot of cycles. I've not tried to implement it yet though.

Reply to
Rick C

FSMs are used quite often in video game programming too, to avoid the same problems hairball logic has in hardware.

The software equivalent of hairball logic is if/else statements.

Super Mario can be small Mario, big Mario, or fireball-shooting big Mario, and you have to select the right sprite for each one. In each state he can do different actions like jumping, and if he's fire-Mario he can shoot fireballs while he's in the air. And the player has multiple buttons as input to control him.

So you can imagine the tangle of logic to try to figure out the player's intent when the player hits the button on the controller to shoot a fireball. Is Mario capable of doing that right now? Yes, okay. Check to see if he's in the air. Okay, load the shoot-fireball-midair sprite. If not, he's on the ground. Load a different sprite. But wait, what if he's running? That needs a different shoot-fire sprite than standing still. Check for that.

An if/else/if/else tangle of hairball logic. A state machine makes it a lot easier: Mario has his running/jumping/standing and little/big/fiery states well-defined at all times, and there's a transition table for each set of states for the different player button inputs.
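To make that concrete, here's a toy sketch of the transition-table idea (state and event names invented for illustration, nothing like the actual 6502 code):

    // Current power-up state x event -> next state: one lookup per frame
    // instead of nested if/else. All names are invented.
    #include <cstdio>

    enum Power { Small, Big, Fire, NPOWER };
    enum Event { GotMushroom, GotFlower, TookHit, NEVENT };

    // next_power[current][event]
    static const Power next_power[NPOWER][NEVENT] = {
        /* Small */ { Big,  Fire, Small },   // a hit while small means death, handled elsewhere
        /* Big   */ { Big,  Fire, Small },
        /* Fire  */ { Fire, Fire, Big   },
    };

    int main() {
        Power p = Small;
        const Event demo[] = { GotMushroom, GotFlower, TookHit };
        for (Event e : demo) {
            p = next_power[p][e];            // table lookup replaces the hairball
            std::printf("power state is now %d\n", p);
        }
    }

A sprite table indexed the same way (state x action) then picks the right graphic without a single if/else.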

Reply to
bitrex

Even so, the Japanese programmers in 1983 or whatever were not perfect at implementing that state machine for the 6502 and there are ways to make it glitch into forbidden edge-cases that should not be allowed according to the rules of the game like this one.

The game doesn't crash but Mario grows taller and then shrinks every time he shoots a fireball because a sprite for him shooting a fireball while little doesn't exist for that unintended state.

Reply to
bitrex

Sigh.

The key word is *independent* inputs.

UART bytes are eight bits from the same input.

I did not specify the input width. With the xCORE processors they can be 32 bits.

To take an example, can your processor deal with these inputs simultaneously:
- message from a USB interface
- message from an ethernet interface
- ADC input
- a serial bitstream at 250Mb/s, SERDESed into 32 bits
- two audio inputs (guaranteed latency is apparently critical!)
- and a few others I can't be bothered to dream up
plus the DSP/control algorithms associated with them?

That's boring; even 6800s can respond in one instruction cycle!

What is interesting is if a processor can respond to many interrupts with guaranteed latency.

Variations on a theme, not fundamentally different.

Reply to
Tom Gardner

That's one technique, and a technique that often leads to unmaintainable spaghetti code. I've seen a commercial product made like that with 10-deep nested if-then-else statements! Madness.

Another technique is a 2D table, where one dimension's index is the current state and the other dimension's index is the event. Each table entry is a function pointer to the action taken, plus the next state. That makes damn sure you explicitly enumerate all possibilities, including the "can't happen -> panic" cases.
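A minimal sketch of that layout (state, event, and action names invented) might be:

    // table[state][event]: every cell names an action plus the next state, so
    // every combination, including "can't happen", is enumerated explicitly.
    #include <cstdio>
    #include <cstdlib>

    enum State { IDLE, RUNNING, NSTATE };
    enum Event { START, STOP, NEVENT };

    typedef void (*Action)(void);

    static void do_start(void)    { std::puts("starting"); }
    static void do_stop(void)     { std::puts("stopping"); }
    static void cant_happen(void) { std::puts("can't happen"); std::abort(); }

    struct Cell { Action action; State next; };

    static const Cell table[NSTATE][NEVENT] = {
        /* IDLE    */ { { do_start,    RUNNING }, { cant_happen, IDLE } },
        /* RUNNING */ { { cant_happen, RUNNING }, { do_stop,     IDLE } },
    };

    int main() {
        State s = IDLE;
        const Event events[] = { START, STOP };
        for (Event e : events) {
            const Cell& c = table[s][e];
            c.action();                      // take the action...
            s = c.next;                      // ...then move to the next state
        }
    }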

Another technique is well suited to complex FSMs that are specified in a hierarchical fashion (e.g. Harel statecharts). There each state is an OOP class, and each event is a virtual method in the classes. The abstract top-level superclass implements each method as "can't happen, record and panic". Where a state consumes an event, it re-implements the corresponding virtual method. The current state is simply an instance of the relevant class.
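A much-abbreviated sketch of that arrangement (names invented, the error handling reduced to a bare panic) could look like:

    // One class per state, one virtual method per event. The superclass default
    // is "can't happen"; each state overrides only the events it consumes.
    #include <cstdio>
    #include <cstdlib>

    class State {
    public:
        virtual ~State() = default;
        virtual State* onStart() { std::puts("can't happen"); std::abort(); }
        virtual State* onStop()  { std::puts("can't happen"); std::abort(); }
    };

    class Idle : public State {
    public:
        State* onStart() override;
    };

    class Running : public State {
    public:
        State* onStop() override;
    };

    static Idle    idle;
    static Running running;

    State* Idle::onStart()   { std::puts("starting"); return &running; }
    State* Running::onStop() { std::puts("stopping"); return &idle;    }

    int main() {
        State* current = &idle;          // the current state is an instance of a class
        current = current->onStart();
        current = current->onStop();
    }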

Maintaining/enhancing if-then-elses over the years is problematic. Each delta is sufficiently small that it isn't worth flipping to a better implementation strategy, but after a while it is a pig to understand the FSM sufficiently well to make the next delta without subtly buggering up the existing operation.

For small FSMs that will never change, that isn't a problem.

Reply to
Tom Gardner

A Moore machine changes state at discrete instants in time; it is a clocked machine. The output depends only on the state. In a Mealy machine the output depends on the state

*and* on the input. It can also change state whenever the input changes. There is no clock in a Mealy machine.

Software runs on clocked CPUs, so strictly speaking it implements a Moore machine. I'll grant you that if the time scales of input events and CPU clock instants are different enough, the distinction may become invisible.

Jeroen Belleman

Reply to
Jeroen Belleman

Sounds interesting. For a microcontroller implementation I would avoid multiple virtual methods and use static polymorphism/CRTP along with the Visitor pattern to have one virtual method if possible, and do double dispatch.

because on e.g. AVR vtables are required to be loaded into your limited SRAM at program start even though strictly they shouldn't have to be - goddamn it!

Even so it sounds a lot more elegant than most C implementations of state machines I've seen. blech.
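For what it's worth, one way of getting the double dispatch without per-class vtables is std::variant/std::visit rather than CRTP proper (a sketch with invented names; whether <variant> is usable on a given AVR toolchain is another question):

    // States and events are plain structs; the (state, event) pair is resolved
    // by overload selection inside std::visit, so no class carries a vtable.
    #include <cstdio>
    #include <variant>

    struct Idle {};  struct Running {};     // states
    struct Start {}; struct Stop {};        // events

    using State = std::variant<Idle, Running>;

    // The (state, event) pairs that actually do something:
    static State on(Idle,    Start) { std::puts("starting"); return Running{}; }
    static State on(Running, Stop)  { std::puts("stopping"); return Idle{};    }
    // Catch-all for every other combination:
    template <class S, class E>
    static State on(S s, E)         { std::puts("can't happen"); return s; }

    template <class E>
    static void dispatch(State& st, E ev) {
        st = std::visit([&](auto s) { return on(s, ev); }, st);
    }

    int main() {
        State st = Idle{};
        dispatch(st, Start{});   // Idle -> Running
        dispatch(st, Stop{});    // Running -> Idle
        dispatch(st, Start{});   // already running: falls through to "can't happen"
    }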

In most modern big-budget video gamez all the game-logic is abstracted away from the graphics and physics code (which is usually C++ and often derived from a third-party off-the-shelf API like Unity, I've tried playing with writing my own 3D 'game engine' and it's a nightmare time-sink even for a simplistic one doing all that OpenGL interfacing yourself) and is done in a scripting language like Python or Lua, or some bespoke virtual machine.

Reply to
bitrex

Nothing "key" about the word independent.

Yes, and they are all independent. Otherwise we couldn't represent 256 characters.

Of course. Either one processor can handle all the events, or you can use 8 processors like the xcore does. One large advantage of soft cores is the ease with which 9 cores can be used to process 9 independent inputs/tasks. Or 33 inputs and 33 tasks.

Lol! "Boring" is what makes it highly functional in the presence of interrupts.

What is not guaranteed about 1 clock cycle??? You seem to be confused about something.

Actually, yes, it is fundamentally different. But when you are stuck in the mindset that only the xcore is a different idea you can't see it.

This is very off topic now that you have completely stopped talking about FSM. I'm going back to the topic now.

Reply to
Rick C

That is plain wrong. Mealy vs. Moore has nothing to do with clocked vs. async FSM.

Incorrect premise, so incorrect conclusion.

Reply to
Rick C

If you define the value of an integer (software variable or hardware register) as the "state" of the system, future states are entered at clock/loop execution time, as a function of present state and some number of synchronized inputs. That's a fully synchronous state machine, with no race or metastability hazards.

Outputs could be purely decoded from the state word, or could be made by a combination of state and inputs. In the latter case, if the inputs are synchronized (and the output logic is done right) the outputs are also synchronous. Preferably, the outputs are all d-flops off the common clock, or the software equivalent. Those flops are just not part of the defined state word.
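In software terms the two flavours differ only in what feeds the output assignments; a rough sketch with invented names:

    // 'state' is the defined state word. Outputs are only ever assigned at the
    // same point in the loop, so both flavours stay synchronous.
    #include <cstdint>
    #include <cstdio>

    struct Inputs { bool go; bool fault; };

    static Inputs sample_inputs() { return {true, false}; }   // stand-in for synchronized inputs

    int main() {
        std::uint8_t state = 0;       // the state word
        bool out_moore = false;       // decoded purely from the state word
        bool out_mealy = false;       // made from state AND synchronized inputs

        for (int i = 0; i < 4; ++i) {                 // each pass = one clock
            const Inputs in = sample_inputs();        // inputs synchronized at the "edge"

            // Next state = f(present state, synchronized inputs):
            state = in.fault ? 0 : std::uint8_t(state + (in.go ? 1 : 0));

            out_moore = (state == 3);                 // f(state)
            out_mealy = (state == 3) && in.go;        // f(state, inputs)

            std::printf("state=%d moore=%d mealy=%d\n",
                        int(state), int(out_moore), int(out_mealy));
        }
    }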

I guess that's two cases of state machines.

One possible additional case, where the inputs to the state machine are not clock synchronous, is generally too hazardous to use.

Outputs generated by logic of state + inputs could also use unsynchronized inputs, yet another usually-pathological variation.

Reply to
jlarkin

G.H. Mealy, "A method for synthesizing sequential circuits", fig.2, fig.5. No clock or FF in sight.

What, pray tell, is the distinction?

Jeroen Belleman

Reply to
Jeroen Belleman

I haven't used OOP on a tiny MCU, and doubt I ever will. That v-table in RAM seems like an anti-pattern!

But the mechanism I briefly outlined works well. It is

*easy* to instrument so you can keep a record of the event/state trajectory, to find out the sequence that led to a problem. And for telecom systems handling thousands of calls (identical FSM, but per-call state and transitions) it is sufficiently economical to keep a per-call trajectory around.

Such instrumentation is /great/ when it comes to integration with other companies' products - it makes it trivial to be able to point a finger and say "this is what happened; I got it right, you got it wrong at /this/ point". Squabbles and lawsuits are avoided; what's not to like.
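A rough sketch of that sort of instrumentation (sizes and names invented; in a telecom system there would be one of these per call):

    // Record the (event, state) trajectory per FSM instance so a failure can
    // be reconstructed afterwards.
    #include <array>
    #include <cstddef>
    #include <cstdio>

    struct Step { int event; int state; };

    class Trajectory {
        std::array<Step, 64> ring{};     // last 64 transitions of this instance
        std::size_t n = 0;
    public:
        void record(int event, int state) { ring[n++ % ring.size()] = {event, state}; }
        void dump() const {              // simplified: ignores wrap-around ordering
            for (std::size_t i = 0; i < n && i < ring.size(); ++i)
                std::printf("event %d -> state %d\n", ring[i].event, ring[i].state);
        }
    };

    int main() {
        Trajectory t;
        t.record(1, 2);                  // called from the normal dispatch path
        t.record(3, 0);
        t.dump();                        // "this is what happened" when fingers point
    }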

So far my tried and trusted techniques are:
- if-then-else for up to two-to-four states and two events
- a jump table for simple dense FSMs that are unlikely to change over time (e.g. parsing characters into floating point numbers)
- classes/methods for everything else

Reply to
Tom Gardner

If your FSM is in software, then by definition all outputs are synchronous. If a Mealy FSM has synchronous outputs the only difference with a Moore machine is that you are calling the outputs... outputs and not part of the state. If you lump the outputs with the state and call it all state, then there is no difference. It's just how you look at it.

Even in hardware I often define my state variables to optimize my output logic; most beneficial is when there is a 1-to-1 correspondence between individual state variables and outputs, which is what you get in the case described above.

Reply to
Rick C

Sorry, I don't know what you are asking. I believe my thinking is so much different from yours that we have few common assumptions, so you need to be more explicit in your questions and points.

I'm saying the theory that Mealy is describing has nothing to do with using async vs. sync logic. Are you trying to say his theory only applies to async sequential logic because that seems to be what he is using in his examples?

Reply to
Rick C

In some senses that is true - but it is unhelpful and avoids interesting and useful cases. That's probably because your viewpoint appears to be limited to working in one small part of the "protocol stack", i.e. hardware and embedded systems.

Firstly the level of "synchronism" (the computer clock) is far far too small compared to many FSM changes and propagation delays to be of any real utility.

Secondly many FSMs are defined such that they can be and are split across multiple computers with no common clock whatsoever.

There are many, many such FSMs in telecom systems; they are good examples of both of those.

I defy you to define a clock that helps design an FSM running in a user-space application in a NUMA SMP computer.

Yes, I have designed and implemented such realtime systems, and FSMs were at the core. A key design pattern's name gives a hint as to the level at which system design and modelling and implementation occurs: "half-sync half-async".

In such systems any processor clock is completely irrelevant. Hells teeth, most of the implementers are only dimly aware of L1/L2/L3/etc caches in the processor, and for most purposes they don't need to be.

Reply to
Tom Gardner

At this point you seem to be discussing things with yourself. I don't see how your comments have much relation to what I've said.

If you think FSMs are split between multiple computers, we are not discussing the same thing.

Reply to
Rick C

That's because you have a limited viewpoint of how FSMs can be designed and implemented.

Since you don't even attempt to define a "clock" for such software FSMs, I'm glad you recognise that your statement that "If your FSM is in software, then by definition all outputs are synchronous." is largely misleading. Except in some special circumstances, of course.

Now, if you are only discussing a limited subset of FSM implementations, then it would be polite to indicate that in your posts. If you don't then it makes you look ignorant.

Correct.

That's because you have a limited view of what constitutes an FSM, and how they can be designed and implemented.

Such limitations in your experience have manifested themselves in other ways.

Reply to
Tom Gardner

You keep twisting things. I'm not trying to limit anything. I'm talking about what the definition of a Mealy FSM is. Mealy is the final word on that. Others seem to have read into his paper things that aren't there. It's that simple. I don't know why you can't seem to grasp the fact that Mealy was never talking about the sort of FSM that is now attributed to him.

I've already given you the clock for a software FSM. All you need to do is read what I wrote.

I am discussing exactly what I've been saying I'm talking about all along. You seem to be losing it here.

No, I think an FSM is just that. A machine. If you want to call multiple FSMs connected together an FSM, fine. Of course multiple FSMs can be combined into one. I have no idea why you decided to branch out to this area other than to have something to argue about.

It's pretty clear you are no longer interested in discussing the original topic, Mealy FSMs. Ok, I guess we are through then.

Reply to
Rick C
