Is UML fit for embedded work?

I had already asked the question many years ago and the responses were mixed.

We are currently part designing, part reengineering a big software project for the control of power stations. Since the previous release was made without a proper design notation, the company has decided to use UML and a tool called Visual Paradigm for the preliminary design. We won't generate the code.

One aspect that IMO is not satisfactory is that UML diagrams are not "naturally" fit for expressing requirements in a way that is consistent and complete enough to generate usable code and to allow reverse engineering. The possible exception is state diagrams, which can be translated into executable instructions while staying at a rather abstract level. Use cases, sequence diagrams and activity diagrams are approximate and leave too much room for interpretation, or are cluttered with detail in an obscure notation.

I have had a look at a European project called "Interested". They used tools from different vendors and built the necessary interfaces between them. They can generate code for highly demanding applications such as avionics or nuclear plants. The problem is that those tools are rather hard to use and are only practical where the generated code is relatively small.

It seems there is a need for tools for large projects with intermediate safety constraints that are not covered by the existing ones.

An article expressing my views on UML:

Reply to
Lanarcam

(I am not a UML person)

If UML as a diagramming tool is not good enough, what would be ??

Reply to
hamilton

Short answer: SDL, SCADE.

Long answer: take a sequence diagram. You know that at some point in time one module will call a function of another module, but you can't express the logical flow. Expressing "for", "switch" or "if" is impossible or cumbersome. You won't be able to deal with complex data structures.

The only diagrams that are complete are state diagrams and flow charts. Flow charts were given up some 30 years ago; for involved algorithms, pseudo code and code are more appropriate.

Take use cases: how do you specify complex protocols between actors and the system? There is a recommendation that you should not use more than about 10 use cases. What do you do when you have dozens of functions?

I am not *anti-UML*, I simply find that it is not the ultimate tool and that there should be alternatives that are affordable and expressive enough.

Reply to
Lanarcam

People have used UML for embedded work. People have even used Rhapsody for embedded work (it's what it's designed for).

So yes.

Is it perfect? No. There pretty much has to be a need for the diagrams as a deliverable for it to be worth it, unless the diagrams somehow help with a safety-critical or life-critical requirement.

-- Les Cargill

Reply to
Les Cargill

It is possible with UML (i.c.w. OCL), but cumbersome. Sequence diagrams are IMO only useful to illustrate certain scenarios, not as a full specification. I feel the same (to a greater or lesser degree) about many of the other UML diagram types; OK to clarify things and to give an overview, but not really practical as a full specification language that covers every corner case.

My experience with UML is that for non-trivial stuff you either end up with something that is easy to grasp but very incomplete, or, when you strive for completeness, you very quickly end up with a big complex mess that still isn't complete.

I've been using UML for well over a decade, and still have mixed feelings about it (and also about the UML tooling I have used). In my line of work most people I work with at customer sites know UML (but only the basics; e.g. very few are aware of the existence of OCL), which makes it useful for communicating designs and concepts. I don't find UML very useful as a specification language.

Reply to
Dombo

My feeling exactly.

That's the problem we face today. There is a large code base and some people want to reverse engineer it for the sake of documentation. They want to draw activity diagrams for each function, some of which take 10 pages; I fear it will become a useless, indecipherable mess. IMO an ideal tool would allow one to describe high level requirements and design ideas first, and let people dig deeper into low level design incrementally without losing the big picture. What we would need are hierarchical diagrams encompassing all steps from requirements to code generation, and navigation between those different steps. That's my letter to Santa Claus.

There is a tool called SCADE that allows that kind of design, but it is rather hard to use and doesn't scale well to big projects. On the other hand, it can generate certified code that runs on flight computers.

Thanks for the input about OCL, I didn't know about it. From Wikipedia, one can read: "OCL supplements UML by providing expressions that have neither the ambiguities of natural language nor the inherent difficulty of using complex mathematics. OCL is also a navigation language for graph-based models."

It looks promising.
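
For instance (a made-up example on my part, after a quick read), an invariant on an alarm priority could be stated in OCL and then checked in C roughly like this:

/* Hypothetical example: an OCL invariant attached to an Alarm class,
 *     context Alarm inv: self.priority >= 1 and self.priority <= 8
 * states the rule without ambiguity; in generated or handwritten C it
 * could boil down to a simple runtime check. */
#include <assert.h>
#include <stdint.h>

typedef struct {
    uint8_t  priority;    /* 1 (highest) .. 8 (lowest) */
    uint32_t source_id;   /* where the alarm comes from */
} Alarm;

static void alarm_check_invariant(const Alarm *a)
{
    assert(a->priority >= 1u && a->priority <= 8u);
}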

Reply to
Lanarcam

I know about Rhapsody but unfortunately we don't use that tool; we use one which IMO targets management information systems. It knows about C++ but not about C, which is still the language of choice for many embedded systems.

For safety critical systems, I would rather use a tool such as SCADE which allows one to generate certified code.

Reply to
Lanarcam

It's interesting. It can get pretty messy for full-scale systems.

C++ isn't that much of a burden these days. Only took what, fifteen years? :)

Yes, I agree.

-- Les Cargill

Reply to
Les Cargill

As a sequence diagram describes a scenario, "switch" has no place there. Also, dealing with complex data structures has no place on a sequence diagram, as they only describe which interfaces are being used. UML 2.1 contains structures for describing repetitive and conditional behaviour on sequence diagrams, which are IMHO not cumbersome.

SDL has message sequence charts, which are analogous to UML sequence diagrams. Is your criticism above not equally valid for SDL?

If you're talking about data transfer protocols: you shouldn't. Use cases are part of the analysis phase, you're not supposed to meddle with implementation details here. Otherwise, you can define sequence diagrams and statecharts that describe the protocol behaviour. You can add statecharts to actors for high-level simulation.

Decompose into subsystems, of course. Each subsystem will have its own set of use cases. For "big" systems like a power station, top-level use cases should be very abstract, like: "control power" (for workers) and "consume power" (for the grid).

If there existed an ultimate tool, it would be a very different world.

--
Made with Opera's revolutionary e-mail program:
http://www.opera.com/mail/
Reply to
Boudewijn Dijkstra

Then you can't translate such a diagram into executable code. It is only illustrative, which is not so bad from a documentation point of view, but it lacks the features of a complete tool.

SDL is a formal language; UML is semi-formal, whatever that means.

Data transfer protocols are part of what I was talking about. They are not IMO implementation details but parts of the specification.

SCADE is not far from it in its specialized area.

Thanks for your answer.

Reply to
Lanarcam

On Thu, 12 Apr 2012 10:30:58 +0200, Lanarcam wrote:

A scenario (which an SD describes) is a way to investigate use cases, define interfaces and validate system behaviour. Although you can indicate object states, I don't think it was ever the intention that a set of SDs could be used to fully specify a state machine. During the development cycle, SDs might conflict, indicate impossibilities, show things that might better be done otherwise and make regression testing easier. So they are quite a bit more than just illustrative.

Some UML modeling tools annotate model elements so that it is perfectly clear which semantics apply. A language is never a complete solution, although a formal language may make certain things easier.

Can you give an example where the specific protocol matters for use case modelling?

--
Made with Opera's revolutionary e-mail program:
http://www.opera.com/mail/
Reply to
Boudewijn Dijkstra

I won't give an example about use cases in particular, but about specifications. Given a list of incoming and outgoing messages between the system and another system, you can map functions that will deal with those messages. Without being able to dig into the (applicative) messages, you won't be able to capture enough detail to perform a valid functional analysis. I have worked extensively with SCADA and data acquisition systems, and the protocols were part of the specification. How can you express that with use cases?
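
To give an idea of what I mean, here is a rough C sketch (all names and message ids are invented for illustration): the protocol defines the messages, and the functional analysis has to say which function handles each of them.

#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint16_t       id;       /* message identifier from the protocol spec */
    uint16_t       len;      /* payload length in bytes */
    const uint8_t *payload;
} Message;

typedef void (*MsgHandler)(const Message *msg);

/* Placeholder handlers; in a real system each one implements one
   function of the specification. */
static void handle_measurement(const Message *msg) { (void)msg; /* store analog values */ }
static void handle_command(const Message *msg)     { (void)msg; /* execute operator command */ }
static void handle_timesync(const Message *msg)    { (void)msg; /* adjust the local clock */ }

static const struct { uint16_t id; MsgHandler handler; } dispatch_table[] = {
    { 0x0101u, handle_measurement },
    { 0x0201u, handle_command     },
    { 0x0301u, handle_timesync    },
};

/* Called by the link layer for every received application message. */
void dispatch_message(const Message *msg)
{
    for (size_t i = 0; i < sizeof dispatch_table / sizeof dispatch_table[0]; ++i) {
        if (dispatch_table[i].id == msg->id) {
            dispatch_table[i].handler(msg);
            return;
        }
    }
    /* unknown id: typically logged and discarded */
}

That mapping is exactly the level of detail a use case diagram cannot carry, yet the specification has to pin it down.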

Reply to
Lanarcam

On Thu, 12 Apr 2012 13:10:42 +0200, Lanarcam wrote:

Why would you want to express that with use cases? It is of no concern during the analysis phase which messages are coming in or going out. Use case modelling deals with more abstract concepts like achieving goals, so they are not suitable for specification modelling. However, you can always use other diagrams to model the interface (and even crude behaviour) of an actor as if it were a (sub)system. Then during the design phase you can replace the actor by the driver interface that sends and receives the messages.

--
Made with Opera's revolutionary e-mail program:
http://www.opera.com/mail/
Reply to
Boudewijn Dijkstra

So, I suppose you disagree with this:

"Does a use case differ from a functional specification? You can employ use cases to model business processes, a system=92s functional requirements, or even the internal workings of a system. When used to model functional requirements, a use case describes one function required of your system or application. As such, your use cases constitute a functional specification."

Reply to
Lanarcam

Before one decides to go down that road, one should ask oneself not only whether one is willing to spend the effort to write the initial documentation, but also whether one is willing, can afford _and_ has the discipline to spend the (even larger) effort to maintain it.

The more detailed documentation gets, the harder it is to keep it up to date. Detailed documentation that is not kept up to date is worse than useless; it wastes both the time of the one who wrote it and of those who read it.

One of the clients I have worked for had the ambition to document their software at a very high level of detail. At a certain point they had more than 1100 documents (no, not pages!) describing their software, where each document typically ran somewhere between 20 and 80 pages (the standard document template alone accounted for 12 pages). Judging by the directory structure and templates, that was only a small fraction of the documents they had intended to write at the start of the project. Though this was a large project (several MLOC), this was way over the top and actually counterproductive. You never knew if a document was up to date. Often engineers making changes to the software weren't even aware that there were one or more documents that should be updated as a consequence of the changes made to the code. Most documents were write-only; if you needed to know the details it was both quicker and more reliable to look them up in the actual code.

When it comes to documentation I prefer to document the high level structure, interfaces, rules and concepts of the software, and the rationale behind design choices (especially if they are not too obvious). The high level stuff rarely changes and cannot be captured (well) by reverse engineering and automatic documentation generation tools.

For documentation of low level details I prefer to use tools like Doxygen (i.c.w. Graphviz) which generates documentation from the code itself and (tagged) comments embedded in the code. Though tools like Doxygen have limitations and shortcomings, my experience is the documentation it generates is much more accurate than a manually maintained document describing things like call-graphs, dependencies, function parameters...etc.
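
As an illustration (a made-up function, just to show the idea), a tagged comment like this is all Doxygen needs to generate an accurate reference entry, and with Graphviz it adds the call graphs:

#include <stdint.h>

/**
 * @brief  Convert a raw ADC sample to a temperature.
 *
 * @param[in] raw  12-bit ADC reading (0..4095).
 * @return Temperature in tenths of a degree Celsius.
 *
 * @note  Assumes a linear sensor transfer function; the scaling
 *        constants are illustrative only.
 */
int16_t adc_to_temperature(uint16_t raw)
{
    /* Doxygen generates reference pages and call graphs from comments
       like the one above, so the details stay next to the code they
       describe and move with it when the code changes. */
    return (int16_t)((raw * 5) / 10 - 400);
}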

That is at the top of my wish list too. Several UML tools have promised this for years. However, actually getting it to work this way in real life is a whole other story.

Buying requirements management and/or modeling tooling is one thing; deploying and embedding it in the organization is quite another (and much harder). I have seen too many times potentially useful tooling fail to realize its potential, simply because only one or two motivated people actively used the tool while others continued doing their own thing. The best chance is at the start of a project; it would be very hard to introduce tools like this late in a project.

That is a pity, but unfortunately quite common with modeling languages and tools. Most are fine for trivial problems, but cannot handle large projects well, if at all. Ironically, those are the ones where the need is greatest.

Reply to
Dombo

On Thu, 12 Apr 2012 15:28:33 +0200, Lanarcam wrote:

No, but we could disagree on the interpretation. I believe that by "model" they not only mean "draw things" but also to properly fill out the textual description using appropriate fields. This way you can give a place to every piece of the specification.

--
Made with Opera's revolutionary e-mail program:
http://www.opera.com/mail/
Reply to
Boudewijn Dijkstra

With UML, just as with anything else, the real question is return on investment (ROI). To be truly successful, the benefits of a method must outweigh the learning curve, the tools, the maintenance costs, the hidden costs of "fighting the tool" and so on.

As it turns out, the ROI of UML is lousy unless the models are used to generate substantial portions of the production code. Without code generation, the models inevitably fall behind and become more of a liability than an asset. In this respect I tend to agree with the "UML Modeling Maturity Index (UMMI)", invented by Bruce Douglass

formatting link
According to the UMMI, without code generation UML can reach at most 30% of its potential. This is just too low to outweigh all the costs.

Unfortunately, code generation capabilities have always been associated with complex, expensive UML tools with a very steep learning curve and a price tag to match. With such a big investment side of the ROI equation, it's quite difficult to reach a sufficient return. Consequently, all too often big tools get abandoned, and if they continue to be used at all, they end up as overpriced drawing packages.

So, to follow my purely economic argument, unless we make the investment part of the ROI equation low enough, without reducing the returns too much, UML has no chance. On the other hand, if we could achieve positive ROI (something like 80% of benefits for 10% of the cost), we would have a *game changer*.

To this end, when you look closer, the biggest "bang for the buck" in UML with respect to embedded code generation comes from: (1) an embedded real-time framework and (2) support for hierarchical state machines (UML statecharts). Of course, these two ingredients work best together and need each other. State machines can't operate in a vacuum and need a framework to provide execution context, thread-safe event passing, event queueing, etc. The framework benefits from state machines for structure and code generation capabilities.

I'm not sure if many people realize the critical importance of a framework, but a good framework is in many ways even more valuable than the tool itself, because the framework is the big enabler of architectural reuse, testability, traceability, and code generation, to name just a few. The second ingredient is state machines, but again I'm not sure if everybody realizes the importance of state nesting. Without support for state hierarchy, traditional "flat" state machines suffer from the phenomenon known as "state-transition explosion", which renders them unusable for real-life problems.
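
To make the state-nesting point concrete, here is a deliberately simplified sketch in plain C (this is not the QP API, just the idea; all names are invented): a substate passes any event it doesn't handle to its parent, so a common transition such as a power failure is written once for a whole group of states instead of in every leaf state.

#include <stddef.h>
#include <stdio.h>

/* Simplified hierarchical state machine sketch (NOT the QP API).
 * A state handler either consumes the event (returns NULL) or returns
 * its parent state, which then gets a chance to handle it. */

typedef enum { EVT_POWER_FAIL, EVT_START, EVT_STOP, EVT_TICK } Event;

typedef struct Hsm Hsm;
typedef const struct State *StatePtr;
typedef StatePtr (*Handler)(Hsm *me, Event e);

struct State { Handler handler; StatePtr parent; };
struct Hsm   { StatePtr current; };

static StatePtr top_handler(Hsm *me, Event e);
static StatePtr operational_handler(Hsm *me, Event e);
static StatePtr running_handler(Hsm *me, Event e);

static const struct State top         = { top_handler,         NULL };
static const struct State operational = { operational_handler, &top };
static const struct State running     = { running_handler,     &operational };

static StatePtr top_handler(Hsm *me, Event e) {
    (void)me; (void)e;
    return NULL;                        /* top level: ignore whatever is left */
}

/* "operational" handles POWER_FAIL once for ALL of its substates; a flat
 * machine would have to repeat this transition in every leaf state. */
static StatePtr operational_handler(Hsm *me, Event e) {
    if (e == EVT_POWER_FAIL) {
        printf("shut down safely\n");
        me->current = &top;
        return NULL;                    /* consumed */
    }
    return operational.parent;          /* not handled here: try the parent */
}

static StatePtr running_handler(Hsm *me, Event e) {
    if (e == EVT_STOP) {
        printf("stop actuators\n");
        me->current = &operational;
        return NULL;
    }
    return running.parent;              /* e.g. POWER_FAIL is deferred upwards */
}

/* Dispatch walks up the state hierarchy until some state consumes the event. */
static void dispatch(Hsm *me, Event e) {
    StatePtr s = me->current;
    while (s != NULL) {
        s = s->handler(me, e);
    }
}

int main(void) {
    Hsm machine = { &running };
    dispatch(&machine, EVT_POWER_FAIL); /* handled by "operational", inherited by "running" */
    return 0;
}

The "running" state never mentions the power-fail transition, yet it behaves correctly; that inherited behaviour is exactly what flat state machines cannot express without duplicating transitions everywhere.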

As it turns out, the two critical ingredients for code generation can be had with much lower investment than traditionally thought. An event-driven, real-time framework need not be more complex than a traditional bare-bones RTOS (e.g., see the family of the open source QP frameworks at

formatting link
A UML modeling tool for creating hierarchical state machines and production code generation can be free and can be designed to minimize the problem of "fighting the tool" (see
formatting link
Sure, you don't get all the bells and whistles of IBM Rhapsody, but you get the arguably most valuable ingredients. Most importantly, you have a chance to achieve a positive ROI on your first project. As I said, this to me is game changing.

Can a lightweight framework like QP and the QM modeling tool scale to really big projects? Well, I've seen it used for tens of KLOC-size projects by big, distributed teams and I haven't seen any signs of over-stressing the architecture or the tool.

Reply to
Miro Samek

Le 17/04/2012 19:05, Miro Samek a écrit :

formatting link

Interesting thoughts. I'll have a look at

formatting link
and
formatting link

Reply to
Lanarcam

[...]

Miro, isn't the number of 'active objects' in QP a factor that limits scaling (maximum number = 64, according to the website)?

I have in mind (real-world) projects with hundreds of objects, belonging to dozens of different classes. Doesn't every object with a state machine need to be an active object?

--
Saludos.
Ignacio G.T.
Reply to
Ignacio G.T.

I'm glad you asked, because it is important to distinguish between an active object and just a state machine.

An active object is an "object running in its own thread of execution". In other words: active_object = state_machine + thread + event_queue. So, while an active object is a hierarchical state machine, it also has a thread and an event queue. The QP framework limits the number of such active objects to 63.

But it does not mean that your system is limited to just 63 state machines. In fact, each active object can manage an open-ended number of lightweight hierarchical state machines as "Orthogonal Components" (see

formatting link
For instance, the "Fly 'n' Shoot" game example, described in the PSiCC2 book as well as in the QP tutorials, has a pool of 10 mines (5 small mines and 5 big and nasty mines). The mines are "Orthogonal Component" state machines managed by the Tunnel active object, but they are not full-blown active objects.

The point is that in larger projects you very often need pools of stateful components, such as transactions, client connections, etc., all of them natural state machines with their own life cycle. Implementing all these components as threads, as is often done in traditional threaded applications, doesn't actually scale that well, because threads are very expensive; just a few hundred threads can bring even a powerful machine to its knees. In contrast, lightweight state machine components take orders of magnitude fewer resources (a hierarchical state machine in QP takes only 1 function pointer in RAM, plus a virtual pointer in C++), so you can easily manage hundreds or thousands of them.
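
Schematically (again a simplified sketch, not the real QP API; all names are invented), the active object owns the thread and the event queue, and it simply fans relevant events out to the lightweight component state machines it manages:

#include <stdint.h>
#include <stddef.h>

/* Simplified sketch (NOT the actual QP API).
 * One active object owns the thread and the event queue; the "mines"
 * are plain state-machine components that cost a pointer each, not a
 * thread apiece. */

typedef struct { uint16_t sig; } Event;

typedef struct Component Component;
typedef void (*StateFn)(Component *me, const Event *e);
struct Component { StateFn state; };      /* one function pointer of RAM */

static void mine_idle(Component *me, const Event *e) {
    (void)me; (void)e;                    /* placeholder state: ignore everything */
}

#define MAX_MINES 10u

typedef struct {
    /* in a real framework: thread handle, event queue, the object's own HSM ... */
    Component mines[MAX_MINES];           /* pool of "orthogonal components" */
} TunnelActiveObject;

static void tunnel_init(TunnelActiveObject *me) {
    for (size_t i = 0; i < MAX_MINES; ++i) {
        me->mines[i].state = mine_idle;   /* start every mine in its idle state */
    }
}

/* Runs in the active object's own thread for every event taken from its
 * queue; no locking is needed because everything below executes in that
 * single thread. */
static void tunnel_handle_event(TunnelActiveObject *me, const Event *e) {
    for (size_t i = 0; i < MAX_MINES; ++i) {
        me->mines[i].state(&me->mines[i], e);   /* fan the event out to the pool */
    }
}

Growing the pool adds only a struct per component, not a thread, a stack and a queue per component, which is where the scaling advantage comes from.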

The bottom line is that the efficiency of the implementation in QP actually scales better than traditional RTOS/OS-based approaches and enables building bigger applications.

Reply to
Miro Samek
