(Semi-) formal methods

How prevalent are (semi-) formal design methods employed?
Which?

[I don't have first-hand knowledge of *anyone* using them]

Re: (Semi-) formal methods

We use theorem provers to find bugs in ISA specifications:
https://www.cl.cam.ac.uk/~pes20/sail/

They're quite handy for finding bugs before they hit silicon...
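
(To give a flavour of the idea -- this is emphatically *not* Sail, just a
toy Python sketch of "check the implementation against the spec,
exhaustively"; every name in it is made up:)

    import sys

    # Golden spec vs. a buggy "optimised" implementation of 8-bit
    # add-with-carry.  The 8-bit space is small enough to check in full.

    def spec_adc(a, b, cin):
        total = a + b + cin
        return total & 0xFF, total >> 8          # (result, carry-out)

    def impl_adc(a, b, cin):
        result = (a + b + cin) & 0xFF
        carry = 1 if a + b > 0xFF else 0         # bug: ignores cin
        return result, carry

    for a in range(256):
        for b in range(256):
            for cin in (0, 1):
                if spec_adc(a, b, cin) != impl_adc(a, b, cin):
                    sys.exit(f"mismatch: a={a} b={b} cin={cin}")

Real ISA specs are vastly bigger -- which is why you want provers rather
than brute force -- but the spec-vs-implementation framing is the same.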

Theo

Re: (Semi-) formal methods
On 5/13/2021 7:25 AM, Theo wrote:

But, presumably, only of value if you're a SoC integrator?

I.e., given COTS devices, what might they reveal to users of
said devices?


Re: (Semi-) formal methods

They can reveal bugs in existing implementations - where they don't meet the
spec and bad behaviour can result.

However, CPU and FPGA design is what we do, so that's where we focus our
efforts.  It depends whether an FPGA counts as COTS or not...

Theo

Re: (Semi-) formal methods
On 5/14/2021 2:43 AM, Theo wrote:

Understood.  Tools fit the application domains for which they were designed.

How did *adoption* of the tool come to pass?  Was it "mandated" by corporate
policy?  Something <someone> stumbled on, played with and then "pitched"
to management/peers?  Mandated by your industry?  etc.

[Just because a tool "makes sense" -- logically or economically -- doesn't
mean it will be adopted, much less *embraced*!]

Re: (Semi-) formal methods
On 14/05/2021 20:37, Don Y wrote:

It became a must-have tool about 2-3 decades ago for the
safety-critical/avionics/medical industries.  Designs were becoming so
complex that simulation could no longer answer questions like: is deadlock
or livelock possible in our state machine, can your buffers overflow, do
you have arithmetic overflow, dead code, race conditions, etc.  The tools
are now well established and most of the above questions can be answered
(with some user constraints) by a simple push-button tool.  They are
still expensive (you won't get much change from 20K UK pounds) but most
high-end FPGA/ASIC companies use them.  They are not a replacement for
simulation but one of the tools you need to complete your verification.
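
(As a toy illustration of the deadlock question, here is the idea in a few
lines of Python: exhaustively walk the state space of two processes taking
two locks in opposite orders.  The real tools work on your HDL/state
machine directly; this is just the flavour, and all names are made up:)

    # P0 takes lock A then B; P1 takes B then A.  A reachable state with
    # no successors, before both processes finish, is a deadlock.

    ORDER = {0: ("A", "B"), 1: ("B", "A")}

    def successors(state):
        pcs, locks = state                      # locks: frozenset of (lock, owner)
        held, out = dict(locks), []
        for p in (0, 1):
            if pcs[p] < 2:
                want = ORDER[p][pcs[p]]
                if want not in held:            # lock free: take it, advance
                    h = dict(held); h[want] = p
                    n = list(pcs); n[p] += 1
                    if n[p] == 2:               # finished: release both locks
                        h = {k: v for k, v in h.items() if v != p}
                    out.append((tuple(n), frozenset(h.items())))
        return out

    init = ((0, 0), frozenset())
    seen, todo = {init}, [init]
    while todo:
        s = todo.pop()
        succ = successors(s)
        if not succ and s[0] != (2, 2):
            print("deadlock:", s)               # both stuck, one lock each
        for n in succ:
            if n not in seen:
                seen.add(n); todo.append(n)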

Regards,
Hans
www.ht-lab.com


Re: (Semi-) formal methods
On 5/15/2021 12:44 AM, HT-Lab wrote:

But these are industries with inherently high levels of overhead -- which
suggests that (market) "efficiency" may be the very thing that discourages
adoption in other industries.

So, if it/they have value *there*, why aren't it/they embraced EVERYWHERE?
Obviously, other product offerings in other industries face similar design
problems...


Re: (Semi-) formal methods

We're a university research lab.  We developed the tool in response to two
trends:

- the growing complexity of systems and the increasing prevalence of bugs
in implementations (for example, in memory coherency subsystems).

- proposing security extensions to architectures and wanting to be able to
show that what we've proposed doesn't have any loopholes in it.  It provides
confidence to us and to people adopting the technology that the technology
is robust, and there won't be problems once silicon is deployed and
impossible to fix later.

(that's not to say there won't be entirely new classes of attacks coming out
of left-field in the way that Spectre surprised a lot of people, but we are
at least trying to reason about the attacks we know about)

Because the costs of respinning are so high, chip design is all about
verification.  Doing it in a formal sense is a step up from hand-writing or
randomly-generating tests.  I don't think the industry needs convincing that
it's a good idea in abstract - it's mainly making the benefits outweigh the
costs.  A lot of the work (eg in the RISC-V community) is about bringing
down those costs.

Theo

Re: (Semi-) formal methods
On 5/15/2021 4:02 AM, Theo wrote:

OK.  Both make sense.  And, both are hard (if not impossible) to "fix"
at layers *above* the hardware.

The next, most practical, question is:  how do you encourage its adoption?
Publishing papers is what professors/grad students "do".  That's
different from actually getting folks to *use* something that you've
developed/written about.

(a casual examination of the amount of "stuff" that has come out of
academia and "gone nowhere" -- despite some value! -- should make
this evident)

Understood.  In the past, I've had to rely on "good eyes" to spot problems
in designs.  Folks tend to only test how they *think* the design SHOULD
perform.  So, they often omit checking for things that they don't imagine
it (mis?)doing.

<do something; witness crash>
"What did you just do?"
"This..."
"You're not supposed to do that!"
"Then why did you/it LET ME?  Did I break some *law*???"

[Thankfully, I have a knack for identifying MY "assumptions" and the
liabilities that they bring to a design]

A suitably aggressive tool can avoid this bias and just hammer at every
nit it can algorithmically deduce/exploit.

A colleague designed a video scaler for a printer.  When I started poking at
it with real numbers, I discovered cases where the PHYSICAL image size
produced on a B-size page was actually SMALLER than that produced on an
A-size page (presumably, you'd print on larger paper to get a larger image!).

Ooops!

Given the number of variations in how the interface could be configured,
he'd never have been able to exhaustively test all cases.  So, my observation
was just serendipitous.
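
(For fun, here's the *shape* of that check, mechanized -- the real scaler
logic is long gone, so the 4-bit "scale register" below is purely
hypothetical.  The property, though, is the real one: a bigger page must
never yield a smaller printed image.)

    def printed_width(src_pixels, dpi, page_w_in, margin_in):
        printable = page_w_in - 2 * margin_in
        scale = int(printable * dpi / src_pixels) & 0xF   # 4-bit register wraps!
        return src_pixels * max(scale, 1) / dpi           # inches

    PAGES = {"A": 8.5, "B": 11.0}                         # page widths, inches

    for src in range(100, 3000, 10):
        for dpi in (300, 600):
            for margin in (0.25, 0.5):
                a = printed_width(src, dpi, PAGES["A"], margin)
                b = printed_width(src, dpi, PAGES["B"], margin)
                if b < a:
                    print(f"B smaller than A: src={src} dpi={dpi} margin={margin}")

Any output at all is a red flag -- no serendipity required.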

OTOH, *seeing* what I'd uncovered gave him cause to look more carefully
at his implementation *before* sending it off to fab (which would have cost
money AND calendar time -- leading to a delayed market introduction).

Re: (Semi-) formal methods
On 5/11/2021 9:25 PM, Don Y wrote:

https://tinyurl.com/thbjw5j4

Re: (Semi-) formal methods
On 5/13/2021 4:32 PM, Dave Nadler wrote:

Hardly formal -- no cufflinks, watch fob, cummerbund nor tails!  :>

Re: (Semi-) formal methods

Wrong picture, try the one on this page:

   https://web.stanford.edu/~engler/

Look at the articles linked from there too ;-).

Re: (Semi-) formal methods
On 12/5/21 11:25 am, Don Y wrote:

Don, IDK if you know about TLA+, but there is a growing community using  
it. It is specifically good at finding errors in protocol (== API)  
designs (because "TL" means "Temporal Logic"). I haven't used it so  
can't really answer many questions, but I have been following the  
mailing list for some time and greatly admire some of the excellent folk
who are using it.

<https://learntla.com/introduction/>
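
(I can't speak from experience, but the mechanics are easy to convey: the
TLC model checker enumerates every interleaving of your spec and checks
your invariants against each reachable state.  Here's that idea in toy
Python form -- not TLA+, and the "protocol" is deliberately broken: a
naive mutex where each process checks the other's flag *before* raising
its own:)

    from collections import deque

    # Per process: pc 0 = wait for other's flag to clear, 1 = raise my
    # flag, 2 = critical section, 3 = drop flag and restart.
    def step(state, p):
        pcs, flags = list(state[0]), list(state[1])
        if pcs[p] == 0:
            if flags[1 - p]: return []          # busy-wait
            pcs[p] = 1
        elif pcs[p] == 1: flags[p] = True; pcs[p] = 2
        elif pcs[p] == 2: pcs[p] = 3
        else:             flags[p] = False; pcs[p] = 0
        return [(tuple(pcs), tuple(flags))]

    init = ((0, 0), (False, False))
    seen, todo = {init}, deque([init])
    while todo:
        s = todo.popleft()
        if s[0] == (2, 2):                      # invariant: never both in CS
            print("mutual exclusion violated:", s)
            break
        for p in (0, 1):
            for n in step(s, p):
                if n not in seen:
                    seen.add(n); todo.append(n)

TLC does the same at scale and, crucially, hands you the interleaving
(the trace) that leads to the violation.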

Clifford Heath.

Re: (Semi-) formal methods
On 5/14/2021 9:15 PM, Clifford Heath wrote:

My query was more intended to see how *commonplace* such approaches are.
There are (and have been) many "great ideas" but, from my vantage point,
I don't see much by way of *adoption*.  (Note your own experience with TLA!)

[The "(Semi-)" was an accommodation for *individuals* who may be
using such things even though their work environment doesn't]

So, you either conclude that the methods are all "hype" (not likely),
*or*, there is some inherent resistance to their adoption.  Price?
(Process) overhead?  NIH?  Scale?  Education?  <shrug>

[Note my followup question to Theo as to how *he/they* ended up with
their tool/process]

There seem to be many "lost opportunities" (?) for tools, techniques,
processes, etc.  I'm just curious as to *why* (or, why *not*).
Or, said another way, what does a tool/process have to *do* in
order to overcome this "resistance"?

It's as if a (professional) writer wouldn't avail himself of a
spell-checker...  Or, a layout guy not running DRCs...  (yes,
I realize this to be an oversimplification; the examples I've
given are just mouse clicks!)

Re: (Semi-) formal methods
On 5/14/2021 9:52 PM, Don Y wrote:

By "do", I mean in the colloquial sense, not a specific feature
set, etc.

I.e., "It has to make my dinner and wash the dishes in order
for me to consider it worth embracing"  (or, "It has to cut
25% of the development cost from a project")

Re: (Semi-) formal methods
On 15/5/21 2:52 pm, Don Y wrote:

My own experience is irrelevant, as I was semi-retired when I first came  
across it.  On the other hand, the reason I came across it was that I
received a message from Chris Newcombe (admiring my related work), whose success
in using it to find a potential failure in DynamoDB that could have  
knocked *Amazon* off the air was a stimulus to many *many* folk learning  
TLA+.


For software folk at least, it requires a very different way of
thinking.  It's the same problem I had promulgating fact-based modeling:
both address a massive *blind spot* in developers' consciousness.

Specifically, we are unable to consciously detect when there is a failure
in our logic, because to be conscious of the failure it would have to be
*not present*.  That is, we can only know such things in hindsight, or
when we deliberately apply specific methods to check our logic. But why  
would we do that when it is "apparent" that our logic is correct?

These are not the same at all, because those things have rules. There is  
no rule for correct logic.

Clifford Heath

Re: (Semi-) formal methods
On 5/15/2021 1:14 AM, Clifford Heath wrote:

Logic is only "correct" if you are applying a prover.

You can still use formal methods in things like specifications -- where
there is no "proof" implied.  The advantage being that everyone can
unambiguously understand the intent of the specification without lots
of (verbose) "legalese".

E.g., I did my initial design with OCL as the means by which I conveyed
my intent to my colleagues.  It wasn't that much of a "lift" for them
to learn the representation well enough to ask pertinent questions and
"challenge" the implementation.  And, didn't resort to lots of mathematical
abstraction to make those points.
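
(A flavour of what that looked like -- a hypothetical fragment, not my
actual spec; the OCL-ish constraint rides along as a comment, and the same
invariant is executable in Python:)

    from dataclasses import dataclass

    @dataclass
    class Buffer:
        capacity: int
        rd: int = 0                             # read pointer
        wr: int = 0                             # write pointer

        # In OCL, roughly:
        #   context Buffer inv:
        #     self.rd <= self.wr and self.wr - self.rd <= self.capacity
        def invariant(self):
            return 0 <= self.rd <= self.wr and \
                   self.wr - self.rd <= self.capacity

        def write(self, n):
            assert self.wr - self.rd + n <= self.capacity, "overflow"
            self.wr += n
            assert self.invariant()

        def read(self, n):
            assert self.rd + n <= self.wr, "underflow"
            self.rd += n
            assert self.invariant()

The constraint sits right next to the thing it constrains; a colleague can
challenge it without wading through legalese.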

Unfortunately, use in such a document is not suited for "general audiences"
because it lacks rationale for each item in the specification.  (and,
relies on some semi-ubiquitous usage to ensure readers CAN read it)

OTOH, if you're writing the code (or reading it), such documents add
further texture to what you're seeing (in the programming language).
Another set of comments, so to speak.  Or, a roadmap.

The alternative is: an /ad hoc/ specification (with some likely incompletely
specified set of loose rules) *or* an *absent* specification.  Each of these
leaves gaping holes in the design that (supposedly) follows.

Again, why the resistance to adopting such a "codified" approach?
There's no capital outlay required to adopt a *methodology* (unless
you want/need tools).  It's as if the effort is seen as an *additional*
effort -- but with no perception of a "return".  Is this because the
"return" doesn't stand out and have flashing lights surrounding it?

Re: (Semi-) formal methods
On 15/5/21 7:10 pm, Don Y wrote:

I was loose with terminology. People tend to think that their  
"reasoning" is correct and doesn't need to be logically analysed or  
proved. They're wrong, but the blind spot is unavoidable.

Of course, there's also the problem that a thing may be "proved" yet be  
(undetectably, until it becomes detectable) not what would have been  
wanted - if it had been possible to foresee the failure mode.


That's how I used it, for static modelling (modelling all possible  
"states of the world" as they may exist at a point in time). Although  
dynamic modelling is much more exciting, it is rarely difficult once the  
correct static model has been agreed.

<http://dataconstellation.com/ActiveFacts/CQLIntroduction.html>


The goal of CQL is to make the formal model suitable for (and expressed  
in the language of) anyone generally familiar with the domain being  
modelled.


Actually rationale is seldom needed. What is needed is an example of the  
scenario that is allowed or disallowed by each definition. The example  
is almost always an adequate rationalisation.

That's precisely true. That's why analytic (formal) models are needed.


Hubris, mostly. People genuinely don't see the need until enough  
experience has humbled them, and by then, their accumulated caution and  
tentativeness mean their industry sees them as dinosaurs.

Clifford Heath.

Re: (Semi-) formal methods
On 5/15/2021 3:54 AM, Clifford Heath wrote:

I opted for OCL as my system is object-based, UML is relatively easy to
understand and the way things would be expressed closely mimics the
way those constraints would be imposed in the code.  Any "translation"
effort would introduce other opportunities for errors to creep in.


I disagree.  I find many cases where I need to resort to prose to
explain why THIS implementation choice is better than THAT.  Just
*stating* (or implying) that it is leaves too much to the Reader,
as an "exercise" ("Now WHY, exactly, would this be a better approach?")

For example, I had to make a decision, early on, as to whether or
not "communications" would be synchronous or asynchronous.  There
are advantages and drawbacks to each approach.  From a performance
standpoint, one can argue that async wins.  But, from a cognitive
standpoint, sync is considerably easier for "users" (developers)
to "get right": message and reply are temporally bound together,
so the user doesn't have to sort out, later, which message a
given reply correlates with.

The fact that comms then become blocking operations means any
desire for concurrency has to be addressed with other mechanisms
(e.g., set up a thread to do the comms so the "main" thread can
keep working, making the coordination between the two -- which
now appear asynchronous -- more visible).
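
(The usual shape of that, sketched in Python -- all names made up:)

    import queue, threading, time

    def send(request):
        """Blocking round trip: message and reply stay bound together."""
        time.sleep(0.1)                         # stand-in for the wire
        return f"reply-to-{request}"

    replies = queue.Queue()

    def comms_worker(request):
        replies.put(send(request))              # the only place that blocks

    threading.Thread(target=comms_worker, args=("status?",),
                     daemon=True).start()

    while True:
        try:
            print("got:", replies.get(timeout=0.02))
            break
        except queue.Empty:
            pass                                # main thread keeps working

The coordination is now *visible* -- a thread and a queue -- instead of
being hidden in a tangle of message IDs and completion callbacks.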

Or, trying to explain the many ways a particular comm can fail
(e.g., what if the party on the other end never listens?
Or, *can't* listen?  Or, the processor hosting it powers down?
Or...) and how the response/detection of that failure can
vary based on where in the "failure" it occurs.


I'm not sure the models need to be mechanically verifiable.  What *needs*
to happen is they need to be unambiguous and comprehensive.  You should
be able to look at one (before or after the code is written) and convince
yourself that it addresses every contingency -- as well as HOW it does so
(to the appearance of other actors)


"Hubris" suggests overconfidence (in their abilities).  I'm not sure
that's the case.

I started looking for a "common ground" in which I could express my current
design when I "discovered" that, not only is there no such thing, but that
there isn't even a common approach (methodology) to design.  It's not
a question of whether method A or method B is the more common (and, thus,
more readily recognizable to adopt in my presentation) but, that NONE OF
THE ABOVE is the clear winner!

[I couldn't even get a consensus on how to *diagram* relationships
between actors/objects just for illustrative purposes!]

This prompted the question, here.

I've been looking back over the past experiences I've had with getting
folks to "move forward" to try to see if I can identify the issue that
leads to this resistance.

Early on, I can argue that a lack of understanding for the (software)
design process led some employers to skimp in ways that they might
not have realized.  E.g., when a development seat cost many thousands of
1975 dollars, you could see how an employer, new to using MPUs *in*
their products, could decide to "share" a single seat among several
developers.

Similarly, when ICEs came on the scene.

And, HLLs.

But, those issues are largely behind us as there are many "free"/cheap
tools that can make these sorts of decisions "no brainers".

I can also see how scheduling pressures could lead to a resistance
to adopt new methods that *claim* to improve productivity.  I've
never been in an environment where time was allotted to explore
and learn new tools and techniques; there's another product that's
anxious to make its way to manufacturing (and YOU don't want to
be the one responsible for stalling the production line!)

I can see how fear/uncertainty can lead individuals (and organizations
as organizations are, at their heart, just individuals) to resist
change; the process you know (regardless of how bad it might be) is
safer than the process yet to BE known!

For things like specification and modeling, I can see a perception of
it as being a duplication of effort -- esp the more formally those
are expressed.

And, for some, I think an amazing lack of curiosity can explain their
clinging to The Way We Always Did It.  Or, the lack of "compensation"
for the "risk" they may be taking ("Heads, you win; tails, I lose!")

*Most* of these arguments can be rationalized by "employees" -- the
excuse that THEY (personally) don't have any control over THEIR
work environment.  OK, I'll avoid getting into *that* argument...

But, for folks working in more permissive environments (e.g.,
independent contractors), most of the arguments crumble.  YOU can
make the time to learn a technique/tool; YOU can buy the tool;
YOU can gauge its impact on YOUR productivity; etc.

I have a lot of respect for my colleagues -- they've all got
proven track records with significant projects.  Yet, they still
fall into this pattern of behavior -- clinging to past processes
instead of exploring new.  To their credit, they will make an
effort to understand the approaches and technologies that *I*
pursue -- but, that's more the result of a personal relationship
(than a BUSINESS one).  Yet, I never hear any epiphanies where
they exclaim "this is SO COOL"...  is the pull of the familiar SO
overwhelming?

[I left the 9-to-5 world primarily so I could dabble in other
fields, techniques, etc.  I have no desire to make "model 2",
especially if it will follow "process 1"!]

Re: (Semi-) formal methods
On 16/5/21 5:26 am, Don Y wrote:

It's very difficult to teach non-programmers to read OCL. But I agree on  
the need for code generation directly from the model, without  
translation. That's what CQL does too.

Yes, where that is needed, CQL provides context-notes too. In fact I  
select four specific categories of rationale ("so that", "because", "as  
opposed to" and "to avoid"), with optional annotations saying who agreed  
and when.

Yes, I'm well familiar with that problem. Best decade of my working life  
was building a development tool that exclusively used message passing.
For the most part it's amazingly liberating, but sometimes frustrating.


Even without verification, the model must only make rules that are *in  
principle* machine-verifiable. If a rule is not verifiable, it's just  
waffle with no single logical meaning. If it has a single logical  
meaning, it is in principle machine-verifiable.


Comprehensive is impossible. There's always a possibility for more  
detail in any real-world system. But unambiguous? Yes, that requires  
that it is a formal part of a formal system which allows its meaning to  
be definitively stated. That is, for any scenario, there exists a  
decision procedure which can determine whether that scenario is allowed  
or is disallowed (this is what is meant when a logical system is termed  
"decidable").


It's a great goal, but it is *in principle* impossible. Correctness of  
every contingency requires that the model matches the "real world", and  
Gödel's theorem shows that it's impossible to prove that.


Fact-based modelling uses language as an access point to people's mental  
models, by analysing "plausible utterances" or "speech acts" for their  
logical intent, and building a formal model that captures the domain  
*using their own terms and phrases*.

Poor though it is, there exists no better tool than natural language to  
explore and express common ground. CQL provides a two-way bridge between  
that and formal logic, so that any mathematically formal statement can  
be unambiguously expressed using natural sentences, and every fact in  
the domain can be expressed using at least one agreed sentence using a  
restricted natural grammar that is also mathematically formal (meaning,  
unambiguously parseable to a logical expression).


Each model needs to be formulated by agreement in each case. The only  
way to reach agreement is to express scenarios and formalise ways of  
expressing them, so that any acceptable statement can be analysed for  
its logical intent.

This works because every functioning business already has ways to talk  
about everything that matters to it. Fact-based modeling captures those  
expressions and formalises them, using their own words to express the  
result, so agreement can be reached. It's little use to formalise a rule  
in a way that cannot be verified by the people who proposed it - one  
cannot reach agreement that way. Many MANY development failures fall  
into the trap of "but you said... no but I meant....!", or "I didn't say  
that because it's just common sense! What kind of fool are you?"


"Every man's way is right in his own eyes" - Proverbs 21:2

People can't see the flaws in their own logic, because their logic is  
flawed. They resist methodical attempts to correct them, because they're  
already "right".


I know a number of companies that implement "10% time", for employees to  
explore any technology or personal projects they feel might be relevant  
to the business (or to their ability to contribute to it). I think  
Google is one of these, in fact, though my examples are closer to home.


Unfortunately a lot of companies view testing in the same way. As if it  
wasn't possible for them to make a mistake.

Folk who have been "lucky" a few times tend to become the "golden child"  
and get promoted. Once in senior positions they're much more likely to  
reject techniques which could discover that "the emperor has no clothes".

I look forward to hearing of your experiences with TLA+, Alloy, or CQL.
I promise that it will be worth your effort.

Clifford Heath.
