Open-Source SCPI / IEEE 488.2 Parser

Is anyone aware of – or interested in – an open-source SCPI parser toolkit? There's a vast quantity of software (and people) that can talk to SCPI-aware instrumentation, but relatively little for implementing these things to talk to ... not even for implementing the parts that are supposed to be universal.

My past searches spotted an occasional request every couple of years. These have led to several dead ends and papers hinting that free implementations might exist, but no code yet that's very useful for building this standard, text-based interface into embedded devices.

In general, I'm interested in resource- and cost-efficient ways of generating standards-based interfaces that people can operate manually (via tty or a GUI layer), that other machines can operate automatically, that are transport-neutral, and that can expand up or strip down without a lot of rework. Having developed some of my own code to meet these kinds of goals, I'm now mulling what's next. That might mean open-sourcing at least the parser-related parts I've written, or I might look into contributing to an existing open-source effort, if anyone can suggest one that aligns well.

If you know of anything you consider useful, please consider sharing for the benefit of those who will surely repeat this search in the future!

Should you have an interest but are unable to contribute something helpful right at this moment, or if you'd rather not post here, I'd still be happy to hear from you directly about this subject.

< Lionel
Lionel D. Hummel (Hummel Automation)

I don't think I would be able to help, but yes, I have looked for this too.

--

John Devereux
John Devereux

As have I. I saw a couple of paid options out there, LEX/YACC-based I believe, that weren't too abusive.

--
Rob Gaddi, Highland Technology
Email address is currently out of order
Rob Gaddi

I've also written such a query/command parser/interpreter for instruments, and I have also looked, perhaps a year or so back, to see what others have done here and may have released. I could also consider releasing the code I've written, but there would be plenty of work involved in making it useful, well documented, and, besides that, more generally useful. After looking, I took the impression that either I have too limited a perspective, or else few are actually attempting to implement 488.2 in their products and so few really care. I could contribute some time, though. And I do own the code I wrote, though I'd probably want to modify it now or else abandon it for something better.

The front end that the instrument user would see is clearer to me in such a project, since that is exactly what the standard addresses. The back end standard for a general library that ties into the guts of an application in an instrument, less so. Perhaps you could help clarify your vision on that part.

Jon

Jon Kirwan

"My past searches spotted an occasional request every couple of years. These have led to several dead-ends and papers hinting that free implementations might exist, but no code yet that's very useful for building this standard, text-based interface into embedded devices."

You're right. In 2002 the author of a for-sale SCPI parser announced his product on this newsgroup:

formatting link

The link in the article is now defunct. Use this link instead:

formatting link

The price is $700, which sounds like a bargain if you're in the biz of creating a commercial SCPI device.

If you buy it, please let us know how you like it.

JJS

John Speth

Hmm. Thanks for the reference. Interesting to see what else was done to make a product out of a collection of routines. My own code appears to be organized about the same -- a simple table of ASCII commands with function pointers allows almost trivial expansion to add/modify commands/queries, and each function has a clearly defined interface, with clearly defined support functions for additional processing of the rest of the command in standardized 488.2 fashion.
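A minimal sketch of the sort of table I mean (all names, commands, and the ID string are hypothetical, not from any particular product):

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Sketch of a table-driven dispatcher: each entry pairs a command/query
 * header with a handler, so adding a command is one table line plus one
 * function. */
typedef int (*scpi_handler_t)(const char *args, char *reply, size_t rlen);

typedef struct {
    const char    *header;
    scpi_handler_t handler;
} scpi_command_t;

static int handle_idn(const char *args, char *reply, size_t rlen)
{
    (void)args;
    snprintf(reply, rlen, "Acme,Widget1000,0,1.0");  /* made-up ID */
    return 0;
}

static const scpi_command_t command_table[] = {
    { "*IDN?", handle_idn },
    /* more entries here... */
};

/* Find the header and run its handler; -1 would become a -113
 * "Undefined header" error in a real device. */
int scpi_dispatch(const char *header, const char *args,
                  char *reply, size_t rlen)
{
    for (size_t i = 0;
         i < sizeof command_table / sizeof command_table[0]; i++)
        if (strcmp(command_table[i].header, header) == 0)
            return command_table[i].handler(args, reply, rlen);
    return -1;
}
```

The linear search is fine for small tables; a bigger grammar would want a tree keyed on the colon-separated mnemonics instead.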

The value is in the included specialized templates, documentation, and after-sale support, I suppose.

This answered my question about standardizing the back side. It's nothing remarkable. Just what I'd cobbled up on my own without much novel insight.

And thanks for the link. Nice to know.

Jon

Jon Kirwan

An open-source alternative would be a good mini online project to start.

bigbrownbeastie

Let me explain the vision you're asking for by modifying the front-end/back-end description a little and instead discuss the slightly different architectural narrative that I implemented:

The input to the parser is a buffer containing a program message.

The real "front-end" in this case is an external design element in front of the parser that fills the buffer, protects against input overrun, etc. In fact, the front-end -- along with the back-end -- looks more complicated to provide a general implementation for than the SCPI-related elements in and around the parser itself. VXI, GPIB, a "shell", a test harness -- you name it -- could be the front-end that feeds the parser. The only really essential front-end piece to any code distribution would be a test harness of some sort.
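As a sketch of that front-end responsibility (transport left abstract; the buffer size and names are hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Sketch of the front-end job: accumulate bytes from any transport into
 * a program-message buffer, guard against overrun, and report when a
 * complete, newline-terminated message is ready for the parser. */
#define MSG_BUF_LEN 64

typedef struct {
    char   buf[MSG_BUF_LEN];
    size_t len;
    int    overrun;
} msg_buffer_t;

/* Feed one byte; returns 1 when a complete, un-truncated message is
 * ready in buf (NUL-terminated), 0 otherwise. */
int msg_feed(msg_buffer_t *m, char c)
{
    if (c == '\n') {
        m->buf[m->len] = '\0';
        return !m->overrun;   /* an overrun message is reported as bad */
    }
    if (m->len + 1 < MSG_BUF_LEN)
        m->buf[m->len++] = c;
    else
        m->overrun = 1;       /* drop the byte rather than corrupt */
    return 0;
}
```

Whether the source is a UART ISR, a GPIB driver, or a test harness reading a file, the parser never needs to know.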

The output of the parser is a "command vector", representing the sequence of program message units that the parser decoded from the input buffer.

How the "guts of an application" go about dispatching command vectors should not concern the parser terribly much, as long as the parser and the other parts of the device have compatible views regarding the contents and semantics of this command vector.

A simple, sequential device might handle each program message unit via nothing more than a synchronous call-back. A device that supports overlapping commands would more likely implement a system of work queues, asynchronous events, and so forth. That starts us down a road that is more OS- and platform-specific, proprietary, and I think would take a lot more effort to keep contributors happy on a small project. In the absence of one-size-fits-all answers for dispatch and many of these other innards, I don't see doing too much more for the "back-end" other than agreeing on a helpful generalization of the command vector interfaces.
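For the simple sequential case, a sketch of the command vector and its synchronous dispatch might look like this (using ';' as the 488.2 program message unit separator; sizes and names are hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Sketch of a "command vector": the parser's output is the sequence of
 * program message units it decoded from one program message. */
#define MAX_UNITS 8

typedef struct {
    const char *unit[MAX_UNITS];  /* pointers into the caller's buffer */
    int         count;
} command_vector_t;

/* Split a program message in place on ';' (the 488.2 program message
 * unit separator).  Returns the number of units found. */
int parse_message(char *msg, command_vector_t *vec)
{
    vec->count = 0;
    for (char *tok = strtok(msg, ";");
         tok != NULL && vec->count < MAX_UNITS;
         tok = strtok(NULL, ";"))
        vec->unit[vec->count++] = tok;
    return vec->count;
}

/* A simple sequential device: one synchronous callback per unit. */
void dispatch_vector(const command_vector_t *vec,
                     void (*cb)(const char *unit))
{
    for (int i = 0; i < vec->count; i++)
        cb(vec->unit[i]);
}
```

An overlapped-command device would replace `dispatch_vector` with queue postings, which is exactly where the platform-specific choices begin.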

Useful collaboration doesn't have to completely end with the parser. Additional ways a toolkit might reduce the effort to implement a compliant device interface include: Helping structure and manage the standard's register and other user-visible data models, formatting the device's response messages, auto-generating tree code and compliance docs from more simplified representations, and offering sample stubs for mandatory commands.

< Lionel
Lionel

Among potential participants, are there any opinions on licenses (GPLv3 + LGPL, or ?) to encourage the most usage and best contributions? Licensing would need to address statically linked modules, runnable code and docs that are auto-generated from user-supplied descriptions, examples and templates, and host-runnable utilities and tests (some based on examples from real-world devices). When considering a preferred answer to questions like this, I recommend Van Lindberg's classic "Intellectual Property and Open Source" along with its suggested references, though my ear is open for any updated information out there.

How about favorite repositories for embedded-type projects (SourceForge, GitHub, Google Code, or ...)?

Lionel D. Hummel (Hummel Automation)
[grrr... no line breaks so I'll just clip your lines at ~80 chars]

I've been exploring techniques for implementing comm protocols in more "mechanical" ways, the obvious first approach being lex/yacc. (I've been implementing different protocols in different ways in an attempt to develop criteria for where each approach best applies -- with hard numbers instead of "folklore" to back up each assessment.)

The problem that I've encountered is that the "mechanisms" -- to be general purpose -- tend to be resource hungry and hard to characterize (time and space). Of course, my ideas of "{resource,cost}-efficient" may differ from yours, as well as the environment(s) in which the code is intended to operate.

E.g., lex and yacc like to allocate large buffers -- though you could tweak this *if* you understand the time-space tradeoff and the details of your particular grammar; but you still end up dealing with malloc() and its ilk -- and the consequences that might have in your environment (single heap? multithreaded implementation? etc.)

As with any project, I think you have to clearly state the design goals and criteria in a manner that makes it easy for folks (during the design effort as well as evaluating your work for their later use) to determine how appropriate it would be. Adding an interface to an entry-level DMM probably has very different design goals/criteria than for a high-end DSO :-(

I would suggest an approach that lets users opt for whatever subsets of the interface they want to support (e.g., even legacy/obsolescent verbs, etc.) so that they don't end up dismissing the package as "too bloated".

And, of course, a means for extending it for vendor-specific verbs and nouns (along with a way to evaluate the costs of those extensions)

D Yuniskis

Eh, sorry. First time using Google Groups to post to USENET. Suddenly I know what it must have felt like to be on AOL twenty years ago. Blush.


I must not be understanding you, because I missed the point of using lex/yacc to explore comm protocol implementations. Most of the challenge I've encountered in protocols comes from their distributed dynamics, which brings to mind whole different classes of tools. Actually, even among parser generators, as soon as I discovered ANTLRworks, it surpassed lex/yacc as my ideal for parser-related explorations.

Given what you say, could there be better general-purpose mechanisms for the situation? Generalization can conflict with optimization, but often, it's key. Examples shouldn't be very hard to identify.

Also, as I think you imply, tools which help characterize and apply the winning optimizations are more beneficial in skilled hands than is something cranky/inflexible that out-of-the-box gets you close but in the end no cigar.

One thing I wish for is an open-source parser-generator for embedded that strives to be great at optimizing in ways that are important for resource-constrained designs.

No project has been formed yet. I've been exchanging real and potential specifics with those who contact me directly, and meanwhile shoring up a reasonable basis for kicking one off.

To all who are potentially interested in an open-source SCPI parser toolkit project: Please suggest any goals or criteria upon which your own participation would hinge.

In SCPI, there is much like this that is designated as optional. Other parts of IEEE 488.2 are excluded for the sake of a simpler implementation and more predictable behavior. I do think it is better to arrange something small and responsive, with pieces you could see adding, than something big that leaves you wondering what would remain after trying to understand and gut what you don't actually need. Along these lines, I'd also look for something very easy to tailor beneficially yet correctly per the specification, so as not to encourage a proliferation of non-conforming implementations.

I appreciate your comments,

< L
Lionel D. Hummel (Hummel Automation)

The problem *I* encounter with "mechanized" parser generators is exactly one of resource utilization. Since SCPI distances itself from 488.1 "hardware", the price points of the types of devices that *could* want such an interface slip even lower. E.g., the ubiquitous (inexpensive) UARTs on so many MCUs almost *beg* for this type of use.

In the case where you have a small grammar *or* a grammar that you can carefully *create* (note this is not the case with SCPI), you can implement parsers that have deterministic behavior and very low resource utilization more readily by abandoning the automated tools approach.

For example, given "SY" (SCPI), you know:

- this must be followed by "ST",

- which might optionally be followed by "em",

- that anything else signals an error,

- and, that you don't need to hold onto *any* of these characters etc.
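A hand-rolled, allocation-free check along those lines might look like this sketch, here validating a complete mnemonic against its long form (e.g. "SYSTem"); the details are illustrative, not from any shipping parser:

```c
#include <assert.h>
#include <ctype.h>
#include <stddef.h>
#include <string.h>

/* Sketch: accept a SCPI mnemonic if it equals either the short form
 * (the uppercase prefix of the long form, e.g. "SYST") or the full long
 * form ("SYSTEM"), case-insensitively.  Anything in between, such as
 * "SYSTE", is rejected per the SCPI mnemonic rules. */
int scpi_match(const char *longform, const char *input)
{
    size_t shortlen = 0;
    while (isupper((unsigned char)longform[shortlen]))
        shortlen++;                       /* length of the short form */

    size_t full = strlen(longform);
    size_t len  = strlen(input);
    if (len != shortlen && len != full)
        return 0;                         /* neither short nor long */

    for (size_t i = 0; i < len; i++)
        if (toupper((unsigned char)input[i]) !=
            toupper((unsigned char)longform[i]))
            return 0;
    return 1;
}
```

A truly incremental version would compare character-by-character as bytes arrive, keeping only a position counter as state.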

Similarly, when parsing a numeric, you know you can *discard* the digits *as* you receive them.

I.e., the "state" of the parser directly reflects the input encountered "thus far" which makes the input, itself, "redundant" -- along with the time (because you stalled the parser waiting for more input) and space it requires.
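E.g., a streaming numeric parse that discards each digit as it arrives (a sketch only; sign, fractions, and overflow are left out):

```c
#include <assert.h>

/* Sketch: parse a numeric by accumulating its value as digits arrive.
 * The characters themselves are never stored -- the running value *is*
 * the parser state. */
typedef struct {
    long value;
    int  seen;   /* nonzero once at least one digit has been consumed */
} num_state_t;

/* Feed one character; returns 1 if it was consumed as a digit. */
int num_feed(num_state_t *s, char c)
{
    if (c >= '0' && c <= '9') {
        s->value = s->value * 10 + (c - '0');
        s->seen  = 1;
        return 1;
    }
    return 0;  /* not a digit: the caller handles terminators/errors */
}
```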

At least in the case of lex/yacc (flex/bison), they also are written assuming resources are plentiful -- and that they can increase their needs as they see fit (malloc). Despite having COMPLETE KNOWLEDGE of the grammar, they are unable to predict the worst-case memory requirements that *they* will incur (granted, some grammars can have "unbounded" requirements). To make matters worse, the user of such tools needs intimate knowledge of how the tool works and all of the subtleties of "his" grammar in order to make this prediction -- or even fabricate a test case!

(i.e., any solution *you* are likely to come up with that allows the user to extend your grammar has to consider how the person doing that will address this issue -- "How do I know how much resources to set aside for my parser's use given that I am adding the following nouns/verbs to the interface?")

With "hand crafted" parsers, the writer is much more aware of how much memory is being used as well as *where*. Less magic involved.

But, avoiding these mechanical ways of developing parsers makes them harder to code, harder to extend and harder to test. I.e., even for a hand-crafted parse of a simple numeric, you have to verify a wide variety of potential "numbers" to be sure the code doesn't suffer from boundary errors ("hmmm... that should have been 0...").

The problem I see with SCPI is that the grammar is big enough to discourage NON-automated solutions. And, the "options" that it tolerates (?) contraindicate any other sort of solution. I think it would be a much bigger undertaking to come up with the various parsers required for each instrument class, each set of "options", support for obsolescent/obsolete commands, etc. if you tried to do this in any way *other* than a mechanized parser generator.

But, my initial comments represent the flip side of that coin -- how do you trim the resource requirements that these approaches incur? Or, do you just say, "the price of admission is _______" and not worry about applications that can't meet that cost?

I.e., do you try to support "minimal conforming" devices as well as those with richer interfaces WITH THE SAME SOLUTION?

How does a potential adopter know if, for example, he can retrofit your implementation into his existing application (retaining *exactly* -- no more, no less -- the interface that said instrument currently supports)? I.e., how does he evaluate whether *your* implementation is worth adopting, going forward (unless he can evaluate it against his current implementation -- without having to actually *adopt* it!)

That speaks to the issue I mentioned above -- how does an "adopter" know the consequences of the changes/additions/etc. that he needs to make to the implementation? And, does any work that he opts to "contribute" back to the project run the risk of "missing the mark" (in terms of being created in a manner that is consistent with these other outlined goals)?

I'd be interested in this as a *general* tool...

That was what started me down my own "evaluation path". Too many "things" to "talk to" (withOUT the benefit of a consistent, e.g., SCPI-ish interface) and too many opportunities to introduce bugs in "hand crafted" parsers. But, the cost of the automated approach was disturbing.

Agreed -- as long as that implementation scales well. *And*, as long as it is "relatively obvious" as to how one adds in those other aspects (as well as their own "customizations")

D Yuniskis
