Best Scripting Language For Embedded Work?

Definitely on the host side only (Windows 7).

Thanks, DTA

Reply to
David T. Ashley

Correct, Windows 7 only.

DTA

Reply to
David T. Ashley

For some types of embedded work (avionics, safety-related automotive) it makes sense to be as conservative on the build process as possible.

Because the scripting language interpreter can potentially affect the code generated for the target system, it needs to be approached with caution. (What I possibly didn't mention in the original post is that the scripts may generate portions of the target code automatically as part of the build process.)

The most conservative approach is to use a monolithic executable as the script interpreter and to guarantee, by byte-for-byte comparison or cryptographic hash, that it is the expected one.

Saying "you should have Python X.X" isn't quite strong enough.

DTA

Reply to
David T. Ashley

I neglected to mention that the scripts may have a safety-related impact, as they generate or affect some of the target code. Hence the unusual bag of requirements, including the monolithic executable.

DTA

Reply to
David T. Ashley

David, the Tcl community offers freewrap, which enables assembling a bag of scripts into a single Windows executable.

The "wrapped" files are in a weird ZVFS "file system" but the wrapped files can be copied to a temp directory for manipulation. The main executable does not need to be so copied; this would be for .dll files and the like.

--
Les Cargill
Reply to
Les Cargill

So I'd control the generated source as if it were ... source code.

--
Les Cargill
Reply to
Les Cargill

"Monolithic executable" gives you absolutely nothing in this respect, especially as you asked for a "monolithic executable" for the interpreter but separate script files.

Your compiler toolchain has a bigger effect on the target code, and it is most certainly not a "monolithic executable". Your program source code also has a big impact on the target code, and it is not a "monolithic executable".

Please go back to the /real/ requirements, such as the safeguards and checks you need, and work from there. Ask whoever came up with the "monolithic executable" requirement to step aside and let people who understand safety and reliability figure out what you need.

Reply to
David Brown

That's just nonsense.

If you had any safety and security requirements, your first step would be to drop Windows like a hot potato, and go for a system with stronger reliability and security.

There is very little point in doing such checking on the interpreter unless you also check the rest of the system. The easiest way to deal with this is to have a minimal system with read-only filesystems - again, you are pointed clearly away from Windows and towards Linux.

If you want to do comparisons or hashes on files, you could also learn to do it on multiple files at once - it's not that much harder than on a single file. (tar the directory with the output piped into md5sum, for example.)
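
If you would rather do it in the scripting language itself, the same idea is only a few lines of Python. Just a sketch - the directory name is hypothetical, and the point is simply to walk the tree in a fixed order so the fingerprint is reproducible:

    import hashlib, os

    def hash_tree(root):
        """Hash every file under 'root' in a fixed (sorted) order."""
        h = hashlib.md5()
        for dirpath, dirnames, filenames in os.walk(root):
            dirnames.sort()                          # fix the traversal order
            for name in sorted(filenames):
                path = os.path.join(dirpath, name)
                rel = os.path.relpath(path, root)
                h.update(rel.encode("utf-8"))        # mix in the name, not just the contents
                with open(path, "rb") as f:
                    h.update(f.read())
        return h.hexdigest()

    print(hash_tree("generated_output"))             # e.g. the directory of script output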

Even this is not much help - it is the /output/ of the scripts that you should be checking, and you can do that regardless of how it is generated.

Then pick precise versions of everything - clean-install the OS from a DVD, with no updates of any kind (your safety/security system will not be connected to a risky network anyway). For Windows, install the specific Python version that was downloaded and tested.

I fully understand that you need to have tight control over your tools - and a bit tighter control than just saying "Python 2.6.x". I am a great believer in considering development tools to be part of a development project, and therefore archived and version controlled in the same way. But the "interpreter must be a monolithic executable" requirement is the silliest I have heard for a while.

Reply to
David Brown

For a brief time, right before some unrelated s**t hit the fan, I worked at a company that controlled their final build machines this tightly.

It was actually fairly easy: they used drawer-mounted hard drives, with the entire environment loaded and ready. Each product had its own build hard drive; when it came time to build an official release, or a release candidate for testing, the hard drive went in the drawer, the computer was switched on, the source tree was downloaded to a separate hard drive, and the build technician went to the appropriate directory and typed "make" (or, on some products, "make production").

It worked quite cleanly, with a couple of build technicians who were smart young women with no formal software training but loads of technical acuity and very responsible attitudes toward what they were doing.

Then our company prez was discovered cooking the account books, and the company nearly went up in smoke.

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com
Reply to
Tim Wescott

The computer was connected to a network while this golden hard drive was plugged into it like an ordinary drive? Sounds like a good way to catch a virus on the golden drive. Better to use hardware write protection.

Heh, the non-technical failures are still possible no matter what is done about the technical ones.

Reply to
Paul Rubin

I wasn't closely enough involved to know for sure -- but knowing the people involved, that situation was considered and probably dealt with one way or another.

Yup. We're all just humans here, trying to live our imperfect lives in an imperfect world.

--

Tim Wescott 
Wescott Design Services 
http://www.wescottdesign.com
Reply to
Tim Wescott

I am with David on this score. If the target is Safety Related then you need to think things through more carefully. You asked about a scripting environment that was a "monolithic executable". Do not be surprised when I suggest that Forth would certainly fulfil that requirement, although depending on some of the other things you may need it might not be the best fit. Forth is extensible and can become an Application Specific Language, but it takes time and effort to construct a robust system that way.

As for the Safety Critical requirements in your target system, is that target system connected, or just the recipient of the eventual compiled image? What means are you going to use to ensure the transfer mechanism is clean and free from defects, viral incursions and Trojans? How good are your version control and change management systems, and is the development environment clean and free from viral incursions and other Trojans? How secure is your development system against unauthorised alteration?

Those are just a few of the questions you need to answer before you can feel that your final target product is going to meet the Safety Requirements. A Windows system, I think, is harder to keep clean and secure. Even Linux systems have their security flaws, but they tend to be less vulnerable. Adding a decent firewall around your development enclave can help, but total separation (physical and electronic) will help even more.

The final question I will pose is "does the target system require certification regarding its Safety Integrity Level or Performance Level?" If the answer to that is yes, then you should be able to prove the veracity of your certification with strong evidence to back it up.

--
******************************************************************** 
Paul E. Bennett............... 
Forth based HIDECS Consultancy 
Mob: +44 (0)7811-639972 
Tel: +44 (0)1235-510979 
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk.. 
********************************************************************
Reply to
Paul E. Bennett

Frankly, for safety related stuff Linux sucks about as bad as Windows. If safety is really an issue you have to use a certified OS, a certified toolchain and a development process that meets certain standards. And you have to get it right from the get-go. The least of your worries would be whether or not the scripting engine used for your build process is a monolithic executable (which will never be the case on Windows, as each Windows executable uses at the very least one Windows DLL).

Reply to
Dombo

This has actually been my practice for some time. I keep completely separate drives on the shelf and simply rack them in and boot. All customer projects get a separate disk drive where I load up a clean operating system that conforms to the one the customer will use (if it is a Windows product, for example, then the exact release they want testing done for is used... and if several, then I will create a set of virtual machines on that same drive for such additional testing.)

My reason was to isolate development for one customer from another. If a crash occurs, there is only one drive mounted and there is NO POSSIBLE WAY other customers are affected by the work I'm doing for someone else. I didn't even want the remote chance to exist that a serious problem happening while working on one project would impact anyone else. Disks are simply too cheap to not do this.

This also means that if a disk simply dies and there is no recovery of it, again it affects only one customer, as modified by my backup/retention practice.

The practice also provides some security, as I'm not using cloud based storage. (Some customers have actually insisted on it, too.)

So far, it's worked as well as I could have hoped.

Jon

Reply to
Jon Kirwan

Better still, get your build system set up the way you want it, then turn it into a live bootable DVD (of course, you want to do this with a minimal Linux system here - no Windows, and only the Linux programs that you /really/ need for the build/production). Then you can simply boot from DVD - if your source is not writeable, you don't need to fear it getting corrupted.

Reply to
David Brown

I understand what you mean about "certified OS" and "certified toolchain" - but the practicalities are that this will almost certainly not be possible. The best you can usually do is make sure you work with tools (OS and toolchains) that are as reliable as possible, that are well known and well tested, that are as small as you can get, tested as well as you can, then frozen. Even if the OS and toolchain are "certified", you still need to go through this same process.

Remember, early Windows NT once had very high security certification. The only problem was, the certification was only valid if the machine had no network, no communication via serial ports or other external ports, and no diskette drive (this was before USB). You /were/ allowed to connect a keyboard, mouse, and screen - but you were probably not allowed to touch them without invalidating the certification.

Of course there are bugs in Linux, and there are security flaws - but I don't think anyone seriously views it as being "about as bad as Windows". And with Linux (or other *nix, including the BSD's, Solaris, AIX, etc.), you can improve the situation enormously by only including what you need. Your build machine will be completely unaffected by flaws in Firefox, or Java, or X - because it doesn't have them on the system.

But if you /can/ get certified software, it certainly won't make the situation any worse.

Reply to
David Brown

The same can be accomplished with virtual machines. I'm using separate VMs for separate projects, with networking disabled, but shared drives enabled to transfer the final build products to the host machine for distribution.

VMs have one plus in that the virtual disks can be made read-only, or immutable (in VirtualBox parlance), meaning they can be written to (if the tools require it) but will revert to the original state on power-off.

One minus is that a (physical) disk failure can now take several VMs down, so careful backup procedures are essential.

--
Roberto Waltman 

[ Please reply to the group, 
  return address is invalid ]
Reply to
Roberto Waltman

I know projects where the interpreter is checked into the VCS along with all the code. That means the exact version is known, and a fresh checkout also gets the necessary tools to run the build.

But that's not a 'monolithic executable' requirement, it's a 'portable interpreter' requirement. It doesn't matter how many files the interpreter is made up of, only that it'll work from a checkout on a random filesystem. In other words, it doesn't need 'installing', registry settings, stuff in the Windows directory, etc.
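
In practice that can be as simple as a driver that resolves everything relative to the checkout. A sketch, assuming the interpreter is checked in under tools/ (all names are hypothetical - in real projects this is often just a batch file or a makefile rule):

    import os, subprocess, sys

    # Everything is found relative to the checkout, so nothing needs installing.
    REPO_ROOT   = os.path.dirname(os.path.abspath(__file__))
    INTERPRETER = os.path.join(REPO_ROOT, "tools", "python", "python.exe")
    CODEGEN     = os.path.join(REPO_ROOT, "scripts", "generate_tables.py")

    # Run the checked-in code generator with the checked-in interpreter.
    sys.exit(subprocess.call([INTERPRETER, CODEGEN]))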

Theo

Reply to
Theo Markettos

No, the same cannot be accomplished that way. If software "goes wild" and at a deep level starts scribbling on a different drive, different partition, etc., then all is lost. I've had that happen exactly once. It was NOT GOOD.

The level of security achieved with physical isolation simply cannot be duplicated with virtual machines.

That said, I use VMs. But for a single project. So if a project requires tested support for WinXP and with several different service packs installed, support for Win7 in similar fashion, support for Vista, etc., then I will set up VMs to achieve that. Lots of them, if needed. All on the same drive. But that is for a single project.

That's simply not true. Protected mode protections can be bypassed (and are.) I'm not saying it's likely. But there exists code (for good reasons) that sets up these protected regions, and it's possible for it to crash in ring 0 and enter code with the wrong parameters, unchecked. Or a virus, I suppose. I take no such chances. Physical isolation is simply too cheap, and too good, to pass up. I simply have nothing to worry about that way and it costs nearly nothing to provide. There is no excuse, as I see it.

I use a network server system for daily support, plus periodic IMAGE COPIES of disks. Again, disks are cheap.

Jon

Reply to
Jon Kirwan

Your fear seems to be that by installing a script interpreter that has a bazillion files of library modules (like Perl and Python do), you become more dependent on those files than you would on a single .exe file.

That's true, but even if you have a monolithic interpreter, nothing stops it from being dependent on its environment. Think of things like the console character set. Other locale settings like time formats. Heck, it could even be something crazy like address-space layout randomization causing hashtables to come out in different orders today and tomorrow. Plus, there are no real monolithic executables: everyone needs kernel32.dll (or /lib/libc.so).

So it boils down to this: you have to control the environment as well as possible. And if you have control over how the interpreter searches for its libraries, there's no problem with using them.
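
For Python specifically, controlling the environment can be as simple as launching the interpreter with a pinned-down environment block. A sketch only - the paths are hypothetical, and PYTHONHASHSEED is only honoured by the newer 2.x/3.x releases:

    import os, subprocess, sys

    env = {
        "PYTHONHASHSEED": "0",                    # fixed hash ordering between runs
        "PYTHONPATH": r"C:\buildtools\pylibs",    # only the libraries you actually ship
        "LC_ALL": "C",                            # fixed locale, so formatting doesn't drift
        # Windows child processes generally need SystemRoot to start at all:
        "SYSTEMROOT": os.environ.get("SYSTEMROOT", ""),
    }

    sys.exit(subprocess.call(
        [r"C:\buildtools\python.exe", "generate.py"], env=env))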

Stefan

Reply to
Stefan Reuther
