GUI application for embedded systems

Nothing much to it. The pipe works by patching the STDOUT from the first program into the STDIN of the second. As long as the programs work with the POSIX conventions of STDIN and STDOUT it's not language-centric at all.

Mel.

Reply to
Mel Wilson

Yep, I know what a pipe is. I've been writing shell scripts for 30+ years. What I didn't understand was when you described Python as a programming language that is "used in pipes under BASH".

--
Grant Edwards               grant.b.edwards        Yow! I think I am an 
                                  at               overnight sensation right 
Reply to
Grant Edwards

True enough, although I'd still call that an implementation detail of "interpreted".

The most commonly used Python compiler

No doubt, and VMs are frequently good enough these days.

cat file | python myScript.py > zee

That's because an awful lot of software was written in 'C'.

No. My point is that it's not the shell, like bash.

--
Les Cargill
Reply to
Les Cargill

My meaning was that when Python is apparently used in scripting, it's not the scripting language; filters are written with it and then a shell is used to pipe things together.
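A "filter" in this sense is just a program that reads stdin and writes stdout so the shell can pipe it; a minimal Python sketch (the transform here is only a placeholder):

```python
#!/usr/bin/env python
# A minimal "filter" in the Unix sense: read lines from stdin,
# transform them, and write the result to stdout so the shell can
# pipe it, e.g.:  cat file | python upper.py > zee
import sys

def transform(line):
    # Placeholder transformation; a real filter would do real work here.
    return line.upper()

def main(instream=sys.stdin, outstream=sys.stdout):
    for line in instream:
        outstream.write(transform(line))

if __name__ == "__main__":
    main()
```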

--
Les Cargill
Reply to
Les Cargill

There is a saying - when you are in a hole, stop digging.

Python is used for an enormous range of tasks. That includes websites and web frameworks, general applications, scientific applications, games, test systems, embedded development tools, systems software, utilities, scripts, GUI programs, CLI programs, and pretty much anything else you can think of except small embedded systems (there are limited versions of Python available for pretty small processors, but these are mostly experimental projects).

Python is a byte-compiled language, not an interpreted language (though the byte-compiler is built into the standard virtual machine). There are byte-compilers for Python to VM's other than the standard one (cpython), such as for .net or java VM's. There are JIT compilers such as pypy, which can result in speeds comparable to typical C code for some types of tasks (though usually Python code on a standard VM is significantly slower than compiled C).
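The byte-compilation step is visible from the language itself: every function object carries its compiled bytecode (a small sketch; exact opcodes vary between CPython versions):

```python
# Demonstrate that CPython compiles source to bytecode before the
# VM executes it: every function object carries a code object.
import dis

def add(a, b):
    return a + b

# The compiled bytecode is stored on the function's code object.
print(type(add.__code__))          # <class 'code'>
print(len(add.__code__.co_code))   # some nonzero number of bytes

# dis.dis(add) prints a human-readable disassembly of the bytecode
# (the exact opcodes differ between CPython versions).
```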

Of the roughly 2500 non-link files in my /usr/bin directory, 200 are python files. These are not "filters" - in fact, I don't know of any that are filters. They cover a variety of tasks - one of which is the "yum" package manager for Fedora/Red Hat.

I know (from your many posts over the years) you are experienced and knowledgeable in many things in the programming world, but Python is apparently not something you are familiar with. You would be better off listening to experienced Python users like Grant than continuing to demonstrate your lack of experience here. Even better, you might like to try to learn Python yourself.

Reply to
David Brown

Of course - I believe I stipulated that up front.

I've tried it. Meh. All I can say is that it is better than Perl. I'm not trying to give Grant or anyone a rash here; at best I could hope somebody would provide a nice reason to consider investing more in the language other than "everybody's using it."

To clarify:

- Tcl provides an event system by default. Python doesn't. There is a package.
- Tcl provides event-driven I/O by default. Python doesn't. There is a package.
- Freewrap/Teapot provide mechanisms for wrapping or packaging code for distribution without installing Tcl. Python doesn't.

--
Les Cargill
Reply to
Les Cargill

I'm not sure what you mean by this, but Python (at least on Linux) supports select and epoll, and there are some event-driven I/O modules like asyncore or the fancier Twisted that have been around for a while. I haven't tried the new coroutine-based asyncio module but it looks interesting. I've always just used threads, whose dangers are overhyped if you know what you're doing and program conservatively.
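For reference, the standard-library `selectors` module wraps select/epoll behind one interface; a self-contained sketch using a socketpair in place of a real connection:

```python
# Event-driven (readiness-based) I/O with the standard library:
# register file objects with a selector, then block until one of
# them is ready.  A socketpair stands in for a real connection.
import selectors
import socket

sel = selectors.DefaultSelector()
left, right = socket.socketpair()
right.setblocking(False)
sel.register(right, selectors.EVENT_READ, data="right end")

left.send(b"ping")                # make the right end readable

events = sel.select(timeout=1.0)  # wait until something is ready
for key, mask in events:
    msg = key.fileobj.recv(100)
    print(key.data, "got", msg)   # right end got b'ping'

sel.unregister(right)
left.close()
right.close()
```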

For Windows there is py2exe.org, and InnoSetup if you want a fancy InstallShield-like experience. Most Linux systems have Python already available so there's less need for something like that, and people just use standard Linux packaging tools.
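For what it's worth, the classic py2exe recipe was just a small setup script (the script name here is illustrative, and py2exe itself must be installed on Windows):

```python
# Classic py2exe setup script (Windows only; py2exe must be
# installed).  "myscript.py" is an illustrative name.
from distutils.core import setup
import py2exe  # registers the py2exe command with distutils

setup(console=["myscript.py"])
# Then:  python setup.py py2exe
# which produces a dist/ directory with myscript.exe inside.
```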

Reply to
Paul Rubin

Agreed. Since Python is doubtless built on 'C', I'd be surprised if select() and epoll() weren't supported for Windows.

Sure. I appear to be having some difficulty in explaining why I prefer that the interpreter itself be built around event-driven-ness but that's okay. It's always possible to use the dispatch pattern but I've gotten used to it being the default state of things.

I've been repetitive about this because I sometimes get to evangelize in real life about testing frameworks for embedded stuff. When the accusations of failure get to flying :) it's darned handy to have a test framework that demonstrates that the embedded side works, or one that exposes the defects that are there.

Threads are absolutely no problem but it's nicer to have options.

Yep; Windows is a pain with this issue.

One thing I've clarified with this thread ( thanks, guys ) is that Python is closer to a replacement for 'C' than other roles I was trying to hammer it into.

--
Les Cargill
Reply to
Les Cargill

The Python community seems to mostly hate threads and prefer Twisted, so I'm in a bit of a minority. I found Twisted confusing and too Java-like the one time I tried to use it. It has better performance than threads so I've figured I'd have to get familiar with it someday, but have avoided it so far and now I have hopes that the new asyncio library may make it unnecessary. There are also alternative languages like Erlang, Haskell, and Go, that have high-performance concurrency that looks like threads/processes to the user but is implemented as async under the hood.

I'm pretty interested in this and would be happy to hear about any recommendations you might have.

Yeah, most of my stuff in the past decade or so has been in Python. When I write anything in C these days, as I've written elsewhere it feels like a "back to nature" experience.

Reply to
Paul Rubin

Huh. Well, I'd expect them to catch up. Threads aren't *bad*; they're better when the system is run-to-completion, which isn't common these days.

Those are on the list too, but it's a real Tower of Babel out there :)

It's all good, really. The main thing will be what you're comfortable with and you won't know that until you try.

I've glommed onto Tcl because you just open a serial port or a socket and whale away. If you have, say, a USB device, I'll write 'C' to operate it and load up a pipe in Tcl to drive it.

This goes back 20 years to a hard-to-find bug on a relatively simple system where we actually built an exhaustive test jig that got run continuously for days every release. Put in instrumentation at every branch point and worked on the jig until we proved we executed every path. We generated all the permutations of events and made that into a "script" that would run repeatedly.

There are prepared frameworks, but they can be tricky and they're targeted at bespoke test as a job function, not at developer testing.

This is harder for things that implement, say, layer 1 comms stuff. But those are often tractable by connecting to both ends and writing pattern generators, or setting up and tearing down cross connects. If you can buy COTS testers it's arguably better, but sometimes you can't.

For anything complicated, I end up writing C++ wrappers for freely available 'C' libraries, generally. This is domain-dependent, of course. This is purely to make the programs shorter; it's not very "OO". I did this for the leverage, not the methodology.

FFTW and libsndfile both got this treatment.

Since my day job is embedded, I don't use a whole lot of Python. Perhaps that will change.

--
Les Cargill
Reply to
Les Cargill

Python is great on the ARM/Linux class of embedded system. It's not usable for small MCU's and probably not so great for bare metal even with larger processors. I've heard it works ok with around 2 meg of ram, though I haven't personally run it in anything that small.

I see someone has uploaded Hedgehog Lisp to github:

formatting link

This is an embedded, functional Lisp dialect with offline compilation, whose runtime is around 20 kbytes and which (given a C compiler) should be able to run in bare metal and is supposedly ok with around 256K of ram. It uses a semispace GC and maybe the footprint could be decreased further by switching to mark-sweep. I've spent a while studying it and playing with it, though I never used it for anything "real".

Reply to
Paul Rubin

On that matter, MicroPython [1] has recently been presented on comp.lang.python. It's a Python 3 implementation aimed at microcontrollers, with the main (only?) big difference being the lack of Unicode strings (a main point in the Python 2/3 difference in string handling).

I have no experience with it, I just read around about it and from what I can see it requires a 32bit MCU so the MSP430 is not suited for it. AFAICT the minimum memory requirement is as low as ~64 kB. Not sure how big the full standard library is.

It also works on regular desktop OSes as well so you can use the same environment on your regular computer and it's said it offers equal-or-better performance than CPython.

Just FYI, it may come in handy to someone.

[1]
formatting link
--
Andrea
Reply to
Andrea D'Amore

I guess that was true pre-micropython but the reference design for micropython runs an STM32F405RGT6 with 192 kB RAM which seems pretty interesting.

Reply to
Anssi Saari

I wouldn't say they "hate" threads, but there is a surprising amount of fear surrounding threads and a lot of warning/moaning about how hard it is to get programs using threads to work right. I honestly don't understand where that comes from. I use Python's native threading features a lot, and I never have any problems. I _think_ the problem is that a lot of Python users come from a Windows/web-hacker background and have absolutely zero training on or experience with multi-tasking (or any other aspect of computer science or software engineering, for that matter).

IMO, for anybody who has ever used multiple threads in an embedded system (and that includes interrupts) or with Posix threads, Python's threads are dead simple and nearly fool-proof.
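As a sketch of how little ceremony Python's native threading needs, here is the standard lock-protected-counter idiom with the stock `threading` module:

```python
# Plain Python threads with a lock around shared state: start a
# few workers, join them, and the total comes out right.
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:            # protect the read-modify-write
            counter += 1

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # 40000
```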

I still write Linux kernel code in C and embedded stuff in C. User-space application code for Linux/Windows has been almost exclusively Python for 15+ years.

--
Grant Edwards               grant.b.edwards        Yow! UH-OH!!  We're out 
                                  at               of AUTOMOBILE PARTS and 
Reply to
Grant Edwards

(guessing here) It probably comes from issues related to serialization and locking.

Does the Python toolkit have "run to completion" in threads ( relative to other threads in the bytecode interpreter ) or do you have to do locking?

I agree; it's not difficult, but if Python is positioned as a "popular" language "they" may not have gotten to the part in the Dragon book about P and V operations.

That's extremely likely, I think.

They should be; I'm glad to hear that.

--
Les Cargill
Reply to
Les Cargill

As Les suggested already, it mostly is about serialization and locking. Studies have shown repeatedly that the majority of programmers will make a horrible mess of coordinating access to shared resources and, in particular, of algorithms which require multiple locks or which lock/unlock recursively.

Studies also show that, in the absence of GC, too many programmers can't keep track of who is responsible for deallocating objects that are shared or passed around.

This is the reason for the rise of "managed" environments which automatically handle deallocation and, at least, simple cases of object locking.

It's a wide-spread problem that has little to do with what language or operating system is in use. The skill level of the average desktop/server software "developer" now is only slightly better than "script kiddie". They are able to plug together library functions and routines scavenged from other sources, but largely are incapable themselves of coding those things in the first place.

I'm not referring to actually difficult things that require expert knowledge, but rather to really basic things like, e.g., writing code to search/modify a string or to manipulate a binary tree.

The fact is that the vast majority of "developers" now have neither formal education nor any training in the domain for which they are writing applications. Not infrequently, I see questions that really scare me. I think we've all seen things for which we've thought - and sometimes said publicly - "Hey! this task is way above your skill level."

And thread safety makes everything slower in the simple cases where it is not needed. 8-)

Understand that I'm not in any way opposed to leveraging a helpful programming environment ... but neither would I be helpless if that environment suddenly were taken away. Far too many programmers are completely dependent on a helpful environment and cannot function without it.

YMMV, George

Reply to
George Neuner

On 15.7.2014 21:50, George Neuner wrote: > ....

I believe this is key, and perhaps understated. It is not only about people becoming dependent on having a few things to click on and expecting a result; having to do more yourself and go deep down to the lowest level is important to keep a programmer's head in shape. Not all the time, since one does need the efficiency the higher level provides, but once a day, or at least once every few days, would be OK I guess. If I have been doing hardware and not programming for a few months, it may take me weeks (typically two) to become again the programmer I am used to thinking I am.

Dimiter

------------------------------------------------------ Dimiter Popoff, TGI

formatting link

------------------------------------------------------

formatting link

Reply to
Dimiter_Popoff

......

Can someone PLEASE explain to me, REALLY, the difference between a mutex and a binary semaphore? In the FreeRTOS implementation, the only difference I can see is the potential "priority elevation" thingie.

Also, the difference is in how it is used. A semaphore starts out unposted and blocks until a post event occurs. A mutex starts out "available," then some task takes the token (making it unavailable), does its thing, and then returns the token (making it available again).

So usage of a semaphore requires one access per event, while usage of a mutex requires two accesses per use.

Is this right?

So is that it??!?

--
Randy Yates 
Digital Signal Labs 
Reply to
Randy Yates

Usually a mutex is in one of two states: either owned by some process (and therefore locked), or else unlocked (so any process can acquire it). In other words, it's a one-bit value indicating whether you have ownership of a resource.

A semaphore on the other hand contains an integer rather than a bit, so it can keep track of how many of some replicated resource are in use:
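In Python's standard library the distinction shows up directly as `threading.Lock` (one bit of ownership) versus `threading.Semaphore` (a counter); a small sketch:

```python
# A mutex/lock is binary: held or not held.  A counting semaphore
# tracks how many units of a replicated resource remain.
import threading

mutex = threading.Lock()
acquired = mutex.acquire()        # take ownership
# ... critical section ...
mutex.release()                   # give it back

pool = threading.Semaphore(3)     # e.g. 3 identical resources
got = [pool.acquire() for _ in range(3)]   # all three now in use
free = pool.acquire(blocking=False)        # a fourth attempt fails
print(free)                       # False: pool exhausted
for _ in range(3):
    pool.release()                # return the resources
```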

formatting link

Semaphores (aka Dijkstra's P and V primitives) are among the oldest concurrency primitives. These days I usually think it's best to use asynchronous queues protected by locks (your system probably has a library offering those), then don't share any mutable data at all between processes/threads. You can take an efficiency hit from doing this, but you avoid a whole lot of traditional concurrency hazards.
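With Python's `queue.Queue` (which does its own locking internally), the share-nothing style looks like this sketch, using a None sentinel for shutdown:

```python
# Share nothing mutable: push work through a thread-safe queue and
# collect results through another.  queue.Queue handles the locking.
import queue
import threading

work = queue.Queue()
results = queue.Queue()

def worker():
    while True:
        item = work.get()
        if item is None:          # sentinel: shut down
            break
        results.put(item * item)

t = threading.Thread(target=worker)
t.start()
for n in range(5):
    work.put(n)
work.put(None)                    # tell the worker to stop
t.join()

out = sorted(results.get() for _ in range(5))
print(out)    # [0, 1, 4, 9, 16]
```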

Reply to
Paul Rubin

I've been using "message passing" via asynchronous mailboxes/queues since 1981. I always found it far easier to understand and debug and measure/manage than semaphores/mutexes.

I wouldn't worry about any reduced efficiency. Besides, as Dijkstra or Hoare once said after a programming competition's results were announced "If I had known that I didn't need to get the right answer, I could have made my code much faster".

Reply to
Tom Gardner
