KiCad Spice, Anyone Tried It?

David - once you get people convinced about what modern compilers can do, you should really warp their minds and talk about profiling optimization.

I once spent an hour figuring out what the IBM Power compiler did with normal optimization followed by profiling optimization. It was only a couple of pages of source code we were looking at. The compiler was right - there was a logic error in the source.

Reply to
Dennis

You snipped my question: what would you do to ensure that realtime things have the needed resources, beyond my oscilloscope test?

You sure hand-wave and insult a lot and avoid real issues.

--

John Larkin         Highland Technology, Inc 

Science teaches us to doubt. 

  Claude Bernard
Reply to
jlarkin

My experience with mind warping is that you need to leave some time afterwards, and hope they can see the effects themselves by playing around, before starting the next mind warp. Otherwise people think you are just fooling with them.

Plus, my experience with profiling is rather thin - it's more theory than practice. But I'd be happy to read of your experiences.

Compilers are not perfect, and I've filed (or helped file) a few "wrong code" bugs for compilers over the years (plus a number of missed optimisations, or feature requests - but those are obviously less important). But in the /huge/ majority of cases where I have helped people with "it worked while debugging but failed when optimised" problems, it has been the source code that was wrong. I can, in fact, only remember a single case of gcc optimising incorrectly (I've seen it a few times on other compilers).
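
The classic specimen, for anyone who hasn't been bitten by it yet, is a flag shared with an interrupt handler but not declared volatile. A minimal sketch in C (all names invented):

#include <stdbool.h>

/* BUG: "done" is written from an interrupt handler, so it must be
 * declared volatile.  Without it, the compiler may legally assume the
 * loop body cannot change "done" and hoist the load out of the loop. */
static bool done;               /* should be: static volatile bool done; */

void irq_handler(void)          /* invoked from an interrupt */
{
    done = true;
}

void wait_for_irq(void)
{
    while (!done) {
        /* unoptimised builds tend to re-read "done" each pass, so it
         * "works" while debugging; at -O2 this can legitimately become
         * one test followed by an infinite loop */
    }
}

The compiler is right and the source is wrong - exactly the pattern behind most "it broke when I turned on optimisation" reports.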

Reply to
David Brown

And then point out that some systems do such optimisation "invisibly" and while the programme is executing. And try to get them to believe it is even possible.

For C, I like to point them to HP's Dynamo research experiment that unexpectedly became practical.

formatting link

The easiest way I've found to point out what is happening is to note that the conventional optimisation is based on what the compiler can see in the code plus /guesses/ as to what will happen at runtime.

Profile based optimisation is based on /measurements/ of what the code+data is /actually/ doing at runtime.
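
For the gcc users here, the two-pass flow is roughly this (a sketch only; the file and input names are invented, and clang's flags are similar):

/* Step 1: instrumented build:   gcc -O2 -fprofile-generate hot.c -o hot
 * Step 2: run on typical input: ./hot < representative_input  (writes *.gcda)
 * Step 3: rebuild using data:   gcc -O2 -fprofile-use hot.c -o hot
 *
 * With the profile, the compiler *knows* which way branches like the
 * one below actually go, instead of guessing, and can lay out the hot
 * path accordingly. */
int classify(int x)
{
    if (x < 0)          /* without a profile, the compiler must guess */
        return -1;
    return 1;
}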

Reply to
Tom Gardner

If by that you mean "What would you do? Analyze the source code of Linux for a decade or two? Formally prove the system to be correct?", then I've pointed it out a couple of times recently, and you have completely ignored it.

Hence I've given up on trying to enlighten you.

I *do* use the appropriate tools, and it takes a couple of seconds for the tools to tell me exactly how long it will take to execute each and every path in the optimised compiler output. Hint: use xC, xCORE, and xTimeComposer from XMOS.

Try writing some software *using modern tools* - it is fun.

For further information, see my response to you containing...

On 05/06/20 15:20, Tom Gardner wrote:
> On 05/06/20 14:50, snipped-for-privacy@highlandsniptechnology.com wrote:
>> Some day we'll just run every process on its own CPU and avoid all
>> that context switching nonsense.

Some day? *I do /precisely/ that right now*.

It is *precisely* the concept behind and implementation of the XMOS xCORE processors, programmed in xC (think Occam or CSP, or some of the CSP concepts appearing in Rust and golang), within a standard IDE.

Reply to
Tom Gardner

All of Linux and its libraries and drivers and stuff?

You'd recompile Linux?

Even if you do, how will you calculate the worst-case task suspension? That's the issue here. It's trivial to figure out how much time *my* task runs.

--

John Larkin         Highland Technology, Inc 

Science teaches us to doubt. 

  Claude Bernard
Reply to
jlarkin

Yes. It's a well-known problem in Computer Science:

.

Joe Gwinn

Reply to
Joe Gwinn

I wouldn't use linux for hard realtime applications.

Neither would I use a screwdriver to insert a nail.

Typically that is no longer possible.

You can measure and hope, but that's all.

Reply to
Tom Gardner

Isn't that logical? There are thousands if not millions of users doing all manner of things with compilers. There's only one guy writing the code you are testing. Which is more likely to have undetected bugs?

In the early days when I had a 286 I recall finding bugs in the Microsoft libraries. Something rather fundamental like printf. They didn't want to believe I had found a bug and made me jump through hoops to distill it down to a very, very tiny test case that showed the bug. Then they told me it was already reported. Back then they gave you a free copy of the tool when you reported a bug. :(

--

  Rick C. 

  - Get 1,000 miles of free Supercharging 
  - Tesla referral code - https://ts.la/richard11209
Reply to
Ricketty C

We do, all the time. Works fine. The advantage of using Linux is that the MicroZed boots Linux out of the box, and it has all the USB and ethernet and such drivers, working, free. We can use the same software when we solder a Zynq directly to a board.

formatting link

formatting link

10-layer board, Zynq and dram and another FPGA, ethernet, usb, spinner knob, color LCD, all worked first try.

Do you design hard realtime systems? What do they do?

I can see it on an oscilloscope.

Measure and sell. That's what engineers do; we don't try to prove that everything we do is mathematically correct; that's not possible and if it were, it wouldn't be practical. We minimize risk and ship and people like it.

--

John Larkin         Highland Technology, Inc 

Science teaches us to doubt. 

  Claude Bernard
Reply to
jlarkin

I have designed many, including one that could kill people even when it functioned correctly to specification.

Reply to
Tom Gardner

A more accurate statement would have been "I wouldn't use linux for hard realtime applications except where the processor is grossly underutilised".

In the Zynq the PL side does the hard realtime, and the PS can be split into an RTOS on one core and linux on the other.

That gives considerable flexibility.

I've done not entirely dissimilar things on an xCORE processor, as I noted elsewhere.

I had the first version (the core function) working as expected within a day of the board arriving, and expanded that to include i/o over the next couple of weeks.

I wouldn't attempt to get something going in one big bang; iterative expansion is usually better. Exception: where the design was sent off to a foundry and returned a couple of months later.

Including that "invisible" Intel SMI, or things like it?

I calculate, predict, measure, and ship.

Reply to
Tom Gardner

Lots of people use Red Hat Enterprise Linux (RHEL) for just that. The other, smaller Linux distributions are also used. The traditional RTOSes are being eclipsed. While VxWorks (the leading RTOS by dollar sales) is still important, it too is being displaced.

The key is that bit about "hard realtime". The problem with hard realtime is that in practice, it's fragile. In the old days, this was expressed by making those vaunted hard deadlines elastic, so the system wouldn't crash every time something happened to be a microsecond late.

The non-bandaid approach was to design the software to be soft realtime, but with queues and the like so everything would be correctly computed and in correct order, even if slightly late, and to provide at least 100% margin (computer no more than half occupied).

This kind of margin is also required to ensure predictable behavior despite the randomness of the real world.

One also uses inherently predictable algorithms, to control data-dependent runtime effects.

And, there is always an explicit overrun handler that follows a predefined policy when the overrun cannot be made up, and something must fall to the floor.
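
To make that concrete, the skeleton of such a queue with an explicit overrun policy might look like this in C (a single-threaded sketch with invented names; a real system needs proper synchronisation between producer and consumer):

#include <stddef.h>

#define QSIZE 64                        /* power of two for cheap wrap-around */

struct sample_q {
    int buf[QSIZE];
    size_t head, tail;                  /* head == tail means empty */
    unsigned long overruns;             /* explicit record of what fell on the floor */
};

static void q_push(struct sample_q *q, int v)
{
    size_t next = (q->head + 1) % QSIZE;
    if (next == q->tail) {              /* full: apply the overrun policy */
        q->tail = (q->tail + 1) % QSIZE;  /* drop the oldest sample */
        q->overruns++;                  /* ...and make the overrun visible */
    }
    q->buf[q->head] = v;
    q->head = next;
}

static int q_pop(struct sample_q *q, int *v)
{
    if (q->head == q->tail)
        return 0;                       /* empty */
    *v = q->buf[q->tail];
    q->tail = (q->tail + 1) % QSIZE;
    return 1;
}

Nothing crashes when the producer outruns the consumer; the policy (here, drop-oldest) is a design decision, and the count tells you afterwards how often it fired.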

All these things are done for both Linux and the RTOSes.

In radar, a standard approach when overloaded is to slow down the rate at which radar pulses are sent out, as this controls the entire sending, receiving, and processing load.

Joe Gwinn

Reply to
Joe Gwinn

That's important to remember. Doing computer repair, it cropped up a few times, like when a new OS was incompatible with some (but not all) firmware versions on a very popular line of CPUs. Customers knew their computers didn't work; Apple knew they did, in Cupertino (where all the firmware updates had been applied...).

When it got figured out, a lot of inventory had to be reworked.

I'm sorry to hear that abstraction paralyzes you. Get well, soon!

What does the "nuance" concept have to do with anything?

Reply to
whit3rd

The worst thing I ever saw was the Pascal compiler for the HP 64000 development system, based on an HP 16000 mini plus in-circuit emulator. The customer would not even consider anything without an ICE.

I wrote a software package for the beam steering of an ultrasonic array in Turbo Pascal on my Z80, remotely, and expected to port it to the HP64000 in two days. It was clean standard Pascal after all. I was off by nearly 12 dB. :-(

It seemed impossible that anybody had used that compiler before, including its authors. Some statements and functions had to be rewritten in asm86; there was no way to get them translated correctly.

cheers, Gerhard

Reply to
Gerhard Hoffmann

No they don't.

RHEL is used in servers and workstations, neither of which are /remotely/ hard real time. In server systems where minimal latency is required, such as high frequency stock exchange trading systems, Linux is certainly used - but not out-of-the-box RHEL. Very specialised and highly tuned systems are used.

On embedded systems, RHEL is nowhere to be seen. And those systems are "soft" real time, not "hard" real time. That means you normally expect things to happen within the specified time limits, but can accept occasional delays. So you can use it for a car's multimedia system - it's okay if there is an unexpected half-second delay in changing the radio channel or updating the navigation map. But you don't use it for the braking system or engine control.

You are right that many traditional RTOS's are being squeezed out - by embedded Linux for the big systems, where "soft real time" is sufficient, and by free systems at the lower end.

That makes no sense. If something is a microsecond too late, it is not hard real time. Hard real time means you have designed the system so that the "hard" parts can't be delayed. This has /always/ meant ensuring that your hardware and software is fast enough that your tasks are completed long before their deadlines, so that it will still work correctly even in the worst case combination of tasks, but there is nothing "elastic" about it.
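
One textbook way to put a number on "fast enough that tasks complete before their deadlines" is the Liu & Layland rate-monotonic utilisation bound. A toy check in C (the task figures are invented):

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* worst-case execution time and period of each task, in microseconds */
    double wcet[]   = {  50.0,  200.0,  1000.0 };
    double period[] = { 500.0, 2000.0, 10000.0 };
    int n = 3;

    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += wcet[i] / period[i];      /* total processor utilisation */

    /* Liu & Layland: n periodic tasks under rate-monotonic scheduling
     * always meet their deadlines if u <= n*(2^(1/n) - 1)  (~0.78 for n=3) */
    double bound = n * (pow(2.0, 1.0 / n) - 1.0);
    printf("utilisation %.3f, RM bound %.3f: %s\n", u, bound,
           u <= bound ? "schedulable" : "not proven schedulable");
    return 0;
}

It is conservative (a sufficient, not necessary, condition), but it captures the point: the guarantee comes from worst-case arithmetic done up front, not from anything elastic at runtime.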

This sounds more like an adaptive system than a hard real time system - you have something that definitely is /not/ hard real time, because you can't guarantee you have the processing resources to handle everything in the time that you need. So you design a system that will handle this gracefully, making sure that everything gets done in a reasonable time, and nothing gets lost, even though it means slowing down operations. That's a good way to handle some tasks - but it is not hard real time.

Reply to
David Brown

Well, yes.

Sometimes the code you are writing (and the combination of flags you are using) is unusual enough, or merely unlucky enough, to hit a bug in the compiler. But it is a rare thing (with a decent quality compiler).

But if you look in compiler support mailing lists, forums, etc., help threads regularly start "There's a bug in the compiler - it breaks my code...". The "enlightened" posters write "Is there a bug in the compiler?" instead. It's amazing how often people start off assuming the compiler is buggy, rather than their own code.

Reply to
David Brown

The challenge with profile-based optimisation is that measurements unavoidably affect the thing they are measuring. You can find different balances between the extent of the interference and the accuracy of the measurements, and sometimes you can get good enough measurements that the final result (after the profile-based optimisation) is a win. And sometimes it can be used to find unknown serious bottlenecks. Other times, it is just too impractical for too little benefit.

If I /really/ want to worry the "compilers are too smart for me" crowd, I like to point out that compilers do optimisations even when they think optimisations are disabled.
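
A concrete example for the sceptics (behaviour I'd expect from gcc; details vary between compilers):

/* Constant folding happens in the front end, before "optimisation"
 * proper - some of it is required by the language itself. */
enum { SLOTS = 8 * 1024 / 16 };   /* must be folded to 512: enumerators are
                                     integer constant expressions */
int table[4 * 4];                 /* must be folded: array bounds are
                                     evaluated at compile time */

int answer(void)
{
    return 6 * 7;                 /* typically emitted as "return 42",
                                     even at -O0 */
}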

Reply to
David Brown

Hard realtime is indeed the key phrase. But you have gone on to describe soft realtime, or perhaps firm realtime.

Soft realtime typically means the timing specifications have a statistical element, e.g. a bound on the mean delay rather than a guaranteed worst case.

It is perfectly reasonable to have (short) queues in hard realtime systems.

The 50% load is an old rule-of-thumb that can be used as an early sizing sanity check in systems without caches and, preferably, without interrupts.

Caches screw you in hard realtime systems since they increase the mean performance by a factor of 10, or more, but don't change the worst case performance.

All true, with the exception that such an "overrun handler" is "panic because I've failed".

Rate limiting is a standard technique in many fields.

If it can be used so that the overall system can meet hard realtime guarantees, fine.

I wouldn't have thought it was used in a radar controlling a Phalanx trying to deflect an incoming Exocet.

Reply to
Tom Gardner

Yes, the key is "Enterprise"!

The HFT mob take extreme measures, e.g.:
- business rules encoded in VHDL/Verilog executing in FPGAs
- buying all the microwave towers and links between Chicago and New York, because the speed of light in air is faster than the speed of light in fibres. Time is money.
- laying transatlantic cables for their exclusive use

Reply to
Tom Gardner
