That is very funny coming from someone who boasted about futzing with the numbers and not needing to solve a quadratic equation in decades.

Most of the programmers that I know are extremely numerate, with physics or mathematics degrees (or both). You cannot actually do proofs of correctness without a very high level of mathematical skill, using formal notations for software proofs such as Z or VDM.

That is a part of the problem. Very few people think about computing problems in terms of axioms, preconditions, postconditions and invariants. If they did, things would probably be much better (but you would have far fewer programmers, and even fewer analysts).

There is also the very big problem that, compared to hardware (where no-one would dream of designing their own CMOS 4000-series chip from scratch, or making a non-standard handmade bolt from a brass rod), software engineers do it every damn time, because they usually don't know where to look for the right solution, and there is no equivalent of buying in small components of extremely well-tested code.

One company I worked for was too mean to license NAG, so they spent ages bodging together a pathetic, clueless DIY multivariate optimiser that could not find a fairly obvious global maximum in a month of Sundays. A decent simplex or conjugate-gradients method solved it in seconds.

--
Regards, 
Martin Brown

A program has many states, and they are perfectly countable in any finite-memory machine. It is just that for a machine with 2^32 bytes of memory,

2^(8 x 2^32) ~ 10^(10^10), or 10^(10,000,000,000) if you prefer,

is a very, very large number, and most machine states are not useful ones. (That ignores a few multiplicative factors for the internal machine state.)
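A quick sanity check of that arithmetic in C, for the sceptical:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* a machine with 2^32 bytes of RAM has 8 * 2^32 bits, hence
       2^(8 * 2^32) distinct memory states */
    double bits = 8.0 * 4294967296.0;
    /* decimal digits in that count: log10(2^bits) = bits * log10(2) */
    printf("states ~ 10^%.4g\n", bits * log10(2.0));
    return 0;    /* prints: states ~ 10^1.034e+10 */
}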

There is plenty of drive in academia for improving software (and hardware) quality, and for designing new languages that use the ability of fast computers to enumerate vast combinatorial permutations systematically, to defend against the sorts of common human error we know get made.

Sequential logic programming is not particularly difficult. It is when there are multiple processes and time-critical parts that things get seriously interesting. And again, academia knows how to do it and tries to teach students the best ways, but unfortunately industry does its best to make them unlearn any good practices they may have been taught.

--
Regards, 
Martin Brown

Solving quadratics is easy: just plug into the formula that everybody learned in junior high school. I just haven't had to do it in ages.

Do you know programmers who formally prove their programs to be correct? I mean, programs bigger than "Hello, world!"? Adobe should hire them. Or Microsoft. Or Obamacare, with 500 million lines of code.

I can't tell if you are praising programmers or trashing them.

I work two blocks from Twitter and Square and I know several webby-apps "programmers", which is what most programmers are these days. We even donate space to one web startup, two guys back by the copiers. And I work with embedded C programmers and VHDL slingers. Few of the C types, and too few of the VHDL guys, understand 2's comp math, fractional notation, integration, trig, or filtering. Maybe keyboard-typing types, more verbal people, tend to program, whereas "drawing" types do hard engineering. I read somewhere that English majors tend to be good programmers, but they seldom take the Signals and Systems courses.

We recently did a module that does synchro/resolver acquisition and simulation. Lots of trig. The C programmer and the FPGA designer had to be told almost exactly what to add and multiply, and how to handle the exceptions. One interesting module was a "circular lowpass filter" to limit the noise/bandwidth of an angular position variable. We all had to think pretty hard about that one; mechanical analogies were a lot of help.
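For concreteness, here is one way a circular lowpass can be built (a sketch only, not the actual module; the names and the degrees convention are invented). The trick is to filter the *wrapped* error rather than the raw angle, so a step from 359 to 1 degree is seen as +2 degrees, not -358:

#include <math.h>

/* Wrap an angle difference into (-180, 180] degrees. */
static double wrap180(double d)
{
    d = fmod(d + 180.0, 360.0);
    if (d < 0.0)
        d += 360.0;
    return d - 180.0;
}

/* One step of a first-order circular lowpass: the state chases the
   input by a fraction k (0 < k < 1) of the wrapped error, so the
   filter always takes the short way around the circle. */
double circ_lpf(double state, double input, double k)
{
    state += k * wrap180(input - state);
    state = fmod(state, 360.0);         /* keep state in [0, 360) */
    if (state < 0.0)
        state += 360.0;
    return state;
}

Filtering sin/cos of the angle and taking atan2 afterwards is another common dodge, perhaps closer to the mechanical analogies mentioned above.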

--

John Larkin                  Highland Technology Inc 
www.highlandtechnology.com   jlarkin at highlandtechnology dot com    

Precision electronic instrumentation 
Picosecond-resolution Digital Delay and Pulse generators 
Custom timing and laser controllers 
Photonics and fiberoptic TTL data links 
VME  analog, thermocouple, LVDT, synchro, tachometer 
Multichannel arbitrary waveform generators

All possible values of the program counter multiplied by all possible values of any variable that can control program flow. Big number. A formal state machine would have one declared state word, and it would seldom have even 8 states. But the critical point is that every state would be named and accounted for.
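In C, that style boils down to something like this (a toy sketch; the states and events are invented):

/* One declared state word; every state named and accounted for.
   Anything outside the enumerated states is trapped. */
enum state { IDLE, ARMED, RUNNING, DONE, FAULT };

enum state step(enum state s, int trigger, int finished)
{
    switch (s) {
    case IDLE:    return trigger  ? ARMED : IDLE;
    case ARMED:   return RUNNING;
    case RUNNING: return finished ? DONE : RUNNING;
    case DONE:    return IDLE;
    case FAULT:   return FAULT;   /* latched until reset */
    default:      return FAULT;   /* unaccounted state: trap it */
    }
}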

Seems that any university that graduated true quality programmers would be world-famous, and their grads would be in enormous demand. Is there one?

I had some house guests, one of whom is the Dean of CS at a big university. I cooked them breakfast, including my famous Pancake Descending A Staircase. I asked her, the Dean, what sort of programming they taught these days. I got an icy response, "We don't teach programming." oops, sorree, wrong thing to ask a Computer Scientist.

See above.

--

John Larkin                  Highland Technology Inc 
www.highlandtechnology.com   jlarkin at highlandtechnology dot com

Have you read Middlebrook's paper on solving quadratics?

Small differences of large numbers, versus numerical precision.

formatting link

--
"Design is the reverse of analysis" 
                   (R.D. Middlebrook)

Exactly. The problem comes from folks not knowing how the data types REALLY behave. Much easier to just blame the problem on "bad software" than "lack of understanding"! :>

Goldberg has an excellent paper ("What Every Computer Scientist Should Know About Floating-Point Arithmetic") describing many of the murky details of, e.g., IEEE 754. A must-read for anyone who uses numbers more than trivially.
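The flavour of what's in there, in a few lines of C:

#include <stdio.h>

int main(void)
{
    /* 0.1 and 0.2 have no exact binary representation, so the
       "obvious" identity fails in IEEE 754 double precision. */
    double a = 0.1 + 0.2;
    printf("%d\n", a == 0.3);   /* prints 0 */
    printf("%.17g\n", a);       /* prints 0.30000000000000004 */
    return 0;
}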

--
Don Y

Actually it isn't, if you want to get the right answers numerically.

It would be fun to see how many bugs a "Larkin bug-free" program to solve the quadratic equation actually contains. My guess is at least two.

The statement of the problem is solve for x :

a.x^2 + b.x + c = 0

Where a, b, c are all real numbers. Use these variable names.

Even this "trivial" problem is not as easy as it seems... PowerBasic will be fine for this. It isn't a very long program.
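For reference, the cancellation-avoiding form given in the numerics texts looks like this in C (PowerBasic would be analogous). A sketch, not a claimed bug-free entry to the challenge; note that computing the discriminant itself can still overflow or lose bits, which is one of the subtleties the naive formula hides:

#include <math.h>

/* Solve a*x^2 + b*x + c = 0 for real coefficients.  Returns the
   number of real roots found and writes them to x1, x2.  The
   textbook formula loses precision when b*b >> 4*a*c, because
   -b + sqrt(b*b - 4*a*c) subtracts two nearly equal numbers; the
   standard fix computes the large-magnitude root first, then gets
   the other from the product of roots, x1*x2 = c/a. */
int solve_quadratic(double a, double b, double c,
                    double *x1, double *x2)
{
    if (a == 0.0) {                /* degenerate: linear b*x + c = 0 */
        if (b == 0.0)
            return 0;
        *x1 = *x2 = -c / b;
        return 1;
    }
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0)
        return 0;                  /* complex pair: no real roots */
    double q = -0.5 * (b + copysign(sqrt(disc), b));
    *x1 = q / a;
    *x2 = (q != 0.0) ? c / q : 0.0;   /* q == 0 only when b == c == 0 */
    return 2;
}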

I know (these days, knew) someone who wrote one of the textbooks. Formal methods are used in anger where things really can't be allowed to go wrong, like train signalling systems and ABS brakes. It is astonishingly hard work to obtain a formal proof of correctness for a non-trivial program.

If you can't tell, then you really don't understand why software engineering isn't yet proper engineering at all. In all of the true engineering professions you can buy the logical equivalent of small, standardised, well-characterised components off the shelf, like Lego bricks, that can be put together in any new combination you choose and will perform reliably according to their published spec (more or less).

It is related to your arrogant attitude towards programmers, and the way that you think solving the quadratic equation -- it's only high-school algebra -- is trivial, just a case of plugging numbers into the formula without any further thought. That is exactly *HOW* software bugs arise!

I have seen a related higher-order polynomial inversion problem cause big problems on high-resolution mass spectrometers. This isn't just a theoretical vulnerability; it can and does go wrong in real life.

They may well be, but that isn't real software engineering; it is crude hacking. You can't even rely on every modern web browser to honour the syntax of web markup languages or run scripts reliably. Copious bodge-arounds are commonplace, and non-standard MS extensions abound, as well as intrinsic bugs :(

I have known several classicists who were extremely good programmers, and also cryptographers. Musicians can be rather good too.

I can't help you with your odd recruiting policies. I can agree with you that the standard of numeracy of students coming out of university today is considerably lower than when I graduated. Although they do learn some newer stuff about JPEG, MPEG, realtime DSP and the like.

Every now and then you run into a real gem with great potential, so I reckon it is the courses that have become less rigorous, but wider.

--
Regards, 
Martin Brown

I can't imagine an application that would have to solve a quadratic. We do do a fair amount of polynomial curve fitting, but that's cookbook stuff. It does go wild sometimes, as the quadratic thing can. It's interesting that, sometimes, an nth order curve fit will exactly and smoothly hit your points, and a higher order fit will also hit them but flail insanely between points. Haven't figured that out.

I've coded lots of square root and trig algorithms, a couple of floating-point packages, and a neat saturating S32.32 math package for the 68K, as useful as floats but a lot faster. I'm glad we buy ARMs with hardware float... writing fast FPPs is tedious. Floating-point packages are difficult, notorious for bugs.

Occasionally numerical puzzles pop up, like picking oscillator frequencies and divisors to sample signals or something. That can involve relative primes and such. Brute-force search is usually better than number theory for things like that.
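A hypothetical miniature of that kind of search, with made-up numbers; the point is that a few million candidate divisor pairs cost nothing:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* pick divisors m, n of a reference clock so the two outputs
       land near the wanted frequencies, while keeping the beat
       between them above a floor (all values invented) */
    const double fref = 100e6, want_a = 997e3, want_b = 1.003e6;
    double best = 1e30;
    int bm = 0, bn = 0;

    for (int m = 2; m < 4096; m++)
        for (int n = 2; n < 4096; n++) {
            double fa = fref / m, fb = fref / n;
            if (fabs(fa - fb) < 1e3)
                continue;               /* beat-note floor */
            double err = fabs(fa - want_a) + fabs(fb - want_b);
            if (err < best) { best = err; bm = m; bn = n; }
        }
    printf("m=%d n=%d err=%.1f Hz\n", bm, bn, best);
    return 0;
}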

I did enjoy developing an algorithm for inverse convolution, one of the "ill-posed problems" of mathematics.

Given a system with actual impulse response A, and a desired impulse response D, find a filter F such that

D = F * A

where * is convolution. This is "the deconvolution problem."

The usual approach is to complex-FFT D and A and divide in the frequency domain, then inverse transform. That has nasty problems, and a minor academic industry has developed around bandaging that algorithm.
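For the record, the usual bandage is to regularise the per-bin division (Wiener/Tikhonov style) rather than perform it raw. A sketch, assuming D[] and A[] have already been transformed:

#include <complex.h>

/* Regularised frequency-domain deconvolution: F = D/A blows up
   wherever |A| is small, so damp each bin instead.  eps trades
   residual error against noise gain; as eps -> 0 this tends to
   the raw division. */
void deconvolve_bins(const double complex *D, const double complex *A,
                     double complex *F, int nbins, double eps)
{
    for (int k = 0; k < nbins; k++) {
        double mag2 = creal(A[k] * conj(A[k]));   /* |A[k]|^2 */
        F[k] = D[k] * conj(A[k]) / (mag2 + eps);
    }
}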

Here's my inverse convolution, pitched towards evolving a FIR filter to clean up the step response of a fast-but-ugly TDR step. It was coded in PowerBasic.

formatting link

Fun to play with, especially when you crank some noise into things.

That's the kind of math I like, things related to physical signals and systems. It tends to be more numerical and algorithmic than symbolic. Right now I'm designing a transducer simulator, lots of FPGA+ARM (ZYNQ) based adds and muls and i/q vector rotators and filters, mostly 16 bit fractional math run at 50 MHz. Real-world, fun math.

This is cool:

formatting link

Lots of horsepower for signal crunching.

--

John Larkin                  Highland Technology Inc 
www.highlandtechnology.com   jlarkin at highlandtechnology dot com

Unless PowerBasic uses arbitrary-precision arithmetic, there will be a set of cases where it *can't* represent the actual solutions. That's the problem with all finite numeric representations -- and floating point is one of them! The "naive" (junior high school) approach is guaranteed to fall into many of these pitfalls.

I use cubic beziers in my gesture recognizer. As such, solving quadratics is a real "runtime" requirement. As the code (in order to be robust) can't expect any particular constraints on the actual curves, it has to be able to handle pathological cases where the "junior high school" approach would give erroneous results.
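For concreteness, a hypothetical helper (reusing the cancellation-safe solver sketched earlier in the thread): the extrema of one ordinate of a cubic bezier are the roots of its derivative, a quadratic in t, and only roots inside [0,1] lie on the curve segment -- two of the pathological-case checks right there:

/* Extrema of a cubic bezier ordinate with control values p0..p3:
   differentiate the Bernstein form and solve the quadratic
   a*t^2 + b*t + c = 0; keep only t in [0,1].  Returns the count. */
int bezier_extrema(double p0, double p1, double p2, double p3,
                   double t[2])
{
    double a = p3 - 3.0 * p2 + 3.0 * p1 - p0;
    double b = 2.0 * (p2 - 2.0 * p1 + p0);
    double c = p1 - p0;
    double r1, r2;
    int n = solve_quadratic(a, b, c, &r1, &r2);   /* sketched above */
    int m = 0;

    if (n > 0 && r1 >= 0.0 && r1 <= 1.0) t[m++] = r1;
    if (n > 1 && r2 >= 0.0 && r2 <= 1.0) t[m++] = r2;
    return m;
}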

*Understanding* the limitations of that approach -- and the data types available in the language -- is the only way to avoid those pitfalls.

The same problem applies elsewhere in "algebra".

Exactly. Software is called on to solve far more complex problems than hardware. And, in a much shorter time frame.

Weekend PIC problem: design a microwave oven controller. Hell, it's just a timer and a PWM with a really *slow* fundamental frequency. Different "power levels" are different duty cycles. Different "meal types" drag in different power levels for different times. Door interlock for safety (hey, that's REALLY SIMPLE! Do that in HARDWARE!). And, when not cooking, it can tell time!

How long would it take (and how much would it cost) to build a SIMPLE piece of hardware that provides exactly the same functionality? Heck, I'll even let you use a SOFTWARE-DRIVEN DESIGN PROCESS and create it in verilog, etc. Let's also assume the fab time for the PCB is the same as for an MCU approach (layout board, stuff components, solder).

*NOW*, how will you *prove* to me that this hardware works in all possible combinations correctly? (isn't that what is being asked of the software variant?)

Once you've sorted out how to make a hardware version of a "toy" product, then you can try building a hardware version of a web browser. Or, maybe a hardware version of a government web site!! (gobs of money to be made there! how many years do you estimate for the job?? :> )

Exactly. And, if they *don't*, you return them for a full credit and walk to the next vendor down the street and repeat the exercise. IT'S SOMEONE ELSE'S PROBLEM!

30+ years ago, I worked for a company that pushed the idea of "standard product" software. I.e., let's not keep reinventing the same pieces of code -- let's formalize algorithms and put part numbers on them and put them on the shelf like "components".

Siren's song. Too good to be true!

And, of course, it was! Hardly any project could afford to use a "stock" part. There was always some tweaking involved. The executable was too big, taken in concert with all the other standard products that needed to get shoehorned into the device. Or the RAM requirements were excessive: "Do we really need 32b floats?"

And, you still had gobs of application-specific code -- in the days when your entire world fit in tens of KB!

Even today where resources are largely unconstrained, look at how many *different* functions are present in libraries -- each performing a different service.

Yeah, there might be thousands of Q's, D's, R's, etc. But, they are just different versions of the same basic thing (higher breakdown voltages, differing gains, etc.)

Never seen a cap tacked onto the back of a board? A foil lifted and component inserted?

The Market drives what schools produce. Who would want to go to a school ($$) and NOT be marketable when they got out?

Most employers want people who can sit down and be productive Day 1. That's why you see folks testing against specific skillsets (how would you do *this* in MSWord? how would you do that in Revit? and what about this-other-thing in MatLab?).

OTOH, I can recall (software) interviews where I was asked to code a particular algorithm and then spent the balance of the interview arguing the relative merits of *why* it was done one way and not another (given the constraints set forth by the interviewer). The goal isn't to see if I can write code but, rather, to see if I understand *why* a particular algorithm is appropriate or inappropriate to the pre-stated goal -- along with one's particular "style" of approaching a problem (i.e., do you JUMP at the first suggestion that you are to write some code? Or do you question its constraints BEFORE picking up the pencil? Do you include provisions to test for exceptional input? Hooks to facilitate testing later? etc.)

ANYONE can code. The joke at school was to watch the Math and Physics "majors"... get BS, discover no real jobs available; get MS, discover still no jobs; get PhD, discover the few jobs are held onto tenaciously by the folks who have them; get CS degree and find work the next day! :>

I've been told there is a TV commercial airing that basically says "Learn how to code" (WTF? TV???!). Focusing on coding -- when the smart money has to assume a paradigm shift is probably "due" soon -- seems shortsighted. I.e., whoever is sponsoring this (alleged) commercial has a short horizon in mind!

"Give us cheap programmers to meet today's needs. We'll ask for something DIFFERENT, tomorrow!"

--
Don Y

> We do do a fair amount of polynomial curve fitting, but that's cookbook
> stuff. It does go wild sometimes, as the quadratic thing can. It's
> interesting that, sometimes, an nth order curve fit will exactly and
> smoothly hit your points, and a higher order fit will also hit them but
> flail insanely between points. Haven't figured that out.

It's trivial. As a graduate student I used a non-linear least-squares multi-parameter curve-fitting program to fit working curves. Since it was the same program I'd developed to extract the parameters that characterised the experimental data I was collecting, it estimated the accuracy of each parameter extracted, allowing the other parameters to vary to maximise the fit at the 3-sigma error limit.

If I tried to fit too many parameters, the error bars became enormous. Everybody knows that trying to extract six parameters from five data elements is nuts. It's equally clear that trying to extract too many parameters from noisy data is equally crazy, but the limits are more subtle.

> Brute-force search is usually better than number theory for things like that.

This might have been worth posting if you knew much about number theory.

--
Bill Sloman, Sydney

The willful ignorance of that assertion is a bad example to all others.

?-((

--
josephkk

Hey, I'd love to have an assembler that supported typing and stuff. It would only be superficial, because obviously, you can do whatever the hell you want in assembly -- but if one keeps himself honest by remembering to use the macros and directives, one can at least solve a lot of problems before they ever show up, while not getting in the way of one's pristine coding.

MASM (M$' Macro ASM, for x86 obviously) supports a few things like this, which is great -- using the macros, you can define PROCs with autogenerated stack frames and calling conventions (for any of the common languages and methods), referring to operands by name. Or you can use the high-level control macros (.IF, .WHILE, .REPEAT, etc.) to test registers or memory, so you don't have to write the compares and jumps out longhand.

I think this is a perfect example of the language trying to help you: yes, you have to remember not to trash BP before reading an operand, and if you forget, you've trashed it just as badly as a dangling pointer in C -- it's just as fragile. But instead of punishing your ignorance and lack of attention, it rewards you for using strategy and structure!

The average assembler, disappointingly, is typically limited to translating mnemonics, matching up line labels, initializing memory (you can lay out a data structure by hand, but there's no STRUCT functionality to address it with), and, usually, filling in simple macros. The structure and macro-scripting power of something like MASM is often missing; I wish more went the extra mile.

Tim

--
Seven Transistor Labs 
Electrical Engineering Consultation 
Website: http://seventransistorlabs.com

I really doubt that. A real language has enough capability to do device drivers. There is no real language with a type system (strong or weak) that cannot be abused to write intrinsically dodgy code. See also Gödel's Theorem.

Once again, this indicts programmers (and their managers), not the languages.

?-)

--
josephkk

That depends a LOT on the design (assuming there is one).

No. Very few people can program. Lots of slobs can and do put a bunch of lines or macros together that sort of gets the job done. One of my current tasks at work is cleaning up just such a mess.

Great idea, go create it.

?-)))

--
josephkk

NOOOOOOOOOOO!!

Understood. It is not all that helpful to the likes of you, and only modestly so for the likes of myself. But it helps doofuses get by, allowing them to do things that are otherwise moderately beyond them. Thus management likes it. (Dumber is cheaper, taken well past the limits of validity.)

?-))

--
josephkk

I said "something like LabView", namely a state-oriented graphical language that would constrain and discipline, and maybe even organize, the average programmer. We can't keep writing hairball C programs for the next hundred years.

--

John Larkin                  Highland Technology Inc 
www.highlandtechnology.com   jlarkin at highlandtechnology dot com

I write programs that are driven by explicit state machines and flags. Very few programmers work like that, or even know how.

Since few people are good programmers, but many people program, the multitude need less dangerous tools.

Not in my lifetime. I know a guy who did such a thing, a graphical block language for programming FADECs, the jet engine control computers. It outputs Ada source code. Seems to work pretty well, as jet engines are incredibly complex/stressed/reliable machines.

--

John Larkin                  Highland Technology Inc 
www.highlandtechnology.com   jlarkin at highlandtechnology dot com

You can always write something that is *wrong*, but the whole point of strongly typed languages is that you have to say very clearly exactly what you mean, and maintain a clear distinction at all times between the sorts of things that lead to common, hard-to-find bugs. I don't always agree with how this has been done in practice, but the idea is sound.

These include:

- A syntax to separate the *value* of a variable when converted into a new type from the *interpretation* of the bit pattern that represents a variable of one type as some other type.

- A means to detect attempts to use uninitialised variables (preferably at compile time).

- Pragmas to enable and disable runtime testing like asserts, bounds checking, overflows, traps and exceptions.

Real-world strongly typed languages do provide a means to describe the hardware interface precisely.

The objective of strongly typed languages is that there is a much better chance that, if it compiles without errors, it does something close to what you want. It forces the programmer to *think* about what they mean. It allows the compiler to spot very common human errors.

Classic example: If I had a pound for every time someone fed REAL*4 variables into REAL*8 NAGlib routines I would be very rich indeed.

Any strongly typed language can spot that. I think these days even FORTRAN can, but in the old days of FTG1CLG it didn't at all.
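C had exactly the same class of silent mismatch before prototypes were common:

#include <stdio.h>
#include <math.h>   /* brings the prototype double sqrt(double) */

int main(void)
{
    /* With the prototype in scope, the int 2 is converted to 2.0
       and the call is well-defined.  In old C with no declaration,
       the compiler passed the int as-is while the library read a
       double: garbage with no diagnostic -- the same failure mode
       as feeding REAL*4 into a REAL*8 NAG routine. */
    printf("%f\n", sqrt(2));
    return 0;
}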

The problem is intrinsic to the languages being used, and to a haphazard development process with a ship-it-and-be-damned, bonus-driven management culture.

--
Regards, 
Martin Brown

You really don't grasp the intrinsic complexity of big software today. Your hardware is a toy by comparison.

Any of the top twenty in the UK would be good enough. I expect the same to be true of the USA. CalTech seems pretty good. But remember that the best graduates tend to go on and do research. The UK list is:

formatting link

Cambridge, original home of EDSAC, still leads the field and now offers an undergraduate computing course (they used to be research only).

They teach Java in the first year and C/C++ in the second. Syllabus links show that they also include hardware development; it is quite a wide-ranging course, spanning theory and practice.

formatting link

And *they* really are trying hard to sweep out the Augean stables, e.g.

formatting link

Not all the universities do hardline academic CS either. Teesside University is in the world's top twenty animation centres:

formatting link

And games programming is actually one area where intrusive bugs are exceedingly rare (though some new releases appear now to have performance problems as they have pushed the envelope too far).

She was treating your question with all the contempt it deserves. The purpose of universities is not to churn out trade-school cannon fodder for industry but to teach people how to think. It is far more important to use the right algorithm and get a time of O(N log N) than to bodge along with no real understanding of the problem and be O(N^2) or worse.

It gets *VERY* important when N is 10^9 or higher.

You really don't have a clue.

--
Regards, 
Martin Brown

Your lack of imagination is not my concern. It arises, amongst other things, in some high-precision ADC conversions, where the next-order systematic error after gain and offset is usually a quadratic droop.

The challenge remains: you said it was trivial to solve a quadratic -- now let's see your solution.

By that you appear to mean cookbook code for a polynomial fitting method you don't remotely understand how to use correctly. Very funny!!!

I sincerely hope that you are kidding here. It is a well-known problem that overfitting data will get you a line that exactly hits every point and flails around wildly in between. You get exactly what you ask for!

People like you do it with monotonous regularity.

There are other, more subtle problems with the poor condition number of the matrix that arises in a high-order polynomial fit -- problems that Excel and Matlab got wrong, and that the fitter in Excel's graphs got right. In the first release of XL2007 they broke the graphing version to make it agree with the defective algorithms used elsewhere. Sorted in SP1.

Depends what you are doing. Most of the time my embedded stuff has been scaled integer or fixed point.

It is called deconvolution, and frankly, if you don't understand why you get funny answers overfitting a polynomial, you are most certainly not safe even to attempt a home-brew solution for this!

You are a fantastic bloviator. You don't even know what you don't know!

Nobody does that in practice apart from dumb undergraduates - once.

It is a toy and you haven't the first clue what you are doing.

All the serious deconvolution codes today find a representative of the set of images which are consistent with the observed data and satisfy some additional heuristic quality constraint like smoothness or entropy.

Shame it is mainly used to trample over good data. BTW, correcting for transient responses is as old as the hills, dating back to analogue feed-forward designs used in the earliest beam-switching mass spectrometers to compensate for the non-ideal behaviour of 10^11 ohm resistors.

--
Regards, 
Martin Brown
