Intel tri-gate finFETs

You are correct. I meant Unix.

Reply to
John KD5YI

He forgets that he's the one wearing the ball gown and 6" heels.

--
You can't fix stupid. You can't even put a Band-Aid® on it, because it's
Teflon coated.
Reply to
Michael A. Terrell

This one is pretty good too...

formatting link

Reply to
The_Giant_Rat_of_Sumatra

There is hardly any of that now. Most big compute problems become cache bound very quickly, so the problem ends up being keeping the CPUs fed with data. You have to be *very* careful when two different CPUs have locally cached the same shared memory block and both make changes to it. Particularly so if it is an indivisible read-modify-write cycle for a semaphore.
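A minimal sketch in C11 (hypothetical, not from the post) of the indivisible read-modify-write problem: two threads hammer an atomic counter that lives in a shared cache line, so the line bounces between the cores on every increment and most of the time goes into moving it back and forth rather than doing the arithmetic.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_long shared_counter;              /* one cache line both cores keep stealing */

static void *worker(void *arg)
{
    (void)arg;
    for (long i = 0; i < 1000000; ++i)
        atomic_fetch_add(&shared_counter, 1);   /* locked read-modify-write cycle */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("count = %ld\n", atomic_load(&shared_counter));
    return 0;
}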

All you would do in general computing is waste a lot of silicon.

Whilst I don't particularly like the SIMD instructions on the Intel CPUs, they do work, and they also demonstrate that the real bottleneck is now getting large amounts of data to the CPU from main memory and back. The next thing that really needs sorting out is the set-associative cache, so that it doesn't slow down on array lengths (strides) that are exact powers of two.
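A sketch of that power-of-two problem, with hypothetical sizes: walking down a column of a matrix whose row length is an exact power of two maps successive accesses onto the same few sets of a set-associative cache, so the column walk thrashes; padding each row by a few elements is the usual workaround.

#include <stddef.h>

#define ROWS 1024
#define COLS 1024                   /* power-of-two row length: column walks conflict badly */
#define PAD  16                     /* a little padding spreads the addresses across sets */

static double a[ROWS][COLS];        /* conflict-prone layout */
static double b[ROWS][COLS + PAD];  /* padded layout, same data, far fewer conflict misses */

double sum_column(size_t col)
{
    double s = 0.0;
    for (size_t r = 0; r < ROWS; ++r)
        s += a[r][col];             /* stride of COLS * sizeof(double) bytes per access */
    return s;
}

double sum_column_padded(size_t col)
{
    double s = 0.0;
    for (size_t r = 0; r < ROWS; ++r)
        s += b[r][col];             /* stride is no longer an exact power of two */
    return s;
}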

Even gamers offload most of the rendering to dedicated graphics units, which have parallelism of the type that you think would work in general. But that is a specific role where parallelism works well, because each core can be given a chunk of work to do in parallel and independently. They also run exceedingly hot when working flat out.

The most demanding thing an average home user is ever likely to do is transcode or rip an HD video whilst simultaneously streaming another. Once the hardware can do that without glitches there isn't really any point in providing even more horsepower.

And you think adding loads of unused and unusable silicon to CPU chips will improve this somehow?

Modern computer operating systems can be made secure. If you run Mac OS X, for example, that is a much tougher nut to crack and hack than Mickeysoft's poxy offerings. Had IBM marketing not deliberately wrecked OS/2, the world would have been a much better place. That OS was already good enough to build drivers to virtualise RS232 with a FIFO at the then-maximum data rate of 38k4 baud using the basic hardware.

Now you are being gratuitously insulting to software engineers. Industry prefers to hack stuff out quickly and has a ship it and be damned policy because being first to market is worth a lot of money.

You can't do that with chips or board level hardware because the rework costs are insane. Software updates are but a download away with almost no cost to the supplier (apart from giving them a reputation for shipping stuff that is completely unserviceable until at least SP1). I don't approve of it at all but I am telling you how it is.

Excel 2007 was a classic example of the ship it and be damned policy in action (it is still pretty bad even now that its replacement, Excel 2010, is out). Vista ended up with a worse reputation than it really deserved, but was stuffed by the fact that XP was a rather good vintage.

I suppose you never use SPICE simulations because they are software-based. It is just so much better to bodge it all together in a big rat's nest soldered up on the bench like a real he-man electronics engineer.

There are plenty of undocumented bugs in CPU hardware, but every effort is made to prevent them from being visible from the outside.

Regards, Martin Brown

Reply to
Martin Brown

Most of the current crop of PC CPUs are memory-bandwidth limited almost as soon as they try to do anything big with data. Web browsing mostly spends its time waiting for you to read the content or for new stuff to come down the line, unless you are on a very fat pipe. Not every web browser is as slow, clunky and moribund as Internet Exploder, nor every mail client as tedious and unsafe to use as Outlook Depressed.

Battery life has reached the point where you can just about get 8 hours out of some machines. Trains and planes now provide power too. Portability has improved a lot. My first Compaq portable was like lugging a car battery around, and adding the FPU to it was like performing open-heart surgery.

He has no idea ;-)

Incidentally, I am strongly in favour of software engineering for higher reliability than is common in the shrink-wrap market, but I don't see how the transition can be made when the executives force ship dates.

Regards, Martin Brown

Reply to
Martin Brown

OS/2 died because Bill Gates and MS backed out of their agreement to provide the full Win32 API and source.

No other reason.

It didn't really die then, either. It was in use in nearly every bank in the world for years afterward.

All the way up till Windows 2000. That was when banks switched over to MS, and OS/2 finally lost the rest of its base.

DESQview/X died for the same reason. Microsoft reneged on their agreement to provide them with the full Win32 API. They died abruptly as a result. But then, so did QEMM and the other DOS add-on memory managers, etc.

Reply to
StickThatInYourPipeAndSmokeIt

I've done a couple of recursive things, but only really simple stuff, like a permutation generator. It wasn't useful for anything because whatever actually _used_ the permutation had to be embedded smack dab in the middle of the deepest level.
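A hypothetical sketch in C (not Rich's code) of how the same sort of recursive permutation generator can take a callback, so whatever actually uses each permutation no longer has to be embedded at the deepest level:

#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Generate every permutation of v[k..n-1] and hand each finished one to 'use'. */
static void permute(int *v, int k, int n, void (*use)(const int *, int))
{
    if (k == n) {
        use(v, n);                  /* deepest level just calls out, nothing embedded */
        return;
    }
    for (int i = k; i < n; ++i) {
        swap(&v[k], &v[i]);
        permute(v, k + 1, n, use);
        swap(&v[k], &v[i]);         /* undo (backtrack) before trying the next choice */
    }
}

static void print_perm(const int *v, int n)
{
    for (int i = 0; i < n; ++i) printf("%d ", v[i]);
    putchar('\n');
}

int main(void)
{
    int v[] = { 1, 2, 3 };
    permute(v, 0, 3, print_perm);   /* prints all six permutations */
    return 0;
}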

But it was way fun to write! ;-)

I once wrote a solver for a toy called "Hi-Q" (AKA "Peg Solitaire") that could backtrack when it hit a dead end, but I don't remember whether it actually recursed or just kept track of the moves and levels.

Geektionary entry: Recursion: See recursion

Cheers! Rich

Reply to
Rich Grise

Well, yeah, if you don't write it in a way that can UNcurse when you hit a dead end (i.e., backtrack). But you have to know your elbow from a hole in the ground to be able to do that. ;-)

Cheers! Rich

Reply to
Rich Grise

Spoken like a true stone-age Neanderthal IBMer. Recursion was and is often done badly but the technique is valid for the right problems.

That isn't necessarily a problem these days as most good optimising compilers can automatically unwind one level of tail recursion.
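As an illustration (hypothetical code, not from the post) of what tail recursion means here: when the recursive call is the very last thing the function does, the compiler can replace the call with a jump and reuse the same stack frame, effectively turning the recursion into a loop.

static unsigned long fact_acc(unsigned n, unsigned long acc)
{
    if (n <= 1)
        return acc;                     /* base case */
    return fact_acc(n - 1, acc * n);    /* tail call: eligible for elimination */
}

static unsigned long factorial(unsigned n)
{
    return fact_acc(n, 1UL);
}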

Here is the canonical example of a simple function specification that is a lot easier to do with recursion than without (m, n non-negative integers):

A(m, n) {
    if (m == 0) then return n + 1
    if (n == 0) then return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))
}

Be very careful what parameters you try to evaluate it for: this is Ackermann's function, and it grows explosively.

All serious chess engines use recursion to search the game tree, and have done since Turing's first attempt - no other method even comes close.

Regards, Martin Brown

Reply to
Martin Brown

Not quite, AlwaysWrong.

formatting link

OS/2 is still used by Shell New Zealand Petroleum Stations as the main operating system.

OS/2 is still used to control the SkyTrain automated light rail system in Vancouver, Canada.

OS/2 is still used by The Co-operative Bank in the UK for its domestic call centre staff, as a bespoke program was created to access customer accounts which cannot easily be migrated to Windows.

OS/2 is still used by the Stop & Shop supermarket chain (and has been installed in new stores as recently as March 2010).

OS/2 is still used on ticket machines for Croydon Tramlink in outer-London (UK).

Reply to
JW

Can you pass the Turing test?

Reply to
Pomegranate Bastard

Untrue. OS/2's Presentation Manager predated the Windows API by a couple of years. The falling out was over the horrible kludges and contortions needed to maintain backwards compatibility with the 286.

What killed it from a corporate point of view was that they tried, and failed, to do a combined OS/2 and PS/2 launch with a hardware lock-in to their proprietary MCA bus. This galvanised all their competitors, led by Compaq, into clubbing together and making a better (E)ISA bus.

A lot of the politics on the IBM side was about trying to prevent OS/2 on PCs from being good enough to rob the S/3x minicomputers of their market share.

Also not true. You can still buy it if you wish, but the name is new. There are live systems running OS/2 or its descendants even today. The name morphed through Warp 3/4 to become eComStation.

formatting link

Not sure that I would buy into using it at this late stage though. Some of the high-profile mission-critical users are rather coy about saying publicly that they use it these days.

The memory managers died a death because, from DOS 5 onwards, MS included a free one that was just about good enough for most people.

Bounds Checker proved impossible for them to dislodge. They were even forced to use it to avoid the embarrassment of others highlighting memory leaks, invalid API parameters and dud pointers in core code. Still available and useful if you want to test and check 'Doze code.

formatting link

Regards, Martin Brown

Reply to
Martin Brown

Nice snip of the rest, you fucktard.

You pulling this name crap like the Larkin retard does makes you even less mature than the retarded bastard that started it. But then, I looked at the post's author. Yep... total retard.

Reply to
StickThatInYourPipeAndSmokeIt

Bullshit. So what if they started 'first'?

They ALSO DID subsequently have an agreement with MS for the Win32 API, and then MS backed out, and that was the death knell for OS/2. Period.

Reply to
StickThatInYourPipeAndSmokeIt

Bullshit. IBM's "PC Killer" was their AS/400 flagship series.

It didn't perform that task, either.

Reply to
StickThatInYourPipeAndSmokeIt

Unusable silicon obviously does no good. Idling cores that are currently not needed do no harm. What would be good is absolutely provable and unbreakable system security. As far as I know, that has to be done in hardware. The easiest way to do it in hardware is to never run anything on the OS processor except the OS, and give that processor absolute protection and absolute control of all system resources. Who would rationally object to that?

Good, maybe even unbreakable, hardware protection is possible on a single-core system, but the Wintel morons can't manage it, and few people can do it absolutely right. As a technical/cultural issue, security is much easier to get right on a multicore system. If the OS core manages resources but doesn't provide them, the OS will become more and more stable, and change less and less over time, even while the applications evolve. They can be developmentally decoupled.

Most big software is a mess. Maybe it has to be. But the OS shouldn't be a mess. Small-kernel OSs tend to be clean and reliable.

The vast majority of software is born full of bugs, zero chance of working as initially coded. It gets debugged to make it work at all, and when the more glaring bugs are mostly fixed, it's shipped. It's unlikely that any serious piece of software will be shipped without being compiled hundreds, or thousands, of times. Its exact state is determined by dozens, or maybe tens of thousands, of source files, compile scripts, linker scripts, DLLs, and various include files, all stored in some shared version control system that will probably not be reproducible a few years hence. And the build tools themselves are just as buggy and convoluted.

Imagine if we did FPGAs or PC boards like that; imagine shipping PCB etch revision 293. Imagine recalling a piece of hardware 150 times to fix mission-critical bugs. Imagine having no reliable way to determine the configuration of each of millions of presumed identical assemblies in the field, and having each in various ECO states.

I'm always impressed by how many high-dollar products are being shipped with firmware that can't be modified or rebuilt, because the sources are lost, or the tool chain can't be set up any more, or nobody remembers how to run it. I'm also impressed by how sensible it often is to throw away a block of code, and rewrite it from scratch, rather than trying to understand and fix what's there. The way we program now is like building skyscrapers out of popsicle sticks, without plans.

Yup. The "easier" it is to change things, the less effort will be put into getting it right. I see that in my own work: a PCB or shippable firmware gets checked, reviewed, re-read, and tested hard. And thoroughly documented and formally released. If I write a PowerBasic program for internal use, I gravitate towards the type-and-test mode and sometimes never get around to formally releasing it. One simply has to fight to keep applying engineering discipline to squishy stuff.

I use Spice a fair amount. I'm now simulating a tapered transmission line as a pulse voltage step-up thing, to put a 5 kV pulse into an electro-optical modulator. It probably won't work, but it is interesting. It's certainly something that wouldn't be practical to experiment with on a breadboard, unless you had years to lay out and test a few dozen iterations of multilayer PC boards.

I never Spice entire products, just little subcircuits. I do breadboard circuits for which there aren't adequate component models, or circuits that involve e/m things (like the speed of light and such) that Spice doesn't do.

LT Spice running on XP seems to be absolutely solid.

John

Reply to
John Larkin

It's solid here on Win7 as well.

John

Reply to
John KD5YI

During the last few decades, I have used recursion in only three programs that have actually been used in production: arithmetic expression handling in a compiler/assembler, some include-file processing, and some binary-tree processing.

In the first two cases, quite a lot of code was needed for checks against runaway recursion.

In the third case, much logic was needed to handle pathological cases where only the left or the right side of the binary tree was occupied by data blocks.

In all cases, a lot of code was required to unwind the stack if the input data was bad.

Recursion is nice on paper with nice and correct input.

However, if recursion is used with questionable (unchecked) input, you end up with bad problems.

Reply to
upsidedown

Recursion can be nice in arithmetic expression parsers, notably in interpreters and compilers. One can expect the depth to be moderate, and certainly it has to be checked. The error "expression too complex" is usually a parser stack overflow.
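A minimal sketch of that kind of check, using a hypothetical toy grammar (numbers, +, * and parentheses): the recursive-descent parser counts its own nesting depth and reports "expression too complex" instead of overflowing the stack.

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_DEPTH 64

static const char *p;                   /* cursor into the expression text */
static int too_complex;

static long parse_expr(int depth);

static long parse_factor(int depth)
{
    if (depth > MAX_DEPTH) { too_complex = 1; return 0; }
    while (isspace((unsigned char)*p)) ++p;
    if (*p == '(') {
        ++p;
        long v = parse_expr(depth + 1); /* only nesting increases the depth */
        if (*p == ')') ++p;
        return v;
    }
    return strtol(p, (char **)&p, 10);
}

static long parse_term(int depth)
{
    long v = parse_factor(depth);
    while (!too_complex && *p == '*') { ++p; v *= parse_factor(depth); }
    return v;
}

static long parse_expr(int depth)
{
    long v = parse_term(depth);
    while (!too_complex && *p == '+') { ++p; v += parse_term(depth); }
    return v;
}

int main(void)
{
    p = "2*(3+4)+1";
    long v = parse_expr(0);
    if (too_complex) puts("expression too complex");
    else printf("%ld\n", v);            /* prints 15 */
    return 0;
}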

John

Reply to
John Larkin

Definitely.

For fun I once did a recursive sort routine, but I wouldn't code anything like that for real use.
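Something along these lines, presumably - a bare-bones recursive quicksort (a sketch, not Spehro's code). Fine as an exercise; code for real use would want a bounded recursion depth and a fallback, which is roughly why you wouldn't ship it as-is.

/* Sort a[lo..hi] in place, inclusive bounds. */
static void quicksort(int *a, int lo, int hi)
{
    if (lo >= hi)
        return;
    int pivot = a[(lo + hi) / 2];
    int i = lo, j = hi;
    while (i <= j) {                    /* partition around the pivot value */
        while (a[i] < pivot) ++i;
        while (a[j] > pivot) --j;
        if (i <= j) {
            int t = a[i]; a[i] = a[j]; a[j] = t;
            ++i; --j;
        }
    }
    quicksort(a, lo, j);                /* recurse on each half */
    quicksort(a, i, hi);
}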

Reply to
Spehro Pefhany
