Lack of bit-field instructions in the x86 instruction set because of patents?

Maybe the electricity is cheaper than redesigning the kit? Maybe it's the customer who pays for the electricity and the vendor who pays for the redesign? Maybe if that electricity came from a nice green, sustainable nuclear plant it wouldn't actually matter? Maybe it doesn't matter anyway?

Reply to
Ken Hagan

And the good old HP 200LX I play with. 80186, with excellent battery life. And mine has a FORTRAN compiler to boot.

Cheers,

Steve N.

Reply to
Steve

Suspend *does* work, at least on properly configured computers. There are a few rogue hardware designs, typically with USB peripherals or other devices whose drivers do not handle suspend gracefully, but that isn't the CPU's fault. Most of the problems reside in buggy 'Doze device drivers.

I have a year-old Toshiba portable, supplied with Vista, that is barely usable - it regularly disables its own keyboard and its on-off switch (whatever power-save settings are used). It works OK under XP or even Win98SE, so it is a Vista fault involving too-clever-by-half hardware drivers.

Regards, Martin Brown

Reply to
Martin Brown

OK, I should have said, "the vast majority of cell phones/PDAs."

The 200LX was a pretty neat machine -- I once worked at a place where they were used as data collection terminals, and found them surprisingly usable. HP took the idea that Atari had come up with in their Portfolio and really made it work (OK, they started with the 100, but even that was already a big improvement).

There sure were a lot of interesting, weird machines coming out back in the '80s... you just don't see that these days, now that everyone expects a PDA/PC/etc. to immediately have a full-featured web browser, e-mail, and so on. You're pretty much tied to an existing architecture and operating system, and only get to make evolutionary changes to the platform.

I'm not sure whether the equivalent of a 200LX today is something like a Nokia N810 (a completely open, hackable Linux machine, with modest-if-not-spectacular software provided by Nokia) or an iPod Touch (a relatively closed machine, hacking strongly discouraged by Apple, but with lots of pretty impressive software provided by them too).

---Joel

Reply to
Joel Koltner

I doubt it; if this information is available at all, it would almost certainly require an NDA.

But it's no secret that the power consumption for digital logic is dominated by energy-per-transition rather than quiescent current.
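
To put rough numbers on that claim, here's a minimal back-of-the-envelope sketch in C (every figure in it - capacitance, activity factor, clock, leakage - is an assumed illustrative value, not data for any real part):

    #include <stdio.h>

    /* Rough CMOS power split: dynamic (energy-per-transition) vs.
     * static (quiescent).  All figures are assumed, for illustration. */
    int main(void)
    {
        double C  = 10e-9;  /* total switched capacitance, farads (assumed) */
        double V  = 1.2;    /* supply voltage, volts */
        double f  = 2e9;    /* clock frequency, hertz */
        double a  = 0.1;    /* activity factor: fraction of nodes toggling */
        double Iq = 10e-3;  /* quiescent (leakage) current, amps (assumed) */

        double P_dyn    = a * C * V * V * f;  /* switching term */
        double P_static = Iq * V;             /* leakage term */

        printf("dynamic %.2f W, static %.3f W\n", P_dyn, P_static);
        return 0;
    }

With those guesses, the switching term dominates by a couple of orders of magnitude, which is the usual story for logic running well above threshold.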

Reply to
Nobody

Please! The last one will be taxing enough.

formatting link

Bumper sticker: "Don't buy until he's gone"

Reply to
krw

:)))))))

Nobody knows anything, but everybody has an invaluable opinion. That's the essence of leftism - weenieism.

That's true at 3.3V, it depends at 1.5V, and it is pretty much not true for sub-one-volt high-speed logic. Consider the fanout and the stray capacitance as well.
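
To see the trend being described, here's a minimal sketch (the leakage currents are pure guesses chosen only to show the shift; real sub-volt leakage depends on threshold voltage, process, and temperature):

    #include <stdio.h>

    /* Dynamic power falls as V^2, but leakage in low-threshold,
     * high-speed processes does not fall with it.  Illustrative only. */
    int main(void)
    {
        double C = 10e-9, f = 2e9, a = 0.1;       /* assumed, as above */
        double volts[] = { 3.3, 1.5, 0.9 };
        double leak[]  = { 1e-3, 10e-3, 100e-3 }; /* guessed leakage, A */

        for (int i = 0; i < 3; i++) {
            double P_dyn  = a * C * volts[i] * volts[i] * f;
            double P_stat = leak[i] * volts[i];
            printf("V=%.1f: dynamic %.2f W, static %.2f W\n",
                   volts[i], P_dyn, P_stat);
        }
        return 0;
    }

The dynamic term falls as U^2 while the guessed leakage grows, so the static share that is negligible at 3.3V is no longer ignorable below a volt.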

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant

formatting link

Reply to
Vladimir Vassilevsky

I *know*. You won't listen. *You* are the essence of weenieism.

Absolute horseshit.

Reply to
krw

[snip]

Don't you just love all our resident "experts"?

But I notice our OP is posting from "hotmail", so I would never have noticed, except for you feeding the troll ;-)

...Jim Thompson

--
| James E.Thompson, P.E.                           |    mens     |
| Analog Innovations, Inc.                         |     et      |
| Analog/Mixed-Signal ASIC's and Discrete Systems  |    manus    |
| Phoenix, Arizona  85048    Skype: Contacts Only  |             |
| Voice:(480)460-2350  Fax: Available upon request |  Brass Rat  |
| E-mail Icon at http://www.analog-innovations.com |    1962     |
             
     It's what you learn, after you know it all, that counts.

Reply to
Jim Thompson

Expert == "Has-been drip under pressure"

Vlad has never been.

Prince Vlad? Troll? No!?

Reply to
krw

Maybe not. Maybe, as someone told me he had observed very early in the game, the compiler is just too far from the action.

Zillions of transistors committed to OoO, branch prediction, speculative execution, etc. are plenty close to the action, but now we have to worry about the fact that they eat power.

It was the failure of Dynamo-RIO and of Transmeta that puzzled me. Why is this so fundamental? Why is it either Terje (or equivalent), zillions of transistors burning watts, or live with it?

Surely it must be possible to have *something* scope what's actually happening and respond appropriately... or is it that the computational task is roughly the same as building a machine to pass the Turing test?

Robert.

Reply to
Robert Myers

That's why I said what I said. They forgot that the architecture is a protocol to be used by the compiler to communicate the semantics of the program to the hardware, and not a set of laws for the compiler to fit itself into.

I have posted in the past what I think are better approaches, and most of the hardware people seem to agree that they would be easy to implement. They wouldn't be hard to compile for, either - from the right sort of language (including, e.g., Haskell and the better class of Fortran program, but definitely not C and C++)! Whether they are as effective as I think they might be is less clear.

Unrealistic? Perhaps. But what if you could get 10 times the performance for 1/4 the power consumption? Wouldn't that be worth a revolution?

Regards, Nick Maclaren.

Reply to
nmm1

That's the essence of any disciple, leftist or "right"-ist. "The Poobah says it, I believe it, that settles it!"

Unfortunately, they then vote for the poobah of their choice, tweedledumb or tweedleduh.

Sigh. Rich

Reply to
Richard The Dreaded Libertaria

How does stray capacitance increase the current drawn by a stable circuit? If anything, it's going to increase the energy required for transitions (I = C*dV/dt; increase C => increase I => increase I^2*R losses).
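
For concreteness, a small sketch of that transition arithmetic, with assumed (illustrative) node capacitance and edge rate:

    #include <stdio.h>

    /* Current and energy for charging a node capacitance through a
     * voltage swing: I = C*dV/dt, E = C*V^2/2.  Figures are assumed. */
    int main(void)
    {
        double C   = 10e-12;  /* node capacitance, farads (assumed) */
        double V   = 1.2;     /* voltage swing, volts */
        double t_r = 100e-12; /* rise time, seconds (assumed) */

        double I = C * V / t_r;      /* average charging current */
        double E = 0.5 * C * V * V;  /* energy stored per transition */

        printf("I = %.1f mA, E = %.2f pJ per transition\n",
               I * 1e3, E * 1e12);
        return 0;
    }

More C means more charge moved per edge, hence more current during transitions - but no extra current while the node sits still, which is the point.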

Reply to
Nobody

Vlad the Impaler doesn't have a clue, so he spouts nonsense.

...Jim Thompson

--
| James E.Thompson, P.E.                           |    mens     |
| Analog Innovations, Inc.                         |     et      |
| Analog/Mixed-Signal ASIC's and Discrete Systems  |    manus    |
| Phoenix, Arizona  85048    Skype: Contacts Only  |             |
| Voice:(480)460-2350  Fax: Available upon request |  Brass Rat  |
| E-mail Icon at http://www.analog-innovations.com |    1962     |
             
     It's what you learn, after you know it all, that counts.

Reply to
Jim Thompson

Returning to the speculation about the power consumption of the cache. A cache accesses many cells in parallel on every read or write (all the ways of the addressed set, plus the tags, of course), so it is hardly an idle circuit.

dynamic losses ~ F x C x U^2 / 2

A cache occupies a large area: many transistors, long wires, many inputs and outputs, big capacitance, and big transistors to drive heavy loads - hence high dynamic losses, and high static losses as well. To me, it is not obvious how the power consumption of the cache compares to that of the other parts of the CPU.
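
One way to sanity-check that "not obvious", using the formula above - every figure here is an assumption, since the real ones are exactly what isn't public:

    #include <stdio.h>

    /* Crude cache dynamic-power estimate: bits toggled per access times
     * energy per bit (F x C x U^2 / 2, per the formula above).  All
     * numbers are guesses, for illustration only. */
    int main(void)
    {
        double U     = 1.2;    /* supply voltage, volts */
        double C_bit = 1e-12;  /* effective switched C per bit, incl.
                                  wordline/bitline share, F (guess) */
        double bits  = 8 * 64; /* bits moved per 64-byte line access */
        double F     = 1e9;    /* cache accesses per second (assumed) */

        double E_access = bits * C_bit * U * U / 2.0; /* J per access */
        double P        = F * E_access;               /* watts */

        printf("~%.2f nJ/access, ~%.2f W at %.0f M accesses/s\n",
               E_access * 1e9, P, F / 1e6);
        return 0;
    }

Whether that lands above or below the execution core depends entirely on the guessed per-bit capacitance and access rate, which rather supports the "not obvious" conclusion.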

Vladimir Vassilevsky DSP and Mixed Signal Design Consultant

formatting link

Reply to
Vladimir Vassilevsky

And what percentage of software in use today is written in a language other than C or C++ (or a language written on top of one of those)? How many professional programmers (i.e. not academics) are learning and using those languages? You'd have to have something amazingly revolutionary to throw away the collective knowledge of an entire industry.

Perhaps -- and there _are_ other architectures that have been successful in the embedded space, though none that provide the kind of benefits you propose. However, for the desktop/laptop/server market, your chip would have to emulate a Wintel system and run existing software at least as fast as the best x86 chip. Aside from the Alpha's few months in the sun, nobody has managed to do that, so we're all stuck with x86 -- and every year that condition persists, we lock ourselves in even more.

Worse, I'm not even sure that "10 times the performance for 1/4 the power consumption" is enough to motivate most people to switch; that's only a few years of Moore's Law -- probably less time than it'd take the industry to learn your new system, buy and deploy the machines, etc. Do you have a roadmap for how you'd continue to improve the performance of your chips after the first release? I remember that Itanic was pretty good when it was first designed, and a lot of companies considered switching, but by the time they had geared up to do so, x86 had pulled ahead again and Itanic was sinking in the same place it had been for two years... PPC had a better run, but it still lost in the end because it couldn't keep up with the relentless pace of improvements in x86 chips.

S
--
Stephen Sprunk        "Stupid people surround themselves with smart
CCIE #3723           people.  Smart people surround themselves with
K5SSS          smart people who disagree with them."  --Isaac Jaffe
Reply to
Stephen Sprunk

I hope you aren't an engineer. So far you're doing as well as DimBulb.

Of course it's not obvious to you. You're dumb as a stump.

Reply to
krw

Didn't I imply that? :-)

But think of it the other way round: no radical change will be accepted; no significant progress is possible without radical change; ergo?

There are three points there, of which I regretfully agree with the first :-(

And how does Moore's Law help? C and C++ are inherently serial - and, yes, I know about the current efforts to add POSIX-style threading. After 30 years of experience with that, we KNOW that (a) it doesn't scale and (b) virtually nobody can use it correctly except in the simplest cases. I am giving a seminar shortly where I point out that the CPUs of 2015 will be the same speed as today's, but with 32 cores. That's not just me saying that, as you know.

The whole point about such a radical redesign is that it would lead to a roadmap for scalable, usable parallelism. My assertion is that (today) any proposal for radical change that doesn't do BOTH of those is pointless.

But I don't expect to see that happen in any form before I retire!

Regards, Nick Maclaren.

Reply to
nmm1

Really: most of it, I suspect. Yes, C (and perhaps C++) is liked by embedded-systems and operating-systems people, and that's fair: that's what it's for. I think that you'll find that most of the web-deployed applications are being written in perl/php/python/ruby/maybe-java/something-proprietary. All of the "Web 2.0" applications are being written in those plus a large dollop of JavaScript or ActionScript. Most of the in-house corporate one-offs are probably still being written in Visual Basic or Excel or some reporting layer on top of SQL.

Most exploratory numerical code is probably being written in Matlab or R or S, or IDL or the like. Maybe a few die-hard Fortran hands, and maybe a few bleeding-edge NumPy/Fortress/whatever...

Yes, there's clearly quite a few folk beating GUI applications out of C++, but surely it can't take that much longer for them to realize that that's a losing game.

Why would that matter? A lot of the newer ones are built on the JVM or .NET or the like, and those are only tenuously C-related these days.

Most of the newer languages that could be interesting are incrementally different from the languages professional programmers already know. There's no need to throw everything away and start from scratch (although there are some interesting things to be learned by those who do.)

I suspect that by the time that the everyman-programmer really has no alternative than to change to a parallel model they will most probably already be programming in something other than C or C++ for completely different reasons (safety, productivity, where the cool libraries and toolkits are, etc.)

Cheers,

--
Andrew
Reply to
Andrew Reilly
