Lack of bit field instructions in the x86 instruction set because of patents?

Except that you are posting to comp.arch, where the default meaning of "asynchronous" is the restricted sense. If you want to communicate with an audience (as opposed to rant at them), you have to use their vocabulary.

Slightly off-topic problem: A batter hits the ball over the boundary. By what amount does the score change?

Reply to
Mayan Moudgill

At work, I often put a glossary of the terms I am using as the first item in a document. "Synchronized" and "triggered" fairly often make the list, because of exactly the sort of differing meanings we see at work in this thread.

Synchronized or synchronous often means "at a fixed timing with respect to". This can include things that are timed from a PLL that is using the signal as input. The internal clock may cycle 10 times for each input clock cycle. So long as the timing is not slopping back and forth, the workings are synchronous.

The score doesn't change at all. You have to wait for the official who will set the direction of the score counter for the next clock edge. :>

Reply to
MooseFET

That's STILL a serious mistake! This thread was about the behaviour of SYSTEMS, including the larger hardware components and CPUs, and not about the LOGIC they use. The default (and well-defined) meaning you are referring to applies to logic (at the gate level and a few levels above) and nowhere else.

It is inappropriate and often incorrect to apply a term that has a well-defined meaning at one level to a much higher level without at the very least considering whether it makes sense there. And the precise distinction between asynchronous and non-deterministic, which does make sense at the gate level, doesn't make sense at much higher ones.

Also, it is NOT true that the term isn't used in the general sense in computer architecture. Process engineering is not the only thing that falls into that category, and the term asynchronous has been used in networking and parallelism for as long as I have been in this game (and probably longer).

You aren't alone in your confusion, nor is it new. I have many, many times in my career tried to explain to people that the fact that a computer was built out of synchronously clocked logic with precisely determined effects did not mean that running identical computations would necessarily cause events to occur in the same order every time.
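
A minimal illustration, in C with POSIX threads (mine, and purely illustrative; it models no particular system from this thread): every instruction below runs on deterministic, clocked hardware, yet the interleaving of the two workers' output can differ from run to run.

/* Two identical workers on deterministic hardware; the OS scheduler,
 * cache state and interrupt timing still perturb the ordering, so the
 * interleaved output is not the same on every run.
 * Build: cc -pthread order.c */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    const char *name = arg;
    for (int i = 0; i < 5; i++)
        printf("%s: step %d\n", name, i);  /* order vs. the other thread varies */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, "A");
    pthread_create(&b, NULL, worker, "B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}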

Cheese.

Regards, Nick Maclaren.

Reply to
nmm1

What blather. It's no wonder that programming is the absolute worst, and most broken, of technical disciplines.

John

Reply to
John Larkin

Nick,

let it rest; if they are more interested in pushing you down, there is nothing you can do about it unless they get experience in designing hardware circuits. It's HARD to get into the mindset of _how_ the hardware actually works if you have only software experience. Knowing assembly is very dangerous, as it gives the false impression that you know HOW things work internally.

Hardware does not work serially. It is made up of many, many functional units which all work at the same time. They give the false impression of working serially, because the interface is serial: one instruction goes in, a result comes out. Right? Wrong.

There is no clock frequency. It is a mirage. All the chips operate at the speed of light. Electricity goes through very tiny transistors, and the flow from the various inputs is controlled by the very same invention. Larger-scale logic is built from these tiny things.. these things make up more complicated operations.. these are logic libraries. Different processes are used to make these millions and billions of transistors.. these processes have their own logic libraries.. higher-level tools are used to express the logic. The higher-level presentation is *different* for different processes and libs.. some libs are totally shit, some are better.

A logic circuit does not need to be clocked; there is a lot of logic done without any kind of clock. The circuit knows the result is ready by reading a line: if there is current on it, the logic circuit has completed one iteration. This kind of logic circuit often takes less power.. but at SOME level you have to have a clock driving the interface, so that it is more convenient to talk to the outside of the logic. To connect different blocks together, at least.
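
A toy software model of that completion-detection handshake, for the software folks (the names and structure are my own sketch, not real clockless hardware):

/* Model of a clockless block: the consumer does not count cycles, it
 * watches a "done" line that the block raises once its output has
 * settled.  The two flags stand in for what would be wires. */
#include <stdbool.h>
#include <stdio.h>

struct async_block {
    int  data_in, data_out;
    bool req;   /* producer asserts: input is valid  */
    bool done;  /* block asserts: output has settled */
};

static void async_step(struct async_block *b)
{
    if (b->req && !b->done) {
        b->data_out = b->data_in * 3;  /* stand-in for the combinational logic */
        b->done = true;                /* the "current on the line" above      */
    }
}

int main(void)
{
    struct async_block b = { .data_in = 14, .req = true };
    while (!b.done)       /* poll the completion line; no clock involved */
        async_step(&b);
    printf("result: %d\n", b.data_out);
    return 0;
}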

There are different classes of logic circuits.. clocked, clockless.. synchronous.. asynchronous.. the interface is clocked and serial for CPUs because it is very difficult (understatement) to atomize I/O (among other things) without a clock.

My formal education is in the electronics field, but I work writing microcode and drivers.. and I got into the current job as the result of a very long chain of events which isn't a very interesting story to share. The short version: I gave programming a go when I was 11 or so. It was fun. I did graphics. Too slow. Use assembly. z80. m68k. mips. x86. school. arm. pascal. c. c++. ocaml. perl. higher end, higher level. working on graphics. rasterizers. scene graphs. directx. glide. opengl. games. graphics. consulting. hired by a semiconductor firm. been there ever since. happy end?

The point being, I climbed the tree ass first.. the software/assembly mindset is SO WRONG when thinking about hardware. You need both if you're an architect; you need to be able to design good hardware, and part of "good" is that you can write a good driver for it. That's why there are a lot of software and hardware engineers. senior. staff. you name it. but fewer architects.

Good night.

Reply to
hanukas

I couldn't come up with an adequate reply to this twaddle, so I decided that the following would be just as comprehensible, but way more informed:

'Twas brillig, and the slithy toves
Did gyre and gimble in the wabe;
All mimsy were the borogoves,
And the mome raths outgrabe.

Reply to
Mayan Moudgill

That would be "six and out". (Backyard cricket rules.)

BTW. I agree with Nick. The fact that two machines might each run from a clock has no bearing on whether or not they are (or can be) operating synchronously in any useful sense. That's why "synchronization" is such an effort.

BTW part two: common asynchronous domains in (small) computer systems: RS-232 communications, Ethernet packet arrival, keyboard or mouse events, hard-disk read retries, file server contention, etc...

Cheers,

--
Andrew
Reply to
Andrew Reilly

What do *I* do? I do what the specification asks. The bit bangers here should start by learning something about hardware.

What do you want to do? If lockstep is your intention (as was being discussed), then both CPUs get the same information so had *BETTER* have the same answer. If not, pull the plug on the whole mess. It's broke.

Reply to
krw

+---------------
| BTW part two: common asynchronous domains in (small) computer systems:
| RS-232 communications, Ethernet packet arrival, keyboard or mouse events,
| hard-disk read retries, file server contention, etc...
+---------------

And don't forget I2C/SMBus, where either the master or slave or both can slow down the clock (SCL) by a variable amount. Yes, it's "synchronous" [in the sense that the set of permitted transitions of SCL & SDA is completely defined, with setup & hold times], but the edges can move rather widely & arbitrarily [within limits], so it has to be re-sync'd to the logic clock within each controller. And SMBus used for "Smart Batteries" and "Smart Chargers" [read: every laptop currently made!!] is multi-master, which means collision detection/resolution [though, unlike Ethernet, the "winner" never knows he participated in a collision!].
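
To make the clock-stretching point concrete, here is roughly what a bit-banged master has to do. A sketch only: scl_release(), scl_drive_low(), scl_read(), sda_write() and delay_quarter_bit() are hypothetical HAL hooks, not any vendor's API.

/* Bit-banged I2C master, clock-stretching aware.  After releasing SCL
 * the master must read the pin back: a slave may hold the line low to
 * slow the transfer, which is exactly why the edges can move around. */
#include <stdbool.h>
#include <stdint.h>

extern void scl_release(void);     /* let SCL float high (open drain) */
extern void scl_drive_low(void);
extern bool scl_read(void);        /* actual pin state */
extern void sda_write(bool level);
extern void delay_quarter_bit(void);

static void i2c_clock_high_with_stretch(void)
{
    scl_release();
    while (!scl_read())            /* slave is stretching: wait it out  */
        ;                          /* a real driver would time out here */
    delay_quarter_bit();
}

static void i2c_write_bit(bool bit)
{
    scl_drive_low();
    sda_write(bit);                /* change SDA only while SCL is low */
    delay_quarter_bit();
    i2c_clock_high_with_stretch(); /* bit is sampled while SCL is high */
    scl_drive_low();
}

void i2c_write_byte(uint8_t byte)
{
    for (int i = 7; i >= 0; i--)
        i2c_write_bit((byte >> i) & 1);
}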

-Rob

-----
Rob Warnock
627 26th Avenue
San Mateo, CA 94403
(650)572-2607
Reply to
Rob Warnock

Or just look at the entropy collection code in any good /dev/random implementation, I suspect.
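
Indeed. A toy sketch of the idea (names are made up, and the xor/rotate mixing is only a stand-in for the cryptographic mixing a real implementation uses): the raw material is the arrival time of asynchronous events.

/* Toy entropy collector: on each asynchronous event (key press, disk
 * retry completion, packet arrival) grab a high-resolution timestamp
 * and fold its bits into a pool.  Nobody can predict on which
 * nanosecond a disk retry finishes, and that is the entropy. */
#include <stdint.h>
#include <time.h>

static uint64_t pool;

static uint64_t rotl64(uint64_t v, int r)
{
    return (v << r) | (v >> (64 - r));
}

/* Call from whatever stands in for the interrupt handler. */
void entropy_add_event(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    pool = rotl64(pool, 13) ^ (uint64_t)ts.tv_nsec ^ (uint64_t)ts.tv_sec;
}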

G.

Reply to
Gavin Scott

How convenient. =)

Reply to
hanukas

Ah. Sorry, I thought that you were a professional. Many of us who post to comp.arch are the sort of people who design systems and write specifications. I was asking what you would do if you were writing a specification.

Not at all, as you would know if you knew a bit more about such systems. What needs to be done when one gets a recoverable error is that the others wait while it retries. When the error is not recoverable, they use whatever majority voting and decision strategy the designers favour.

The effect is that the results are available asynchronously to the voting stage, in the case when one or more processor has to retry.
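
In sketch form (mine, plain C, and not any real TMR design): the voter waits until every unit's ready flag is up, however long a retry delays one of them, and only then takes the majority.

/* 2-of-3 majority voter.  The units finish at different times when one
 * of them retries, so the voting stage sees their results arrive
 * asynchronously and must wait for all three ready flags. */
#include <stdbool.h>
#include <stdio.h>

struct unit_result {
    bool ready;   /* set when the unit (perhaps after a retry) finishes */
    int  value;
};

static int vote(const struct unit_result r[3])
{
    if (r[0].value == r[1].value || r[0].value == r[2].value)
        return r[0].value;
    if (r[1].value == r[2].value)
        return r[1].value;
    return -1;    /* no majority: the designers' policy decides what now */
}

int main(void)
{
    /* Unit 2 might set its flag last, after a recoverable-error retry. */
    struct unit_result r[3] = { { true, 42 }, { true, 42 }, { true, 42 } };
    if (r[0].ready && r[1].ready && r[2].ready)
        printf("voted result: %d\n", vote(r));
    return 0;
}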

Regards, Nick Maclaren.

Reply to
nmm1

I am, just not a "professional" script kiddie, as you clearly are. I do hardware for a living, not scripts.

Many of you on CA are pompous asses, as you are clearly showing here on SED.

That's simple. Gather requirements, something you script kiddies think totally unnecessary.

*ALL*

You're full of shit. I *KNOW* about such systems. Clearly you haven't a clue about the subject.

Clueless. You can repeat such nonsense all you want. That is *NOT* lockstep. Lockstep is cycle-for-cycle hardware locking. Both units (at whatever granularity lockstepping is done) *must* have exactly the same information at the same time.

Clueless.

Reply to
krw

I finally met the professor who designs superscalar MIPS cores. One of his previous works was on identifying the hot spots in a CPU. This is what he told me:

  1. Caches are definitely the coolest areas of the CPU; one can see that clearly in the IR pictures. It is hard to tell about the other parts; it depends, especially as modern CPUs turn off the areas which are not in use.
  2. Pure asynchronous logic is not used. However, there are some logic blocks where the result of the operation is latched on the second clock; those blocks are used in multipliers, dividers and the like. It seems nobody has gotten anything practical out of async logic with a processing delay of more than two clocks.
  3. Static vs. dynamic power consumption: it depends. In sub-volt high-speed logic, static and dynamic losses are comparable.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant


Reply to
Vladimir Vassilevsky

So? Second-hand knowledge, at best, passed down to someone who doesn't understand.

Of *course* they are. Every FF in a processor is clocked every cycle. Only the cache line being accessed is active. Of course it's going to be cooler. The only power dissipation in the inactive cells is caused by leakage (which is not insignificant but obviously less than the active power).

Baloney. They are clock-gated; they are *not* "turned off", unless the entire pipeline is dormant for some time. Entire CPUs are powered off when not in use.

Define "asynchronous logic". Domino logic certainly is used. The rest of this is BS.

Absolute nonsense.

Reply to
krw

I did that yesterday. (OK, so it was a mega-hyper-super-VGA card, but running in plain old CGA mode.)

-hpa

Reply to
H. Peter Anvin
