Async Processors

Absolutely a BS logical deduction. By that same reasoning, airplanes are totally useless because they are not used as frequently as bicycles, motorcycles, and cars. That earth movers and excavators are useless because there are not as many of those as there are shovels, power drills, or screwdrivers. That ocean-going oil tankers are not useful because they are a minority in oil transportation designs compared to rail tankers and semi-tankers.

If it's the right tool, device, or design ... use it ... even if it's relatively rare in global use.

The experts basically agree that fuzzy logic (FL), especially when used for control systems, is a different mathematical framework for representing probability-based inputs into a system, and that it takes some pretty serious math to prove that the resulting system is stable -- not unlike a traditional control system design which has probability-based control functions and inputs which are not continuous. There are also a number of trivial FL designs, which clearly can be implemented other ways, and in many cases probably should be for real-world applications. And there are designs where traditional mathematical models of the system simply are not practical to develop, where researchers find that FL systems combined with other AI approaches yield usable solutions, which may or may not be as optimal as if you spent the time to actually discover a more formal description of the problem (if that is even possible) and use traditional approaches.
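For readers who haven't met the technique, a minimal sketch of a fuzzy controller of the kind being described may help; the membership functions, rule set, and temperature figures below are invented for illustration and are not taken from any project discussed in this thread.

```python
# Minimal fuzzy-logic controller sketch (illustrative only: the
# membership functions, rules, and numbers are made up).

def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_heater(error):
    """Map temperature error (setpoint - actual, in C) to heater power 0..1."""
    # Fuzzify: degree of membership in each linguistic category.
    cold = tri(error, 0.0, 5.0, 10.0)     # well below setpoint
    ok   = tri(error, -2.0, 0.0, 2.0)     # near setpoint
    hot  = tri(error, -10.0, -5.0, 0.0)   # above setpoint

    # Rules (Sugeno-style singleton outputs):
    #   IF cold THEN power = 1.0; IF ok THEN power = 0.3; IF hot THEN power = 0.0
    weights = [cold, ok, hot]
    outputs = [1.0, 0.3, 0.0]

    # Defuzzify with a weighted average of the rule outputs.
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0

for e in (-6.0, -1.0, 0.0, 1.0, 6.0):
    print(f"error={e:+.1f}C -> power={fuzzy_heater(e):.2f}")
```

The stability point above is exactly what this toy hides: proving that such a rule surface keeps a real plant stable takes serious analysis, even though the rules themselves are trivial to write.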

I also read that series a couple of years ago, and the nut of it was that he insulted a lot of people and expected them to then argue with him to debate the merits of the technology, and when they didn't, he used that as proof in the final article that he was right.

Which may well be partially true today, given the lack of tools and the huge bias left over from 25 years of teaching people that async is bad. I did my VLSI class 20+ years ago using Carver Mead and Lynn Conway's book, which also discussed async designs, with the instructor absolutely bashing async design, just as probably a few hundred thousand other students have been subjected to the sync-only mantra. I did a self-clocked high-speed data separator for my project, which didn't go over well, but which was also impossible as a clocked design with the technology of the day.

But you seem to argue more than just the point of fuzzy or async being unreasonable for many projects; you seem to also argue that they aren't reasonable for ANY project, frequently by dismissing the results of actual projects with conjecture that you somehow could do the same with a sync design. And that really is asserting that you somehow know better than those other researchers and engineers, even while you also state that you have no experience designing with these technologies.

BS.

You were given references which directly refute this ... and if you actually read them, you are then implicitly claiming those references are somehow fraudulent, or otherwise incorrect, without providing proof for your unfounded claim above. If you did not read and analyze them, then you have no right to make this claim before doing so and successfully refuting their work.

You were provided references for the ARM project, which clearly state that the operating power of the async design was 1/3 that of the clocked reference design in units of W/MHz, which you promptly ignore because the resulting target design was slower. This IS a valid measurement when power scales linearly with clock rate.
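To make the normalization argument concrete, here is a back-of-the-envelope check. The 130 and 45 uW/MHz figures are the ones quoted later in this thread; everything else is unit arithmetic.

```python
# Why W/MHz is a fair metric when dynamic power scales linearly with
# clock rate: P = k * f, so k = P / f is energy per cycle, independent
# of how fast either chip is actually clocked.
# Note 1 uW/MHz == 1 pJ per cycle (1e-6 W / 1e6 Hz = 1e-12 J).

sync_pj_per_cycle  = 130.0   # clocked ARM968E-S reference, as quoted
async_pj_per_cycle =  45.0   # Handshake Solutions async version, as quoted

print(f"ratio: {async_pj_per_cycle / sync_pj_per_cycle:.2f}")  # ~0.35, about 1/3
reduction = (sync_pj_per_cycle - async_pj_per_cycle) / sync_pj_per_cycle
print(f"reduction: {reduction:.1%}")                           # ~65.4%, the '65+%'

# The same workload (same cycle count) costs proportionally less energy
# regardless of the clock rate it runs at:
cycles = 1e9
print(f"sync : {sync_pj_per_cycle  * cycles / 1e12:.3f} J per billion cycles")
print(f"async: {async_pj_per_cycle * cycles / 1e12:.3f} J per billion cycles")
```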

You were provided references, and more are available, where Phased Logic (PL) async tools were applied to existing sync designs, and significant power reduction resulted.

The sync reference design was the ARM968E-S, described as "Smallest, lowest power ARM9E Family CPU to date (gate count = 80% of ARM966E-S)" by the developers, who obviously are very proud that they got the size and power down again on this iteration of their product. You are effectively stating that you can take their optimized sync design and reduce its power per megahertz by 65+% as the folks at Handshake Solutions did -- which implies that you consider the ARM.com team that did the ARM968E-S core design pretty close to clueless and incompetent, no? I would guess that the ARM.com team is far from incompetent when it comes to optimizing their design.

formatting link

The Handshake Solutions team then takes that netlist, applies async design tools to it, and manages to reduce its power significantly ...

formatting link

You then dismiss this with: "Likewise, I have not seen any convincing evidence of lower power than you can achieve using sync designs if your goal is to reduce power consumption."

It seems that you refuse to accept the evidence, and refuse to refute it, offering only the assertion that you can do better, and failing even to justify that with proof.

Reply to
fpga_toys

fpga snipped-for-privacy@yahoo.com wrote:
> From 130nW/MHz to 45nW/MHz ... a 65+% reduction in power. Which you

oops ... 130uW/MHz to 45uW/MHz ...

Reply to
fpga_toys

hmmm ... need to slow down ... it was not stated that they started with this netlist, just that they have an equivalent design.

Reply to
fpga_toys

Hmm. Wasn't the whole point of the DRACO microprocessor that it was actually designed for, and used in, telecoms systems? The whole idea being that an asynchronous design was used because of the requirement for good EMC.

Why would such a design not also "make a lot of sense"?

Martin

Reply to
Martin Ellis

Proponents of async state that async designs automatically adjust performance and power consumption across the entire range of environmentals (temperature, voltage, thermal gradients, etc.) as long as the underlying hardware is still performing correctly. They state that this reduces hardware costs by allowing higher variances in temperature, voltage, thermal gradient, etc., while extending the operating range past what is achievable with traditional sync design, at lower cost. This is clearly useful in a wide range of applications, from handheld battery-operated devices to harsh industrial/automotive environments. By removing the global clock, they also obtain EMC benefits that are critical to many applications, which in turn lower shielding, packaging, filtering, and other EMC mitigation costs. And they note that tools like the Phased Logic conversion utilities semi-automatically take existing designs, convert them to async, and achieve part or all of these benefits.
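A toy model of that claim, with invented delay coefficients (not silicon data), shows the mechanism: a sync design must clock at the worst-case corner plus a guard band, while a self-timed design runs at whatever speed the silicon actually achieves under the current conditions.

```python
# Toy model: critical-path delay grows as supply voltage drops and
# temperature rises. Coefficients are invented for illustration.

def gate_delay_ns(vdd, temp_c):
    """Hypothetical critical-path delay: slower at low Vdd, high temp."""
    return 10.0 * (1.2 / vdd) * (1.0 + 0.002 * (temp_c - 25.0))

corners = [(1.2, 25.0), (1.1, 60.0), (1.0, 85.0), (0.9, 125.0)]

# Sync: one clock period, set by the worst corner plus a 10% guard band.
sync_period = max(gate_delay_ns(v, t) for v, t in corners) * 1.10

for vdd, temp in corners:
    actual = gate_delay_ns(vdd, temp)       # delay the silicon sees now
    print(f"Vdd={vdd}V T={temp:5.1f}C: "
          f"async ~{1e3 / actual:6.1f} MHz, "
          f"sync fixed at {1e3 / sync_period:5.1f} MHz")
```

At the nominal corner the self-timed version runs well ahead of the worst-case clock, and it degrades gracefully instead of failing when conditions drift past the tested margin.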

You state that YOU can do better (i.e., lower costs and improve performance for the same applications) by adding hardware and extensive testing to determine the correct margins, so that the system can automatically adjust clock speeds and other factors to obtain the same extended operating ranges, at the same or better performance, and the same or lower costs. You fail to explain just how you can do this at the same or better performance and at a lower cost. You fail to offer a similar conversion tool which takes existing designs and lowers their power, while extending their operating range and reducing EMC requirements, without additional hardware costs, or which allows equivalently lower-cost components, packaging, and other environmental controls.

Until you can enlighten the world about your claims, you are just insulting a lot of people who believe they are doing genuine state-of-the-art research and product development to better mankind and our industry.

Reply to
fpga_toys

Is DRACO popular in commercial designs? Google only seems to find academic links.

There are better ways to "avoid" EMC, BTW, but they are off topic for this thread.

The point I was making is that most comms interfaces are fundamentally synchronous at their lowest levels. Consider a popular interface like 100Base-TX Ethernet. This pushes out a byte of data every 80 ns, and this timing must come from an accurate clock. Asynchronous logic has no place here.
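The 80 ns figure is straightforward arithmetic, worth spelling out since the whole argument hangs on that fixed cadence:

```python
# One byte at 100 Mbit/s: 8 bits / 100e6 bit/s = 80 ns. The interface
# must hit this cadence exactly, hence the accurate reference clock.
line_rate_bps = 100e6
byte_time_ns = 8 / line_rate_bps * 1e9
print(f"{byte_time_ns:.0f} ns per byte")   # 80 ns
```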

Some parts (e.g., the control plane in an Ethernet switch) are "best effort" rather than "exact timing" in nature, and asynchronous logic might be feasible there. This type of function is often performed in a microprocessor.

Regards, Allan

Reply to
Allan Herriman

The external interface logic for comms, memory, and most real-world things is clearly not async. However, placing an async-to-sync dual-ported FIFO at the chip pads allows all internal control and data path logic to become async, with at most a small sync state machine for control.
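As a software analogy (not RTL), the role of that boundary FIFO can be sketched like this: a thread-safe queue decouples a fixed-cadence producer, standing in for the synchronous pad logic, from a consumer that runs whenever it is ready, standing in for the self-timed core.

```python
# Conceptual model of the async/sync boundary FIFO (software analogy,
# not RTL): the queue decouples the fixed-rate interface side from an
# internal side that consumes at its own pace.
import queue
import threading
import time

fifo = queue.Queue(maxsize=16)   # stands in for the dual-ported FIFO

def sync_interface():
    """Fixed-cadence side: emits one word per 'clock tick'."""
    for word in range(8):
        time.sleep(0.01)         # the rigid external timing
        fifo.put(word)
    fifo.put(None)               # end-of-stream marker

def async_core():
    """Self-timed side: consumes whenever data is available."""
    while (word := fifo.get()) is not None:
        print(f"core processed word {word}")

t = threading.Thread(target=sync_interface)
t.start()
async_core()
t.join()
```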

Reply to
fpga_toys

Consider a store-and-forward Ethernet switch or router. A packet comes in on one synchronous interface, gets written to synchronous RAM, gets read out of synchronous RAM, and is sent to another synchronous interface. One could put sync-to-async and async-to-sync FIFOs in between those blocks, but why would you? Where is the benefit for a data plane application which requires accurate timing?

I can see that async logic might help in something like a microprocessor (where you often don't care about exact execution times), but the arguments don't seem compelling for any of the tasks I meet in my work.

Regards, Allan

Reply to
Allan Herriman
