for all those who believe in ASICs....

Actually the point was Ford not even checking and shipping it AS-IS, a slightly stronger point than Xilinx partially testing.

Repair of defects is completely a different topic.

Tell the stockholders that when they put sellable "bad" dice in the trash can, and ship good dice that were not fully tested at a heavy discount, the price CAN drop below the "cost" of a good die. There are two lost profits in this business plan: shipping good dice at less than cost, and trashing usable dice that would qualify for an EasyPath shipment. The sum of these two failures is easily worth a significant fraction of Xilinx's current revenue, and the management finder's fee (AKA bonus) for correcting this is probably more than most people will retire with.

My finder's fee is a lot less .... a briefcase full of discarded XC4VLX200's, which currently don't have any value and which they probably have to pay to get destroyed and hauled off.

Reply to
fpga_toys

Ouch ... talk about getting baited :(

Nope ... I don't even know an Altera person. I do know or have met several Xilinx people who have always been great people to deal with. Every time I've been against the wall, there's been a Xilinx FAE to help. Field staff always seem to have great customer skills.

Reply to
fpga_toys

Did you rail on Xilinx and their trashing young developers when the FAE was there to help? Did you declare their datasheets a piece of crap because they don't provide what you believe to be "proper" power data? Did you call their business methods a scam?

Geeze, man - listen to yourself. Ask the FAEs that have been so helpful to look at your threads and see if they'll side with you or try to help educate you in the other aspects of Xilinx that are there to benefit the customer base.

The written word is often a poor way to communicate for those who don't have a solid understanding of professional interaction. Often all it takes is a good conversation - face to face - to help the understanding come through. If you're a customer in need and you ask for help from a respectable company, you get courteous assistance. On this newsgroup you haven't been a customer in need, you've been an agitator - specifically in this tired thread you've been an underinformed "devil's advocate."

You help no one.

Reply to
John_H

Actually I had two large proposals out with one of them pressing me like hell about the power design, and the FAE could not get me an answer, and the potential customer gave up. I got drilled about several other items the data sheet could not answer either. That was a useful exercise in learning that the info just isn't available.

And when was that?

If Austin and Peter want to be confrontational .... it's their choice. I've tried to cool that. I'm also not going to back down and let them ridicule me over everything they disagree with, without a cost to them. I'm just a few years younger, have spent my life in other engineering areas, and I do understand my business areas. There are some very clear differences between our perspectives, which are not based on absolute rights and wrongs, and would normally be matters to agree to disagree on. If their intent is to destroy my credibility because I'm not an insider and I have a different viewpoint they don't like ... then I will reluctantly play that game as well, until they get tired of being burned too - or Xilinx management steps in and pulls the plug. If they can act responsibly, so will I. But I will continue to push for the things that are needed for a pure reconfigurable computing marketplace, a niche market that is growing and that they clearly have mixed interest in.

It was fully amusing last week to watch Austin and another poster say that RC and PR were money sinks being dropped, and then, right after I pressed to open source it, another Xilinx guy step in and say to wait a year and they will finally get it fixed in the next major release. That is still a tiled PR solution using PAR, which is just too slow and requires far too much floor planning for my market. With Austin asserting JBits is dead, that kills the alternate strategies of using the backend tools from JHDL or JHDLBits into JBits. Xilinx has very mixed internal positions on that whole tool set. I had been told that I would never be able to use JHDLBits; then Austin pops in and tries to change that. Then the next week he declares JBits a failure and dead. Then another source tells me Austin doesn't speak for the JBits team, and that JBits isn't dead.

That's a two-way street. Yes, I've pressed hard, but the personal attacks in response were never justified. Frankly, based on a business that stands behind Austin and Peter, I've considered never doing business with Xilinx again if that level of utter arrogance is to be expected. I have about 4K Xilinx parts in my inventory that I can dump, and never deal with the company again. Or, I can do as I've planned for two years, and build a new company around reconfigurable computing boards. I've pressed the point that the current ISE software model, with very poor place and route performance for compile, load and go operations, just doesn't fit that market. It was designed to do an excellent job at all costs, not a very good job quickly. Its ability to handle dynamic reconfiguration has been marginal and error prone. After talking with several people who had gone down that path, the suggestion was to roll my own based on the JBits and JHDL code. The legal issues with that are less than clear. Nor do the high ISE per-seat license costs work when trying to sell FPGAs as a very fast computer accelerator.

That Xilinx is a bit thin skinned about criticism, even constructive criticism, is a bit of an understatement from my perspective. I do know that when my FAE cannot provide worst-case power numbers, and I'm being pressed hard for them, there are problems. The customer had already had the same discussion, with the same lack of results, on a prior proposal, and was WAY ahead of me. There are also problems when customer interfaces are not trained to listen to the customer's needs, and instead jump in and argue why the customer is wrong. There is a lot of truth in the idea that the customer understands their business, and it's the vendor's job to recognize that the customer probably isn't just wrong about their own business needs. In tech land, the concept that the customer is always right needs some serious refinement. Sure, customers get it wrong and need guidance, but they are generally very clueful about what they need for their business.

In talking with others I've gotten similar mixed feelings about Altera, but no first hand experience yet.

I've actually interacted with a fair number of people with radically different perspectives. The problem in a nutshell is that RC isn't taken seriously by Xilinx, as it's been a 15-year pipe dream. Their tools and business model are for a different marketplace -- high volume embedded. And their staff are used to telling customers how to use Xilinx product, and have some serious problems when you step outside the high volume embedded application areas. First of all, the biggest sales get the support. And as we have clearly seen, niche markets get little and are quickly dropped in order to go chase another large customer. Small customers either need a way to fit in and pick up the crumbs, or go to the seven dwarfs, as Austin puts it - i.e., send the small customers to the small players.

Given this has been the status quo for a decade .... clearly things are not likely to change without a shove, from my perspective. I'm more than willing to step up and push for change, rather than watch the opportunities slip by. I don't think watching the chances slip by for another decade is the right choice. When it comes to Xilinx and RC, either they need to embrace it, and clearly get behind it, or step aside. Their indecision is seriously hurting the marketplace. Other than a few ruffled feathers, the last few weeks have been very useful in airing differences in market requirements. The side emails I've gotten have been supportive in general.

So I leave you with this challenge ... lay out a road map that will either effect the required changes, or get a clear decision from Xilinx management that they do not want to be a major player in the RC market - a firm decision inside 3-6 months.

I'm advocating being vocal, direct, and a bit of a squeaky wheel, as the passive approach has created 15 years of indecision, something we have seen even in the last few weeks with radically different views from several different Xilinx spokespersons. I'm willing to actively and intensely engage Austin, Peter, and other Xilinx staff on all the related issues, to fully air the differences of opinion about the divergent needs of the various markets. So far, the intense and informative debate here has actually been very useful in provoking discussion that would normally just be ignored.

Austin and I differ on the impact that patent expirations will have, but history clearly shows that the expiration of base patents in other technology areas was followed by a rapid changing of the guard, as offshore companies stepped in and took over the market globally, leaving the US market founders as dinosaurs. In the next four years all the major patents that control XC2000, XC3000, and much of XC4000 technology expire ... which means offshore companies will be free to market bigger and faster versions of those product technologies. They will not be Virtex-II Pros or XC4Vs, but they will be big, fast, and cheap FPGAs. And five years after that, about a decade from now, the landscape may well be very, very different in terms of who the market leaders are.

Fairly major revenue choices, like the Zero Defect is Quality perspective that prevents Xilinx from wringing maximum revenue from every wafer, are very strong indicators that Xilinx may not be nimble enough to adapt to the price pressures of a commodity FPGA marketplace, pressures that will force severe cuts in the margins they have held for years. The layoffs, the market restructuring, and sweeping changes in management teams could easily send Xilinx to its grave inside as little as a few years - or leave it a minority low-volume player facing a long lingering death, or a takeover/buyout target for the IP.

I can be vocal, and raise the issues. Or I can shut my trap and watch :)

Engage the debate .... make up your mind .... and if the changes come true as I suspect, at least everyone had their day to plan ahead and need not cry over the changes. Austin and Peter are likely to retire before long, so it will not be on their watch if the market loss happens .... but it will be their direction and attitudes that set the stage for it.

Reply to
fpga_toys

Sounds OK, at first glance.

But disk drives have inherent storage for defect maps, and LCD screens rather 'self document' any faulty pixels.

So, how to actually do that, in a RAM based FPGA ?

You don't REALLY want to do what the Russians used to, and ship an errata sheet per device ?!

I think Altera have a method for re-mapping defective areas, so they can make real yields higher. Not sure about Xilinx, or others ? Xilinx did have a patent swap, after they both finally tired of feeding the lawyers, but it takes years for that to work into silicon.

So, that means the Tools have to be defect-literate, and be able to read a device ID and then find that device's defect map ?

I suppose that is do-able, but it does not sound cheap, and the SW teams are struggling with quality now, do we really want them distracted with defect mapping ?

How long can you tolerate running a Place/Route, for just one device ?

Another minus to this idea, is that of counterfeit devices. How can Xilinx prevent the defect devices, entering a grey market, sold as fully functional devices ? Sounds like a Device ID again...

Problem is, device ID is not in any present Xilinx silicon ?

Others are looking at this (IIRC Actel use something like this, to 'lock' their ARM cores, to Silicon that includes the license fees? )

There might be long-term potential for some FPGA vendor to make their Tools and Silicon defect-map-smart, but the P&R would have to be way faster than present - and anyway, why not just fix it in silicon, with some redundancy and fuse re-mapping ?

Seems only a tiny portion of users could tolerate the custom P&R's ?

-jg

Reply to
Jim Granville

Retire?

Wow.

That is a very strange thought.

Both Peter and I are "retire-averse."

We are having far too much fun watching and helping Xilinx grow.

And I think I am more than young enough to be Peter's child, not his peer.

Amusing post,

Austin

Reply to
austin

Having a unique serial number for identification might be nice, but it is certainly not necessary in order to apply defect mapping to a particular well-known FPGA device. Two likely environments exist ... in the first, the FPGA device, or devices, are mounted on a PCI card and installed in a traditional system. The installation process for that card would run extensive screening diagnostics and develop an error map for it. The driver for that device, interfaced to the tool chain, would make the map available as a well-known service. In addition, the device/card would be sold with either media or internet access to the more accurate testing done by the manufacturer prior to sale.

The other likely RC environment is FPGA-centric processor clusters, built around a mix of pure logic FPGAs (like XC4VLX200s) coupled with CPU core FPGAs (II-Pro and SX parts), possibly coupled to 32/64-bit traditional RISC processors. These have been my research for the last 5 years. These supercomputers would be targeting extreme performance for mostly high-end simulation and processing applications, traditionally found doing nuke simulations, heat/stress sims, weather sims, genetic sims and searches, and other specialty applications. Machines doing this in various degrees exist today in both research and production environments. The software for controlling these machines is a ground-up, vendor-specific design .... and defect management is a trivial task for that software.

Defect mapping is an integral part of every operating system; you will find it covering for faults on floppy media, optical media, and even hard drives .... it's part of most filesystems. Providing defect-map-generated keep-out zones on the FPGA for place and route is rather trivial. That is a very small price to pay for access to large numbers of relatively inexpensive FPGAs. Anything that effectively raises yields will lower the price of RC computing based on defect management, AND lower the price of zero-defect parts where the design and deployment infrastructure is unable to handle defect management due to fixed bitstreams.
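The keep-out idea really is that simple in principle. Here is a minimal, hypothetical sketch (not any real vendor tool - the grid size, defect coordinates, and function names are all made up): a greedy placer treats defect-mapped sites exactly the same way it treats sites already occupied by other logic.

```python
# Hypothetical sketch: defect-aware placement treats defect-mapped
# sites exactly like already-occupied ones (a "keep-out zone").

def load_defect_map():
    # In a real RC system this would come from the board driver or
    # the manufacturer's test data; here it is hard-coded.
    return {(2, 3), (7, 7)}          # (col, row) sites known bad

def place(cells, grid_w, grid_h, defects):
    """Greedy first-fit placement that skips defective sites."""
    occupied = set(defects)          # seed keep-outs as "used"
    placement = {}
    sites = ((c, r) for r in range(grid_h) for c in range(grid_w))
    for cell in cells:
        for site in sites:
            if site not in occupied:
                occupied.add(site)
                placement[cell] = site
                break
        else:
            raise RuntimeError("device full")
    return placement

p = place(["and0", "xor1", "reg2"], 8, 8, load_defect_map())
assert (2, 3) not in p.values() and (7, 7) not in p.values()
```

The point of the sketch is that the placer's inner loop does not change at all: a defect map is just a set of pre-occupied sites, which is why defect-aware P&R is a small increment on top of any incremental placer.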

For RC ... not long at all. Which is why different strategies, based on fast acceptable placement and routing with dynamic clock fitting, are better for RC, while extensive optimization for the fixed bitstreams used in embedded applications needs the tools used today. RC has very, very different goals .... bitstreams whose life may be measured in seconds, or hours, maybe even a few days. Embedded is trying to optimize many other variables, with the goal of using bitstreams with lifetimes in years.

Much easier said than done, and loaded with the same problems that dynamic sparing has in disk drives. To access a spared sector requires a seek and a rotational latency loss TWICE for each error .... a huge performance penalty. Ditto for FPGAs when you have to transparently alter routing to another LUT in an already densely routed design.

Defect tolerance is a completely different strategy, where place and route happens defect-aware. It's actually not that difficult to edit a design on the fly .... using structures similar to today's cores, which are linked as a block into an existing netlist. That can happen quickly, distorting the prelinked/prerouted object during the load process to effect the remapping around the failed resources.

Anyway ... zero-defect designs need zero-defect parts; systems designed around defect-tolerant strategies are built from the ground up to edit/alter/link the design around defects to avoid them. This could be done using a soft core, or a $1 micro on board with the FPGA, for embedded designs that do not want to suffer the zero-defect price premium.

With today's ISE tools ... that is certainly true. Using custom JBits-style loaders, such as those found in JHDL and JHDLBits, it's really a piece of cake: mature tools that have been around for many years on the educational side, with some small tweaks for defect mapping. All the same tools the FpgaC project needs for compile, load and go to an FPGA coprocessor board.

Tiny relative to the size of the FPGA universe today ... sure. Tiny in terms of dollars and importance, certainly not. Completely disjoint from embedded FPGA design today ... different customers, different designs, different cost structures, different applications.

Reply to
fpga_toys

Sorry ... I would have sworn I remembered a post where you claimed at least as many years in the industry as I have.

Reply to
fpga_toys

Jim, pretty unlikely. It would only make sense on the really big devices. Smaller devices do not need such a crutch. At the high end, we now have 80 million configuration bits, each of them responsible for one tiny aspect of the functionality. Tough job to keep track of that. And how can the user work around it ? When the chip is simple, it does not buy you anything. When it is really big, the work-around methodology chokes on its complexity.

Our production testing is always go/no-go, which means we do not even try to identify the failure; it's just either perfect or scrap. Even the EasyPath testing is that way, but after a more restricted test that gives much improved yield. (Such a clever concept!)

Very regular structures (like memories) can use self-repairing non-volatile fixes. Altera's earlier FPGAs (called CPLDs) had a regular interconnect structure that allowed such redundant repair, and most big memories do it. So selling parts with non-functional circuitry on them is neither new nor unusual (nor unreliable). But most FPGA structures are too complex and irregular to allow that, IMHO. Wherever the IC manufacturer can really make the repair transparent to the user, it will obviously be done.

Peter Alfke

Reply to
Peter Alfke

A few points:

1) The routing structure is many times larger than the LUT structures. A defect in the FPGA is far more likely to show up in the routing structure, and it may not be a hard failure.

2) The testing only identifies bad devices. It does not isolate or map the exact fault; to do so would add considerably to the tester time for a part that can't be sold at full price anyway.

3) Defect map dependent PAR is necessarily unique to each device with a defect, so you wind up not being able to use the same bitstream for each copy of a product. Fine for onesy-twosy, but a nightmare for anything that is going into production. The administration cost would far exceed the savings even if you get the parts for free.

4) Each part would need to come with a defect map stored electronically somewhere. Since the current parts have no non-volatile storage, that means a separate electronic record has to be kept for each part. This is expensive to administer for everyone involved: the manufacturer, the distributors, and the end user. Again, the administration costs would overshadow any savings for parts with a reasonable yield.

5) Timing closure has to be considered when re-spinning an FPGA bitstream to avoid defects. In dense high performance designs, it may be difficult to meet timing in a good part, much less one that has to allow for any route to be moved to a less direct routing.
Reply to
Ray Andraka

Intermittent failures in all media have been a difficult testing problem, but it is something that can reach closure if the system design includes regular testing. This would have to be part of idle activity for a reliable RC system design.

I suspect that this would not be a tester project, but more like specialized board fixturing that would facilitate loadable self-tests under various voltage and temperature corner cases. That is significantly cheaper for the RC board vendor to implement.

That was addressed initially. For RC using incremental place and route for fast compile, load and go operation a keep out zone is really no different than an existing utilized resource that can not be used.

For more mainstream production use, I suggested that the go/no-go testing of the part look for errors in 16 sub-quadrants, and bin failing parts by sub-quadrant. That would allow purchasing a run of parts which all had their failures in the same sub-quadrant, with the rest of the die known good and usable. That is much more manageable, without creating too many SKUs.
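The binning scheme can be sketched concretely. This is an illustrative assumption, not an actual Xilinx flow - the 16-quadrant grid, die size, and bin labels are all made up for the example:

```python
# Hypothetical sketch of binning die by failing sub-quadrant.
# A die with no failures is a full-price part; a die whose failures
# all fall inside one of 16 sub-quadrants gets that bin's SKU;
# anything else is scrap.

def bin_die(failures, grid=(4, 4), die=(64, 64)):
    """failures: list of (x, y) fault sites on the die."""
    if not failures:
        return "GOOD"
    qw, qh = die[0] // grid[0], die[1] // grid[1]
    quads = {(x // qw, y // qh) for x, y in failures}
    if len(quads) == 1:
        qx, qy = quads.pop()
        return f"BIN_Q{qy * grid[0] + qx}"   # one SKU per sub-quadrant
    return "SCRAP"

assert bin_die([]) == "GOOD"
assert bin_die([(3, 2), (10, 5)]) == "BIN_Q0"   # both faults in quadrant (0,0)
assert bin_die([(3, 2), (60, 60)]) == "SCRAP"   # faults span two quadrants
```

With 16 bins, a buyer of a "BIN_Q0" lot knows 15/16ths of every die is tested good, which is exactly the manageable middle ground between full defect maps and go/no-go scrap.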

For RC systems that would have to be addressed on a system by system basis, as part of the host development software ... not a big deal.

Mapping individual resource faults at a detailed level for embedded applications is quite unrealistic, which is why I suggested sub-quadrant-level sorting of the parts.

Certainly. I've suggested several times that RC applications may well need to actually assign clock nets at link time, based on the linked nets' delays, choosing from a list of clocks that satisfy timing closure. I have this on my list of things for FpgaC this spring, along with writing a spec for RC boards suggesting that derived, rising-edge-aligned clocks covering a certain range of periods be implemented on the RC board. That would allow the runtime linker (dynamic incremental place and route) to merge the netlist onto the device and assign reasonable clocks to each sub-block in the design. This is necessary to be able to reuse libraries of netlist-compiled subroutines for a particular architecture across a number of host boards and clock resources.
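The clock-selection step could look something like this sketch. The board clock periods, margin, and delay numbers are invented for illustration; nothing here is FpgaC code:

```python
# Hypothetical sketch: at link time, pick the fastest board clock
# whose period still covers the worst-case routed delay of a
# sub-block, instead of re-running full timing-driven P&R.

BOARD_CLOCKS_NS = [2.5, 5.0, 10.0, 20.0]   # assumed rising-edge-aligned set

def assign_clock(worst_path_ns, margin=0.1):
    """Return the fastest clock period that closes timing with margin."""
    need = worst_path_ns * (1 + margin)
    for period in sorted(BOARD_CLOCKS_NS):
        if period >= need:
            return period
    raise ValueError("no board clock slow enough for this block")

assert assign_clock(4.0) == 5.0     # 4.0 ns * 1.1 = 4.4 -> 5 ns clock
assert assign_clock(9.5) == 20.0    # 10.45 ns misses 10 ns, takes 20 ns
```

Because the clocks are rising-edge-aligned, each sub-block can run at whichever period it closed on, which is what makes precompiled netlist libraries reusable across boards with different clock trees.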

A very different model of timing closure than embedded designs today.

Reply to
fpga_toys

John,

The long post in response to mine is honestly the first rather level-headed discourse I've seen from you. Sincere thanks for taking the time to put together a constructive post.

Just to clear up the one point you had question about, when I suggested you called their business methods a scam I was referring to your utter disbelief that the EasyPath model made money - that any 80% discount meant that they were dumping parts. Dumping is illegal and any smoke and mirrors that provide dumped parts to customers is a sham. It's my own belief that they have a solid business model with significant ROI without obliterating the margins; the result is more customers using Xilinx in high production with significantly lower per-device infrastructure and support costs. If you already have the silicon and IP, get paid to customize tests, and reduce the cost associated with getting a part out the door, you have great ROI - you've invested very little to support this business model that wasn't already invested. Big, incremental business is tremendous to have.

I hope you have the opportunity to get Reconfigurable Computing up to the level of performance and supportability that you envision. It may be a tough road because the market is small. The market may demand higher-premium devices to support the efforts from you and other RC advocates, which may get you some attention from the strategic marketing folks who help shape the business decisions on development. Unfortunately, Peter and Austin are not Strategic Marketing employees but instead are involved with support of existing and evolving devices that are about to hit the market. This forum has the wrong audience for actively changing where Xilinx is going.

I don't fault any one car company for not having the features that I feel would make my driving experience so much better. If I felt strongly about my position, I wouldn't get much activity at the dealership or on the technician's bulletin board where the nitty-gritty details are known to so many. Direct exposure to the appropriate Xilinx people is about the only way to truly effect change. This is only my own opinion, of course - I don't pretend to know everything that goes on in the industry, but I do have my own perspective. In the corporation I work for, our two dozen or so hardware engineers have had the opportunity to meet with some of the VPs in Xilinx as they give us the direction they see their next products going. We've even had Xilinx CEO Wim Roelandts visit us here in Oregon. If you can get your Xilinx sales engineer and/or FAE to understand your needs and the potential market you feel is there, not only for you but for others that could leverage tools and silicon tailored for better RC, you might have a chance to shape the vision of those who shape the direction of Xilinx.

As helpful as they are and as respected within their own corporation they may be, the folks who participate in this forum are not the ones who shape the vision - they may have influence, but it's not the influence you need to push for better RC support, tools, or "permission" to do what you feel needs to be done to blaze the trail.

I've seen Peter and Austin have troubles when dealing with stubborn people through the limits of the newsgroup. I have troubles with people myself when there's obstinacy, dimwittedness, or just plain insulting behavior. I've never had a problem with Peter. When you annoy one of the most level-headed, market-experienced technical people I've had the chance to meet, it's time to reevaluate your own stance.

If all your communications were as civil and well considered as the one I'm now responding to, you may have gotten a lot further with the limited influence available through this forum.

I wish you luck in your endeavors and hope you have a chance to realize your visions.

- John Handwork

fpga snipped-for-privacy@yahoo.com wrote:

Reply to
John_H

I work for a fabless semiconductor company (though I'm not speaking for them), and I can confirm that testing is expensive. Sometimes after a product has already been in production for a while, we're able to come up with a new set of test vectors that provides the same coverage in less time, and this *significantly* reduces our costs.

Every second a part spends on a tester costs real money, and a big FPGA probably spends a LOT of time on a tester. I'd be surprised if Xilinx didn't employ a fair number of engineers whose entire job is optimizing test vectors for high coverage in short test time.

Eric

Reply to
Eric Smith

Yes, Eric, testing is expensive. But the most important savings for EasyPath are not the reduced test times, but rather the much higher yield. The math is really simple: If the "perfect" yield for a specific very large chip is 50%, then if half of the resources are unused and thus need not be tested, the yield will be 75%. If 90% of the resources are unused (happens quite often) and untested, the yield will be 95%. Always assuming random fault distribution. This also shows that EasyPath makes little sense when the fundamental yield is already 80% or higher. Of course there are other cost factors, like packaging, marking, selling etc. But the fundamental idea is really clever. I wish I had thought of it... Peter Alfke, from home
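Peter's arithmetic can be written down directly. It implicitly assumes his stated model: faults are randomly distributed, and (to make the linear numbers work) at most one defect lands on a failing die, so a defect only kills the part if it falls in the tested fraction of the resources. The function name is just for this sketch:

```python
# Peter's EasyPath yield model, assuming randomly located single
# defects: if only a fraction f of the resources is used (and thus
# tested), a defect matters with probability f, so
#   effective yield = 1 - (1 - full_yield) * f

def easypath_yield(full_yield, fraction_used):
    return 1 - (1 - full_yield) * fraction_used

assert abs(easypath_yield(0.50, 0.5) - 0.75) < 1e-9   # half used  -> 75%
assert abs(easypath_yield(0.50, 0.1) - 0.95) < 1e-9   # 10% used   -> 95%
# And his closing point: at high base yield the gain shrinks fast.
assert abs(easypath_yield(0.80, 0.1) - 0.98) < 1e-9
```

The two asserts reproduce the 50%-to-75% and 50%-to-95% numbers in the post; the third shows why EasyPath "makes little sense when the fundamental yield is already 80% or higher" - the headroom is only a few points.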

Reply to
Peter Alfke

John,

It appears to me that perhaps you are assuming the yield is high, more than 50% anyway. What happens to your assumption if the yield is more like, say, 10-20%? It seems to me that the lower the yield, the more attractive EasyPath becomes, especially if, as Austin indicated, most yield fallout is only one defect.

That aside, coming up with an end user test to find and isolate the faults is not as trivial as it may seem. Remember, the LUTs and other assets on the FPGA are only a very small percentage of the total circuit. The lion's share of the chip is dedicated to routing resources, which are a bit harder to test and isolate than the LUTs are for a time-constrained test vector suite. The good news is that it is quite rare for an FPGA that tests 100% good to fail internally in the field, so once defects are mapped, the map should be good from then on. Still, developing a set of configurations to test every route, every routing switch box, etc. in a device is a daunting task by itself. It is several times harder if you then have to isolate the exact failure. Maybe if you have enough spare cycles in your RC system, you can do that in the background and hope you don't hit a defect in operational builds before the defect map for the system is completed.

The current tools make this even harder, since the user has little control over what routing resources are used (there's directed route, but it is tedious to use and a largely manual effort), and even less control over what routing resources can't be used. Granted, this is a tools issue more than anything else, but the fact remains that with the current state of the tools, I don't see this as feasible right now. Yeah, I know, this supports your contention that the tools should be open.

Look at it from Xilinx's point of view though. What is in it for them? More software that would need development and testing, more user support, devices with defects out on the market that could wind up in the hands of people thinking they have zero defect devices, not to mention their increased testing and administration cost to even partially map the defects, or even determine to what degree the part fails. I can see where the cost of doing it could exceed the potential benefit. If it were profitable for them to do it, I'm sure they would be pursuing it. In any event, it is a business decision on their part; one they have every right to make.

Anyway, it still seems to me that the amount of extra work to manage parts with defects would cost more than the cost savings in the part prices not just for XIlinx, but also for the end user.

Reply to
Ray Andraka

Yes, it's getting more manageable, and the new Xilinx strip FPGAs could lend themselves to this - but you still need some audit trail to link the defect map to the part, so this really needs FPGAs with fuses. (Not many, and they can be OTP, but fuses nonetheless.)

Another path, would be to do runtime checking of results, and have a 'bad answer' system, that remaps the problem to known good ALUs.

This would require good initial tester code, which could, as suggested, also run in the downtimes.

That way you can use lower yield devices, but not have to know explicitly ( at P&R time ) where the defects are.

Of course, a method to tell the P&R to avoid known 'FPGA sectors' would also improve the RC yields, so a two-pronged development would seem a good idea.

Perhaps there are features in the new Virtex 5 that would help this ? [Should be a good supply of low yield parts, as they ramp these ! :) ]

-jg

Reply to
Jim Granville

Why does it matter to this discussion?

Xilinx isn't stupid. They will retest or recycle, whichever is less expensive (more profitable) overall.

--
The suespammers.org mail server is located in California.  So are all my
other mailboxes.  Please do not send unsolicited bulk e-mail or unsolicited
commercial e-mail to my suespammers.org address or any of my other addresses.
These are my opinions, not necessarily my employer's.  I hate spam.
Reply to
Hal Murray

Stupid isn't the right word. Complacent with their margins is probably a better description. It's why they are only a $1.3B company instead of a $10-40B company like Sun Microsystems or Microsoft, which are of similar age. The founders had some great ideas 21 years ago, and other than incremental refinement, the real innovation in both the business plan and the technology has been lacking a bit. The high margins and high costs hinder the growth of their market.


My idea for making Xilinx successful would be to once again aggressively push the state of the art and grow the company into several related markets. That would bring their revenues into the $20B range inside this decade.

Reconfigurable computing as a market for Xilinx could have been grown into something in the $50B range by today, but they got stuck in their view of their business plan. I believe that with some new management and a restructured technology development program, one could turn Xilinx around this year and get it back on track as a $50B company over the next decade ... or better.

Or one of the A-team FPGA companies could buy Xilinx at a discount, for pennies on the dollar, in 5 years.

Reply to
fpga_toys

You are assuming facts that are not in evidence. ;^)

Reply to
rickman

But in quite different fields, so impossible to compare.

Maybe Virtex 5 will turn all that around ?

This makes interesting reading

formatting link

and quite a contrast to Austin's original arm waving :)

Seems that yes, Xilinx is the largest Programmable Logic company, (which is not trivial, so applaud them for that ), but no, their growth is BEHIND the Fabless group's average of 10.4%, at a modest 3.7%. Adding $59M in revenue. [Still, it IS positive :) ]

Also the Fabless numbers seem to exclude larger companies ASIC flows, so the true ASIC market is rather larger again. ( eg IBM Microelectronics has a large chunk of ASIC flow in that revenue.... )

So, design starts in ASIC do seem to be falling, but the revenues seem to be growing faster than the programmable logic business ?

Not an easy pill for the spin merchants at Xilinx to digest ? :)


Why not take them a sound business plan, I'm sure they would listen ?

They could seed this with some easypath FPGAs, and see how quickly you really can grow the RC sector.....

Programmable Logic has some fundamental limits, that will relegate it to a niche business. To hit $50B, you are talking about another Intel, or another Samsung, and that would need truly radical changes.

-jg

Reply to
Jim Granville
