Microsoft's FPGA Translates Wikipedia in less than a Tenth of a Second

I found this pretty impressive. I wonder if this is why Intel bought Altera, or are they not working together on this? Ulp! Seek and ye shall find....

"Microsoft is using so many FPGA the company has a direct influence over the global FPGA supply and demand. Intel executive vice president, Diane Bryant, has already stated that Microsoft is the main reason behind Intel's decision to acquire FPGA-maker, Altera."

#Microsoft's #FPGA Translates #Wikipedia in less than a Tenth of a Second

formatting link

I guess this will only steer the FPGA market further in the direction of larger and faster, rather than giving us much at the low end: small, energy-efficient FPGAs. That's where I like to live.

--

Rick C
Reply to
rickman

No, it may mean that Altera won't play there but someone surely will.

Reply to
krw

Hopefully it'll create a vacuum into which other companies will grow. Very possibly not without some pain in the interim. Markets change, we have to adapt.

--
www.wescottdesign.com
Reply to
Tim Wescott

Translates it where? Across the room? To what? Rot13?

--
John
Reply to
quiasmox

That's what I wanted to know. The article doesn't say.

Seems pretty useless, like a lot of media blurbs about things the editors know nothing about.

boB

Reply to
boB

I've never been clear on the fundamental forces in the FPGA business. The major FPGA companies have operated very similarly, catering to the telecom markets while paying little more than lip service to the rest of the electronics world.

I suppose there is a difference in technology requirements between MCUs and FPGAs. MCUs are often not even near the bleeding edge of process technology, while FPGAs seem to drive it to some extent. Other than Intel, which always seems to be the first to bring chips out at a given process node, the FPGA companies are a close second. But again, I think that is driven by their serving the telecom market, where density is king.

So I don't see any fundamental reasons why FPGAs can't be built on older processes to keep price down. If MCUs can be made in a million combinations of RAM, Flash and peripherals, why can't FPGAs? Even analog is used in MCUs; why can't FPGAs be made with the same processes, giving us programmable logic combined with a variety of ADCs, DACs and comparators on the same die? Put them in smaller packages (lower pin counts, not the micro-pitch BGAs) and let them be used like MCUs.

Maybe the market just isn't there. Many seem to feel FPGAs are much harder to work with than MCUs. To me they are much simpler.

--

Rick C
Reply to
rickman

Did you read the article? They are designing Internet servers that will operate much faster and at lower power levels. I believe a translation app is being used as a benchmark. It's not like websites are never translated.

--

Rick C
Reply to
rickman

As far as I understand it, there is quite a variation in the types of processes used - it's not just about the feature size. The number of layers, the types of layers, the types of doping, the fault tolerance, etc., all play a part in what fits well on the same die. So you might easily find that if you put an ADC on a die setup that was good for FPGA fabric, then the ADC would be a lot worse (speed, accuracy, power consumption, noise, cost) than usual. Alternatively, your die setup could be good for the ADC - and then it would give a poor quality FPGA part.

Microcontrollers are made with a compromise. The cpu part is not as fast or efficient as a pure cpu could be, nor is the flash part, nor the analogue parts. But they are all good enough that the combination is a win overall.

But I think there are some FPGAs with basic analogue parts, and certainly with flash. There are also microcontrollers with some programmable logic (more CPLD-type logic than FPGA). Maybe we will see more "compromise" parts in the future, but I doubt if we will see good analogue bits and good FPGA bits on the same die.

What will, I think, make more of a difference is multi-die packaging - either as side-by-side dies or vertically stacked dies. But I expect that to be more on the high end first (like an FPGA die combined with big ram blocks).

I think that is habit and familiarity - there is a big difference in mindset between FPGA programming and MCU programming. I don't think you can say that one type of development is fundamentally harder or easier than the other, but the simple fact is that a great deal more people are familiar with programming serial execution devices than with developing for programmable logic.

Reply to
David Brown

The interim pain includes an almost total absence of tech support for the smaller users. The biggies get a team of full-time, on-site support people; small users get no support from the principals, and maybe a little mediocre support from distributors.

That trend is almost universal, but it's worst with FPGAs, where the tools are enormously complex and correspondingly buggy. Got a problem? Post it on a forum.

--
John Larkin         Highland Technology, Inc 

lunatic fringe electronics
Reply to
John Larkin

What's a "poor" FPGA? MCUs have digital logic, usually as fast as possible, and they also want the lowest possible power consumption. What part of that is bad for an FPGA? Forget the analog. What do you sacrifice by building FPGAs on a line that works well for CPUs with Flash and RAM? If you can also build decent analog with that, you get an MCU/FPGA/analog device that is no worse than current MCUs.

It's not much of a compromise. As you say, they are all good enough. I am sure an FPGA could be combined with little loss of what defines an FPGA.

I know of one (well, one line) from Microsemi (formerly Actel), the SmartFusion (not to be confused with the SmartFusion2). They have a CM3 with a SAR ADC, a sigma-delta DAC, comparators, etc., in addition to the FPGA. So clearly this is possible, and it is really a marketing issue, not a technical one.

The focus seems to be on the FPGA, but they do give a decent amount of Flash and RAM (up to 512 kB and 64 kB respectively). My main issue is the very large packages, all BGA except for the ginormous TQ144. I'd like to see 64 and 100 pin QFPs.

Pretty pointless, not to mention costly. For some applications you lose a lot running the FPGA-to-MCU interface through I/O pads. That is how Intel combined an FPGA with their x86 CPUs initially, though, and it is a very pricey result.

The main difference between programming MCUs and FPGAs is that you don't need to be concerned with the problems of virtual multitasking (sharing one processor between many tasks). Otherwise FPGAs are pretty durn simple to use, really. For sure, some tasks fit well in an MCU. If you have the performance they can be done in an MCU, but that is not a reason why they can't be done in an FPGA just as easily. I know; many times I've taken an MCU algorithm and coded it into HDL. The hard part is understanding what the MCU code is doing.
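
To make that concrete, here is a minimal C sketch (purely illustrative; the function names and the 16-sample averager are made up, not from any real project). The first version is the usual MCU style that runs a loop to completion; the second makes every piece of state an explicit register and does one step per call, which is the form that transcribes almost line for line into a clocked HDL process:

#include <stdint.h>

/* MCU-style version: runs to completion whenever it is called. */
uint16_t avg16(const uint16_t *buf)
{
    uint32_t sum = 0;
    for (int i = 0; i < 16; i++)
        sum += buf[i];
    return (uint16_t)(sum >> 4);
}

/* Hardware-oriented rewrite: all state is explicit, and each call of
   avg_step() corresponds to one clock edge. */
typedef struct {
    uint32_t sum;   /* becomes a register */
    uint8_t  idx;   /* becomes a counter register */
    uint16_t out;   /* registered result, updated every 17th step */
} avg_fsm_t;

void avg_step(avg_fsm_t *s, const uint16_t *buf)
{
    if (s->idx < 16) {
        s->sum += buf[s->idx];
        s->idx += 1;
    } else {
        s->out = (uint16_t)(s->sum >> 4);
        s->sum = 0;
        s->idx = 0;
    }
}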

--

Rick C
Reply to
rickman

The tools for the Zynq are complex. For the most part the FPGA tools are fine and not any more problematic than tools for CPUs. It's the funky interface stuff inside the Zynq that makes it complex. It's new, so the tools aren't polished. Bottom line is they aren't going to spend kilobucks to support a sub-thousand-part user when they have mega-part users they need to support. That's pretty universal, not just FPGAs.

--

Rick C
Reply to
rickman

Lattice has some pretty nice low-end parts. Altera did, too, but that'll probably change.

MCUs have a huge development cost/cycle. FPGAs are arrays of the same thing. Being arrays, they're easier to debug/test, as well. Telecom and defense.

The different MCU combinations are often the same chips with fuse-blows for the different configurations, not that this couldn't be (or isn't) done with FPGAs, as well.

Reply to
krw

It definitely is done with FPGAs. I make a board with a Lattice part that is EOL. The part is a 3.3-volt-only version, but I had added a voltage regulator so the 1.2 volt core version can be used. With this last batch of boards the 1.2 volt version was much cheaper than the 3.3 volt version. But I didn't want to have to run the tools to produce a new bit file. The FAE said he thought they might use the same die and so most likely would have the same bit stream. I found what I had to change in the tools to get it to let me download the old version of the bit stream into the 1.2 volt device, and it worked. So clearly the die is the same. There is just a bonding difference that bypasses the 1.2 volt internal regulator.

--

Rick C
Reply to
rickman

What is a "good" FPGA? It has fast switching, predictable timing, low power, low cost, lots of gates, registers and memory, flexible routing, etc. A "poor" FPGA is one that is significantly worse in some or all of these features than you might otherwise expect.

The digital parts of an MCU are fixed. Each gate in an MCU design is a tiny fraction of the size, cost, power and latency of a logic element in an FPGA. Just compare the speed, die size and power of a hard cpu macro in an FPGA with a soft cpu on the same device - the hard macro is hugely superior in every way except flexibility.

Now, I don't have any good references for what I am writing here - just "things I have read" and "things I know about". So if you or anyone else knows better, I am happy to be corrected - and if any of this is important to you (rather than just for interest), please check it with more knowledgeable people. With that disclaimer,...

There are important differences between the die stackup for FPGA design and other types of digital logic. The most obvious feature is that for high-end FPGAs, there are many more layers in the die than you usually get for microcontrollers or even fast cpus. FPGAs need a /lot/ more routing lines than fixed digital parts. These routes are mostly highly symmetrical, and can be tightly packed because only a small fraction of them are ever active in any given design - you don't need enough power or heat dissipation for them all. On a microcontroller or other digital part, you have far more complex routing patterns, with most routes being short distance, and most of them can be active at a time. On memory parts, you have a different type of routing pattern again - few layers, with a lot of symmetry, and a lot of simultaneous switching on some of the buses.

The point is, the optimal die stackup and process technology for an FPGA is different from the optimal setup for an MCU, a memory block, analogue blocks, etc. So when you combine these, you are making a lot of compromises.

It is relatively easy and cost-effective to take a big, expensive FPGA die design and stick a little processor on it somewhere. You can spread the normal cpu routing amongst the many FPGA routing layers for better power and heat spreading. It will be a little bigger and slower than a dedicated cpu die would be, but the extra cost is small in the total cost of the chip.

But you cannot take an optimised microcontroller or cpu die design and add serious FPGA hardware to it - you simply don't have the routing space. You can add some CPLD-type programmable logic without too much extra cost (look at the AVR XMega E series, or the PSoC devices) because that kind of programmable logic puts more weight on complex logic blocks and less on the routing.

Note that flash is also a poor fit for both MCU and FPGA die stacks. For flash, you want a different kind of transistor than for the purely digital parts, you have significant analogue areas, and you need to deal with high voltages and a charge pump. The match between an MCU and flash is not too bad - so the combination of the two parts on the same die is clearly a "win" overall. But if you want the best (cheapest, fastest, highest density, lowest power) flash block, you don't mix it with a cpu on the same die - similarly if you want the best cpu block. As far as I know (and as noted above, I may be wrong), Flash FPGA devices are made with a large FPGA die and a small serial flash die packaged together.

You get the same for analogue parts. You can buy devices that are good microcontrollers with okay analogue parts built in. You can buy devices that are basically high-end analogue parts with a half-decent microcontroller tagged on. But you /cannot/ buy a device that has high-end analogue interfaces /and/ a high-end processor or microcontroller, all on the same die.

It is just like PCB design. You do not easily mix 1000V IGBT switchers, 1000-pin 0.4mm pitch BGAs, and 24-bit ADCs on the same board.

As I wrote above, the compromise is significant. It is certainly worth making in some cases - and I too would like to see such combined devices. And I think we will see such devices turning up - technology progress will reduce the technical disadvantages, and economy of scale will reduce the cost disadvantages. But it is not as simple a matter as you might think.

And then, of course, there is the joy of making tools that let developers work easily with the whole system - that is not a small matter.

I believe that what we will see first is something more like the above-mentioned Atmel XMega E series, or some of the PIC devices (AFAIK), where you have a "normal" microcontroller with a bit of programmable logic. This will give designers a good deal more flexibility in their layouts. Rather than buying a part with 3 UARTs and 2 SPIs where one of the SPIs shares the pins of one of the UARTs, the developer could use the chip's pin switch matrix to get all 5 interfaces at once. Some simple PLD blocks could give you high-speed interfaces without external glue logic, and they could let the chip support a wide range of timer functions without the chip designer having to think of every desirable combination in advance.
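
To illustrate the switch-matrix idea, here is a rough C sketch; the register address, layout and function names are invented for illustration and not taken from any real part. The whole mechanism boils down to "write a pin number into a per-function routing register":

#include <stdint.h>

/* Hypothetical switch-matrix block: one 32-bit routing slot per
   peripheral function.  Base address and register layout are made up. */
#define SWM_ASSIGN ((volatile uint32_t *)0x4000C000u)

enum swm_func {
    UART0_TXD, UART0_RXD, UART1_TXD, UART1_RXD, UART2_TXD, UART2_RXD,
    SPI0_SCK, SPI0_MOSI, SPI0_MISO, SPI1_SCK, SPI1_MOSI, SPI1_MISO
};

/* Route peripheral function 'f' to package pin 'pin'. */
static void swm_route(enum swm_func f, uint8_t pin)
{
    SWM_ASSIGN[f] = pin;
}

void setup_pins(void)
{
    /* Three UARTs and two SPIs active at the same time - no fixed
       alternate-function pin assignments to fight over. */
    swm_route(UART0_TXD, 0);  swm_route(UART0_RXD, 1);
    swm_route(UART1_TXD, 2);  swm_route(UART1_RXD, 3);
    swm_route(UART2_TXD, 4);  swm_route(UART2_RXD, 5);
    swm_route(SPI0_SCK, 6);   swm_route(SPI0_MOSI, 7);  swm_route(SPI0_MISO, 8);
    swm_route(SPI1_SCK, 9);   swm_route(SPI1_MOSI, 10); swm_route(SPI1_MISO, 11);
}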

No, it is a combination of many issues and compromises. When Actel saw the success of the SmartFusion and thought about how they could make a new SmartFusion2 family, they did not think "no one really wants analogue interfaces, so we can remove that" - they made the sacrifices needed to get the other features they needed. It was very much a compromise.

But you are right that the SmartFusion shows that combinations can be made - just as the SmartFusion2 shows that it is not a simple matter.

Indeed. And that is how it (currently, at least) must be if you want decent FPGA on the device.

The packaging is something that should be easier to change - there is no technical reason not to put the same chip in a lower pin package (as long as the package is big enough for the die and a carrier pcb, of course).

No, it is certainly not pointless - although it certainly /is/ costly at the moment. Horizontal side-by-side packaging is an established technique, and is used in a number of high-end devices. If you have a wide and fast memory bus, then the whole thing can be much smaller, simpler and lower power if the dies are adjacent and you have short, thin traces between dies on a carrier pcb within the package. The board designer has no issues with length or impedance matching, and the line drivers are far smaller and lower power.

Vertical die-on-die stacking is a newer technology, with a good deal of research into a variety of techniques. It is already in use for symmetrical designs such as multi-die DRAM and Flash packages. But the real benefit will come with DRAM dies connected to processor or FPGA dies. Rather than having a 64-bit wide databus with powerful bi-directional drivers, complex serialisation/deserialisation hardware, PLLs, etc., plus a 20-bit address/command bus with tracking of pages, pre-fetches and so on, you could just have a full-duplex 512-bit wide databus and a full address bus, with everything running at a lower clock rate and data lines driven over a distance of a millimetre or two. Total system power would be cut drastically, as would latency, and you could drop much of the complex interface and control circuitry on both sides of the link. Your DRAM starts to look more like wide, tightly-coupled SRAM - your processor can drop all but its L0 cache.
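
As a back-of-the-envelope illustration (round numbers of my choosing, not taken from any datasheet): a 64-bit DDR3 interface at 1600 MT/s moves 64 x 1600e6 / 8 = 12.8 GB/s. A 512-bit on-package bus needs only 200 MHz, single data rate, to match that: 512 x 200e6 / 8 = 12.8 GB/s - and each of those lines is driven a millimetre or two through a small, simple driver instead of across a board through a terminated transmission line.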

There are still many manufacturing challenges to overcome, and heat management is hard, but it will come - the potential benefits are enormous.

Reply to
David Brown

So which of these go to hell when you use a process in use by MCU makers? Heck, you mention Flash putting the clock back a couple of process nodes, but that is what I am using, Lattice Flash FPGAs.

None of that is relevant. The point is the process used for MCUs with Flash, RAM and analog is just as good for FPGAs if you aren't trying to be on the bleeding edge.

You said the key words... "high-end FPGAs". I'm not talking about high-end FPGAs. I'm talking about small parts at the low end combined with an MCU and analog. Even the Xilinx Zynq parts use very fast, very power-hungry CPUs that require off-chip memory. Totally different market... as usual, the telecom market.

Compromises, yes, "lot of"... I don't know. That's my point. They make those compromises for MCUs and seem to make it work. You have said nothing about why a useful FPGA can't be made using the same process as an MCU. You just talk about what they do when trying to squeeze every bit out of the silicon for the express purpose of large, fast FPGAs. Not every design needs large or fast.

This does not make sense at all. First, most CPLD-type devices are actually FPGA-type devices with a smaller capacity. Second, an FPGA uses die space. There is nothing magical about how much space is needed for routing or anything else. Just add a block of FPGA fabric to an MCU with an appropriate special interface and Bob's your uncle. The proof of the pudding is the fact that it has been done. My question is why this isn't done more often with a wider variety of parts, in particular by more vendors.

Yes, none of these things want to be on the same die, and yet it happens. You are mistaken about the Flash FPGAs. Only Xilinx adds a flash chip to an FPGA chip in one package. That offers little advantage. The Lattice parts have the flash on the die and offer *much* faster configuration load times, on the order of 1 ms instead of 100s of ms.

So?

I don't know where you get the "significant" part. They sell literally billions of MCUs with analog on them each year. Obviously the compromise is not so bad.

Only if you try to make it complex and ugly. Interfacing an FPGA to a CPU is not hard.

You mean you have seen these before Microsemi (formerly Actel) came out with their SmartFusion and SmartFusion2 devices?

What other features? Engineering doesn't dictate products; marketing does. Clearly Microsemi feels there is not enough of a market to provide the "everything" chip. I've had this discussion with Xilinx and they don't say it is too hard to do. They say it makes the number of different line items they have to inventory far too large. That's not an engineering problem. That is exactly what they do with MCUs, dozens or even hundreds of different versions. It just needs to be what your company wants to do... as decided by marketing.

You are talking about an entirely different world than I am. You are still talking about the markets Xilinx and Altera are going for, large fast FPGAs. Only expensive parts can use multiple die packages and the large complex functions you are describing. That is exactly what I don't need or want.

Look at the data sheet for a 64 pin ARM CM3 CPU chip. You will find lots of Flash, RAM and analog peripherals. None of them work poorly. They sell TONS of them, literally.

MCU makers are afraid of FPGAs and don't have access to the patents. FPGA makers have their telecom blinders on and now, with Microsoft getting into the server hardware market, that may be the next big thing for FPGAs.

That's my point. There is no reason why smaller devices can't be made like the SmartFusion and SmartFusion2. Even those parts are more FPGA than MCU with hundreds of pins and large packages. I would like to see products just like a 64 pin MCU with some analog, clock oscillators, brownout, etc. There is no technical reason why this can't be done.

--

Rick C
Reply to
rickman

How many would you buy if the product existed? Do you know others with your kind of needs?

I'd say anyone who seriously wanted to do an "everything" chip would need to have both FPGA and MCU know-how and customers. That would mean basically Xilinx buying up some MCU company, or vice versa. Doesn't seem likely. Xilinx did announce some slightly lower-end parts with the new Spartan 7s (with ADC) and single-core Zynqs, but they aren't even close to what you want. I think Altera's MAX 10 is a little bit closer (flash + ADC).

Maybe if some MCU company bought Lattice? But why would an MCU company ever think they need HW programmability if they don't know the first thing about it? I don't think Lattice has the money to buy anyone. Intel's acquisition of Altera got them more high-priced chip business, which is what they seem to want instead of cheap chips.

Reply to
Anssi Saari

I did read it. It was no more specific than the headline.

--
John
Reply to
quiasmox

Really? You have to buy an MCU company to add a CPU to an FPGA? Xilinx already makes the Zynq with an ARM. Seems adding ARMs to any digital chip these days is like falling off a log. No need to buy anything remotely like a company. Maybe a couple of good engineers from an MCU company.

I was not aware of the MAX 10. I don't peruse the Altera or Xilinx sites much anymore. The MAX 10 has an interesting ADC, but only one. It could be multiplexed and integrated to produce a pair of 48 kHz, 15 bit data streams. Not quite 16 bits, but close. No direct support for a DAC, and none of the other MCU features (that I could find in a brief look) like an internal clock oscillator, internal POR, brownout detector, clock divider for low-power operation, etc., etc.
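
For a rough sketch of what "multiplexed and integrated" could look like (the rates, oversampling ratio and function names below are assumptions for illustration, not MAX 10 specifics): alternate the one ADC between two inputs and accumulate a batch of conversions per channel for each 48 kHz output sample. With enough noise to dither the input, averaging buys very roughly half a bit of effective resolution per doubling of the oversampling ratio.

#include <stdint.h>

/* Placeholder for the real ADC driver: returns a 12-bit result, with
   'ch' selecting the analog mux input.  Dummy mid-scale value here. */
static uint16_t adc_read_12bit(int ch)
{
    (void)ch;
    return 2048;
}

#define OSR 10  /* conversions accumulated per channel per output sample (assumed) */

/* Produce one accumulated sample per channel at the decimated (audio)
   rate.  The accumulator is wider than 12 bits; scaling and any further
   filtering of the result are left out of this sketch. */
void read_stereo_sample(uint32_t *left, uint32_t *right)
{
    uint32_t acc[2] = { 0, 0 };
    for (int i = 0; i < OSR; i++) {
        acc[0] += adc_read_12bit(0);  /* mux input 0: "left" */
        acc[1] += adc_read_12bit(1);  /* mux input 1: "right" */
    }
    *left  = acc[0];
    *right = acc[1];
}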

Or any company who makes FPGAs can integrate an ARM... oh, wait! All four of them have! Five if you count Atmel, who made an old FPGA with an 8 bit processor (may have been an AVR).

The problem has nearly nothing to do with integrating a digital CPU into the digital FPGA, that's falling off a log. Making it work in place of an MCU with all the features is the part that seems to be missing.

I don't know for sure why this hasn't happened to date. There are the reasons I've been told and the reasons that I think. Who knows what they really are. I believe an FPGA + a full MCU would be a big winner.

I have a board design that I was going to redo as the FPGA is EOL. The combo of an MCU, a small FPGA (3000 4-LUTs would be gravy for the fast interfaces) and the equivalent of a 16 bit, 48 kHz stereo CODEC would be all my digital logic in one package, well, all of it if the I/Os are 5 volt tolerant, lol. I have to use a pair of largish quick switches to interface some 10 signals from a 5 volt interface. Then it needs to be in an MCU package (64/100 pin QFP and/or QFN).

The FPGA really doesn't need to be at all fast for most designs. If it ran half as fast as an FPGA in the same process node, that would handle some 99% of designs I expect. I've only worked on one design where we were pushing the speed of the part, and that was an existing part in a 5-8 year old product, the TTC T-Berd. We were asking it to handle an interface that was 4 times faster than the one it was designed to handle originally. Usually it is more the density that matters, that is, trying to use all the LUTs in the part. Usually there isn't enough routing to get much past 80 or 90 percent at best.

--

Rick C
Reply to
rickman

Same here. Just headlines and no content.

Reply to
o pere o

The content is that they are building servers that are very much faster as well as much lower power. What other content would you like to see? The development is not done, but it is pretty clear they are on to something significant. No?

--

Rick C
Reply to
rickman
