I am familiar with the FPGA design and implementation flow but know little about the corresponding ASIC flow. My question is the following:
what are the main similarities/differences between designing and implementing an algorithm on these two different targets? In particular, what design/implementation practices and user tasks are needed in an ASIC flow and not in an FPGA flow, and vice versa?
And finally, what is the most widely used tool in ASIC design?
Good RTL code has, in general, no differences between FPGA and ASIC. You might use resources only available on a particular device, but that is not solely a question of FPGA vs. ASIC; it is also a question of whether your chosen technology provides special features like memories, PLLs, DLLs, IO buffers, clock resources, and so on. In that respect, the difference between an ASIC and an FPGA is about the same size as the difference between one ASIC and another, or between one FPGA and another.
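As an illustration of what "portable RTL" means here, a minimal Verilog sketch (module and signal names are my own): a synchronous RAM that is inferred rather than instantiated, so the same source works for both targets.

```verilog
// Generic single-port synchronous RAM, inferred rather than instantiated.
// FPGA synthesis typically maps this to a block RAM primitive; an ASIC
// flow maps it to a compiled memory macro or a register array. No vendor
// primitive appears in the source, so the RTL stays portable.
module generic_ram #(
    parameter ADDR_W = 8,
    parameter DATA_W = 16
) (
    input                       clk,
    input                       we,
    input      [ADDR_W-1:0]     addr,
    input      [DATA_W-1:0]     wdata,
    output reg [DATA_W-1:0]     rdata
);
    reg [DATA_W-1:0] mem [0:(1<<ADDR_W)-1];

    always @(posedge clk) begin
        if (we)
            mem[addr] <= wdata;
        rdata <= mem[addr];   // registered read; behaves the same on both targets
    end
endmodule
```

Only where a feature cannot be inferred (a PLL, a special IO standard) do you drop down to technology-specific instantiation, and that is the part you rewrite when you move between technologies.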
The major difference is the processing after you have the netlist from synthesis. For an ASIC you typically need to insert test structures (scan chains), which are not usual for FPGAs. Layout and P&R for an ASIC may require much more effort than for an FPGA, and after the layout step you need to do post-layout verification, produce masks for production, run the wafers, and handle packaging and testing before you have an IC. This is _much_ more expensive in money and time than the layout, P&R, and programming for an FPGA.
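Conceptually, scan insertion replaces each functional flip-flop with a scan-equivalent one: a mux in front of the D input lets all the flops in the design be chained into one long shift register in test mode. A hedged sketch of such a cell (this is only the idea; a real DFT tool swaps in library scan cells and stitches the chain automatically):

```verilog
// Conceptual scan flip-flop: in test mode (scan_en = 1) the flop takes
// its value from the previous flop in the scan chain instead of from the
// functional logic, so state can be shifted in/out through the chain.
module scan_dff (
    input      clk,
    input      d,        // functional data input
    input      scan_in,  // output of the previous flop in the chain
    input      scan_en,  // test mode select
    output reg q         // also feeds the next flop's scan_in
);
    always @(posedge clk)
        q <= scan_en ? scan_in : d;
endmodule
```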
Difficult question, because many tools are used across the design flow. PrimeTime for static timing analysis could be the tool with the highest market share among all tools needed for the ASIC design flow. By major design step, I would rank emacs or vi for design entry and ModelSim for simulation. For synthesis it was Synopsys DC a few years ago, but I expect the tools for physically aware synthesis have outrun DC in the last five years. I don't know if Synopsys is still the leader in ASIC synthesis, as synthesis today needs a good link to layout. Layout is AFAIK dominated by Cadence.
Verification. For an ASIC you really must be sure that you get it right the first time. Therefore huge efforts are spent on simulation, formal verification, Monte Carlo timing simulation across various process parameters, and the like.
Otherwise, HDL-based ASIC design is very similar to FPGA-based design. For more advanced projects, additional degrees of freedom are available that require additional knowledge and tools: adding your own full-custom cells, optimizing the clock distribution and power distribution, improving manufacturability, etc. But for most designs (as opposed to most chips produced), automatic tools are used for that.
- FPGAs come with IOBs (QDR/DDR/SDR, differential/single-ended, in/out/inout, etc.); with ASICs you have to license someone else's or create your own
- FPGAs come with some analog bits (PLLs, DACs, ADCs, MGTs, etc.); with ASICs you have to license someone else's or create your own
- FPGA resources are pre-placed and the tools have to map whatever you want to do onto what exists in the device; with ASICs you can have any primitive you can license or design on your own
- ASICs know no such thing as a LUT4/LUT6/etc.; you can have a 20-input combinational function with nearly no timing penalty, since synthesis tools can create it as a primitive, unlike FPGAs, where everything has to be mapped onto a tree of N-input functions
- FPGA messes are inexpensive: fix the firmware and update... if a prototype releases the magic blue smoke, the worst that happens is you chuck out the $1k PCB along with a $5k FPGA and swear to get the PCB right (not mix up power/ground pins) next time around. ASIC f*ck-ups after the masks have been ordered cost about a million dollars (@90nm), and it takes at least a month (more like two) from re-tape-out to new silicon when everything goes well
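The LUT-mapping point above can be illustrated with a minimal Verilog sketch (the function itself is arbitrary, chosen only for the example):

```verilog
// A 20-input combinational function written as plain RTL. An ASIC
// synthesizer flattens this into a network of simple gates sized for the
// path; FPGA tools must decompose it into a tree of 4- or 6-input LUTs,
// with each tree level adding LUT and routing delay.
module wide_func (
    input  [19:0] a,
    output        y
);
    assign y = &a[19:10] | ^a[9:0];  // reduction AND of the top half ORed
                                     // with reduction XOR of the bottom half
endmodule
```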
Because all the very low-level "analog bits" like FFs, IOBs, PLLs, etc. are considered high technological-risk items, most small-to-medium ASIC firms that do not specialize in low-level components license other firms' designs instead of rolling their own... unless you want to enter the hard-macro library market, it makes no sense to waste millions on designing PLL/FF/IOB/etc. libraries if you can license silicon-proven ones for $500k. (This $500k figure is only a random believable number.)
So, for me, the most significant difference between ASIC and FPGA, after the development cost and production timetable, is having to license all the low-level nuts and bolts so I can concentrate on the higher-level functions I wish to implement, without going through a dozen re-spins just to get the low-level stuff like FFs and IOBs to work properly.
Or choose an ASIC technology that fits. I never had to bother about IOs in an ASIC. DDR is easy in an ASIC, but may be hard if your FPGA doesn't support it: every FPGA allows DDR as long as the clock is _slow_, but you will have trouble getting data in and out at speeds approaching your FPGA's fmax unless the technology supports it with dedicated hardware (don't expect to run every design on a Virtex *g*).
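To make the DDR point concrete, a conceptual Verilog sketch of a DDR output (this is the textbook mux-based idiom, not production code; on an FPGA it only runs at speed if it maps onto a dedicated ODDR cell in the IOB):

```verilog
// Conceptual DDR output: one data bit launched per clock edge. In an
// ASIC this is just two flops and a mux; in an FPGA without dedicated
// DDR IO registers, the clock-as-select mux limits achievable speed.
module ddr_out (
    input  clk,
    input  d_rise,   // bit launched on the rising edge
    input  d_fall,   // bit launched on the falling edge
    output q
);
    reg qr, qf;
    always @(posedge clk) qr <= d_rise;
    always @(negedge clk) qf <= d_fall;
    assign q = clk ? qr : qf;  // clock selects which half-cycle bit drives the pad
endmodule
```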
Which FPGA provides DACs and ADCs? The Actel Fusion provides AFAIK only an ADC and is the only mixed-signal FPGA known to me. I would have considered a PLL standard for current ASIC technologies.
This is not true for the cell-based design I worked on.
Mask-programmable cell/gate arrays are one step closer to being ASICs than partially mask-programmable FPGAs like Xilinx's EasyPath... but they are still only mask-programmable devices and therefore not quite application-specific, since, as you said, you cannot specify anything that does not happen to be part of the mass-prefabricated blanks.
To me, an ASIC is only limited by budget, power, surface area, the laws of physics, current process technology, IP licensing agreements and patents. A mask-programmable device is very ASIC-like for routing but is still limited by the blank provider's choice and placement of individual blocks.
If you work with mask-programmable devices, all the transistor-level and delicate timing details are hidden in the blank's cells and were taken care of long before you instantiated or inferred the first NAND gate in the middle of your design. But before Fujitsu could start marketing those mask-programmable devices, its engineers certainly went through many respins to fine-tune all the available functions so their customers would not have to worry about them. In the same way, foundries keep refining their process technology and primitive libraries so ASIC engineers can license them and not waste respins just making them work, since they have been extensively tested in the foundry's own fabs and silicon-proven in multiple other clients' designs.
For specialized functions, there are sometimes unforeseen complications, such as Rambus refusing to work with a specific foundry to qualify their XDR and PCIe macros on your preferred foundry's 90nm process, in which case you end up having to drop XDR, rework your design to use DDR2/3 instead, and hunt down some other PCIe provider who will work with your foundry or already has a qualified macro... I wasted over a month updating test sequences affected by such changes, and it took many months to iron out all the glitches caused by the memory controller swap. Designing around a piece of IP for months only to see the contract negotiations stall and then fall through for arbitrary reasons makes working with hard-macro IP cores rather 'interesting'.