Hi-Tech Software bought by Microchip - no more other compilers

hehe. Sorry!! And yes, I agree!!

Jon

Reply to
Jon Kirwan

Several things have changed. Primarily, the FPGA vendors have cleaned up their acts. Their tools work. ...and the tools are free not just for the low-end devices, but for all but the largest. No, I don't anticipate having to use the largest FPGAs anymore. I'm no longer in the land of infinite budgets and defense contracts.

Reply to
krw

You do realize there really are such things as antitrust laws? IBM was deathly scared of another run-in with the DOJ. For good reason.

The bottom line is that one can't restrict the market as overtly as you give M$ credit for, and get away with it. ...and get away with it, they did.

Reply to
krw

Actually, I know a little (though not much) about such laws, and I /do/ know that IBM was paranoid about them. But you have to admit it sounds daft that IBM would have to pay themselves to install their own software on their own computers! And the anti-trust laws are designed (amongst other things) to stop abuses of monopoly - since IBM did not come close to having a monopoly on either the hardware or the OS, and would have offered a choice to customers, I don't see why there would have been a problem with very low costs for OS/2. But of course, I am not a lawyer, and I probably wouldn't understand the details if I heard them - I'm an engineer who is frustrated that a technically superior product failed through one company's criminal manipulation of the market, and another company's apparent stupidity.

What came out in the court cases was some *very* overt market manipulation and control by MS ("simplified accounting" being one example) - they didn't need to go that much further. But MS have clearly demonstrated that they are happy to commit knowingly criminal acts to get such control, and to rely on either getting away with it entirely (certain US administrations have tended to let big, important companies get away with almost anything), or delaying cases and eventually paying a fine. From MS's viewpoint, it is still "getting away with it" even if they must pay substantial fines in the future - criminal market manipulation is just another type of long-term business investment to MS.

Reply to
David Brown

Of course. Everyone else must pay, so... TANSTAAFL is another good reason.

The reasons for much of what IBM has done extend back to the '56 consent decree, which *was* an antitrust issue. The consent decree was only lifted in the last few years.

It's not so much "administrations" letting them get away with anything, but that the government, in general, is simply clueless. The DOJ's case was weak because the issues were irrelevant.

Reply to
krw

Turning off optimizations never helps promote a compiler; it is counterproductive for the compiler company. A couple of numbers worth mentioning about compiler development: a good compiler for a new processor typically costs about 10% of the silicon development cost and currently generates about 1% of the total revenue.

The development teams are about two to one in size. The silicon development costs are dominated by technology costs and software tool costs. Compiler development costs are dominated by code generator design and target-specific test suites.

Regards,

--
Walter Banks
Byte Craft Limited


Reply to
Walter Banks

About half of our customers have company-standard IDEs that integrate their tools. They use tool sets that support error reporting standards and tool execution standards. Source level debugging is done using one of about 3 or 4 standard formats.

They do well at mixing compilers, processor simulators, system simulators, JTAG and other background-mode emulators with a common IDE.

Regards,

--
Walter Banks
Byte Craft Limited


Reply to
Walter Banks

So raising the selling price of the silicon to get an extra 1% into the till (i.e. negligible) would offset the loss of revenue from giving away the tool, even if you ignore the increase in small volume sales that such a move would generate.

Reply to
rebel

Partly correct. The business perspective is much more complicated. It would mean contracting out compiler development and funding it during the same non-revenue period in which the silicon itself was generating no revenue, putting more financial pressure on the silicon company when it can least afford it.

You are correct in the long term.

The tools needed to support small volume sales effectively are different from the tools needed for large volume sales. The initial question is, "Why should that be?" Most of the differences are around source management, application metrics, code maintenance, debugging, validation and regression testing.

Small volume stresses fast time to market, solid reference designs, low debugging costs and testing.

The needs of the two markets are quite different.

Regards,

--
Walter Banks
Byte Craft Limited


Reply to
Walter Banks

So many of these things become possible when you are willing to put in that critical hacking session to make them work. For example, gcc can be launched from Visual Studio just fine... if you write a script to reformat the error messages into what VS expects. I abandoned that long ago, but it was a useful capability for a while, building embedded code from the same environment used to debug a desktop build of the algorithms.
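Purely as an illustration (this isn't the script referred to above, which isn't shown anywhere in the thread), a minimal filter along those lines might look like the C below. It assumes gcc's usual "file:line:col: message" diagnostic shape and the cl.exe-style "file(line): message" form that the VS output window can jump to; drive-letter paths and other odd cases would need more care.

/* gcc2vs.c - reformat gcc diagnostics into a Visual Studio friendly form.
   Typical use:  make 2>&1 | gcc2vs                                       */
#include <stdio.h>

int main(void)
{
    char line[4096];

    while (fgets(line, sizeof line, stdin)) {
        char file[1024], rest[3072];
        unsigned lineno, col;

        /* "file:line:col: message" - the common gcc diagnostic shape */
        if (sscanf(line, "%1023[^:]:%u:%u: %3071[^\n]",
                   file, &lineno, &col, rest) == 4)
            printf("%s(%u): %s\n", file, lineno, rest);
        /* some messages (e.g. from the linker) omit the column */
        else if (sscanf(line, "%1023[^:]:%u: %3071[^\n]",
                        file, &lineno, rest) == 3)
            printf("%s(%u): %s\n", file, lineno, rest);
        else
            fputs(line, stdout);    /* pass everything else through */
    }
    return 0;
}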

Reply to
cs_posting

That is Eclipse for ya.

Eclipse is the basis for the Xilinx EDK's SDK. I like the debug features but the whole thing is hateful for version control (a million little XML files with no apparent rhyme or reason). And yes, when you remove a file from the project, it deletes it from the file system.

Also, consider that you have a directory called src where you keep your sources. Eclipse always assumes that if a file is in that directory, then it's part of the project. Why does this suck? Well, all the sources are compiled. Since the MicroBlaze linker is too stupid to check whether a function is actually called (no smart linking), everything compiled ends up in your executable. So if your sources are of the "one file, one function" sort, a workaround is simply not to add the unused sources to the project. Which you can't do in Eclipse. And if you delete the source from the project, it's deleted from the file system, which messes up the version control.

Maybe this was improved in 11.2. Dunno.

-a

Reply to
Andy Peters

While I agree that it's bad that project files and files on the disk are forcibly tied, I believe that if a file is not part of a project's source code, it should not be in the source code directory! If a file is not part of your current program, don't keep it mixed up with the rest of the program. For my development work, I always compile using a makefile that compiles all .c files in the directory. For me, "project management" is simply file management.

When you talk about a directory of files with one function per file, it sounds to me like a library. If that's the case, then just build it as a library - when you link to that library, only the required functions will be included in the executable.
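By way of a sketch only (the file and library names here are invented), the "one function per file" layout maps onto a static library like this:

/* util_crc.c - one function per file; imagine siblings util_sort.c,
   util_hex.c and so on, each holding a single function.              */
unsigned char crc8(const unsigned char *p, unsigned n)
{
    unsigned char crc = 0;

    while (n--) {
        int i;
        crc ^= *p++;
        for (i = 0; i < 8; i++)
            crc = (crc & 0x80) ? (unsigned char)((crc << 1) ^ 0x31)
                               : (unsigned char)(crc << 1);
    }
    return crc;
}

/* Build the pieces into an archive and link against it:

     gcc -c util_crc.c util_sort.c util_hex.c
     ar rcs libutil.a util_crc.o util_sort.o util_hex.o
     gcc main.c -L. -lutil -o app

   The linker only pulls an object file out of libutil.a when it resolves
   a symbol that is actually referenced, so the unused single-function
   files never end up in the executable.                               */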

Finally, although I have never used the MicroBlaze, I believe the compiler is gcc. Depending on the version of gcc (and matching binutils), you might have the "-ffunction-sections" switch, which puts each function in its own code section, and "-Wl,--gc-sections", which tells the linker to drop unneeded sections. Alternatively (again, depending on the version of gcc) you might have the "--combine -fwhole-program" switches, which compile everything at once and treat all functions as effectively "static". Unused static functions will then be dropped. In other words, you *do* have smart linking - if the MicroBlaze gcc is a new enough version.
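Again just as an illustrative sketch (not MicroBlaze-specific, and only meaningful if the toolchain actually has these switches), here is the function-sections idea on a trivial example:

/* deadcode.c - letting the linker drop unreferenced code. */
#include <stdio.h>

int unused_helper(int x)     /* never called from anywhere */
{
    return x * 42;
}

int used_helper(int x)
{
    return x + 1;
}

int main(void)
{
    printf("%d\n", used_helper(1));
    return 0;
}

/* Put every function in its own section, then let the linker collect
   the garbage:

     gcc -ffunction-sections -fdata-sections -c deadcode.c
     gcc -Wl,--gc-sections deadcode.o -o deadcode

   With a new enough gcc/binutils, "nm deadcode | grep unused_helper"
   comes back empty - the uncalled function never reaches the binary. */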

Reply to
David Brown

Well, that's cutting it a little too simple. Projects exist where not all source files are needed in every target being made.

Projects exist where it would be a complete nightmare if there were to be such a thing as "the" source directory instead of a strictly maintained tree of source directories. There's directory-based access control sometimes, or auto-generated source that has no business sitting in the same directory as the manually maintained one.

Reply to
Hans-Bernhard Bröker

That's all true enough - and methods that suit one type of project and one type of developer might not suit everyone. Let me just say that I think the ideal situation is a one-to-one correspondence between files in your source code directory/directories and the files that are compiled and linked into the project. But if your project structure does not make this practical, you will need to do something else.

Reply to
David Brown

IOW, it works, unless it doesn't. If there is to be any reuse, you're going to have files that aren't used in a particular build or version. You certainly don't want source files spread out all over the place or, worse, multiple copies of the same file.

Reply to
krw
