Hi-Tech Software bought by Microchip - no more other compilers

Look at the requotes above, and reconcile "about to go bust" with "financially good enough to buy its competition".

Reply to
larwe

Goodness, so many claims. Do any of you have any backup?

Reply to
JosephKK

Try a remedial reading class. This time, stay awake.

Reply to
krw

That still needs something to trigger when the failure occurs. If you do that manually you'll be too late to get a proper stack trace.
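
For what it's worth, this is the sort of automatic trigger I mean - a minimal sketch assuming a Cortex-M class ARM and GCC, with illustrative names. Set a breakpoint on hard_fault_report() and the debugger halts with the machine state captured at the instant of the fault:

#include <stdint.h>

/* Receives a pointer to the exception frame the core pushed on fault entry. */
void hard_fault_report(uint32_t *stacked)
{
    volatile uint32_t pc = stacked[6];   /* address of the faulting instruction */
    volatile uint32_t lr = stacked[5];   /* caller's return address */
    (void)pc; (void)lr;
    for (;;) ;                           /* park so the debugger halts with state intact */
}

/* Vector-table entry: work out which stack was active, hand it to C. */
__attribute__((naked)) void HardFault_Handler(void)
{
    __asm volatile(
        "tst lr, #4          \n"
        "ite eq              \n"
        "mrseq r0, msp       \n"
        "mrsne r0, psp       \n"
        "b hard_fault_report \n");
}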

--
Failure does not prove something is impossible, failure simply
indicates you are not using the right tools...
                     "If it doesn't fit, use a bigger hammer!"
--------------------------------------------------------------
Reply to
Nico Coesel

So interrupt controller A makes eggs and interrupt controller B makes bacon? Come on, put the bits in the right place and every interrupt controller gets you an interrupt.
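
To make that concrete, the recipe is the same on every part I've seen; a sketch with made-up register names (placeholders, not any particular vendor's):

#include <stdint.h>

#define REG(a)      (*(volatile uint32_t *)(a))
#define TIMER_IE    REG(0x40001004u)   /* hypothetical peripheral int enable */
#define IRQ_ENABLE  REG(0xE0002000u)   /* hypothetical controller enable */
#define TIMER_IRQ   (1u << 3)

void timer_irq_init(void)
{
    TIMER_IE   |= 1u;         /* 1: enable the source in the peripheral */
    IRQ_ENABLE |= TIMER_IRQ;  /* 2: unmask it in the interrupt controller */
    /* 3: the handler goes in the vector table (done at link time here) */
}

void timer_isr(void)
{
    /* 4: acknowledge the source so it doesn't refire forever */
}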

Reply to
Nico Coesel

If you program an ARM like a PC then you'll run into trouble with the amount of memory. I did manage to get an SSL stack intended for a PC environment to run on an ARM controller but not without writing a managed malloc implementation first.
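
The general idea looks something like this - a minimal sketch (fixed-size blocks carved from a static arena, so the RAM budget is known up front), not the actual implementation I'm referring to:

#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE   64u
#define BLOCK_COUNT  128u               /* 8 KB arena - tune to the target */

static uint8_t arena[BLOCK_COUNT][BLOCK_SIZE];
static uint8_t used[BLOCK_COUNT];

void *pool_alloc(size_t n)
{
    if (n > BLOCK_SIZE)
        return NULL;                    /* refuse rather than fragment */
    for (size_t i = 0; i < BLOCK_COUNT; i++) {
        if (!used[i]) {
            used[i] = 1;
            return arena[i];
        }
    }
    return NULL;                        /* arena exhausted - caller must cope */
}

void pool_free(void *p)
{
    if (p) {
        size_t i = (size_t)((uint8_t *)p - &arena[0][0]) / BLOCK_SIZE;
        if (i < BLOCK_COUNT)
            used[i] = 0;
    }
}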

IOW PIC Is Crap (at least the 8-bit ones) - one of the very few architectures that C doesn't port to well. A current project I'm working on involves a PIC16 and an ARM; because of the crappy PIC16 architecture I have two different implementations of the same code for a communication protocol.

I never said C makes the low level details irrelevant. Don't put words into my 'mouth'! I don't think I even mentioned C before.

Anyway, IMHO getting a microcontroller going is a matter of figuring out how to program the damn thing (get your code into the flash), getting the toolchain set up, and getting a port pin to toggle from software. At that point you know the hardware is working at a basic level, and from there you can start to set up the rest of the peripherals you need. I like to keep a positive perspective, knowing that this process can be applied to any microcontroller because they all basically do the same thing. Why should that be difficult? Or am I working with too many platforms?
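
That first sign of life is typically only a few lines. The register names here are hypothetical placeholders for whatever the datasheet actually gives you:

#include <stdint.h>

#define GPIO_DIR  (*(volatile uint32_t *)0x40020000u)  /* hypothetical */
#define GPIO_OUT  (*(volatile uint32_t *)0x40020004u)  /* hypothetical */
#define LED_PIN   (1u << 5)

int main(void)
{
    GPIO_DIR |= LED_PIN;                    /* make the pin an output */
    for (;;) {
        GPIO_OUT ^= LED_PIN;                /* toggle it... */
        for (volatile uint32_t i = 0; i < 100000u; i++)
            ;                               /* ...slowly enough to see */
    }
}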

Reply to
Nico Coesel

Actually, Windows 1.0 was designed for the 8086, and some remnants of that legacy can still be found in the API today. For example, the GlobalLock()/GlobalUnlock() functions originally existed to provide a sort of virtual memory on a processor that lacked the features to support virtual memory transparently to application programs.
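
The usage pattern shows the legacy clearly - these are the real Win32 calls, still available today (the clipboard API, for one, still expects GlobalAlloc'd handles):

#include <windows.h>

void moveable_memory_demo(void)
{
    /* GMEM_MOVEABLE returns a handle, not a pointer - that's the point:
       on 16-bit Windows the block could be moved while unlocked. */
    HGLOBAL h = GlobalAlloc(GMEM_MOVEABLE, 1024);
    if (h == NULL)
        return;

    char *p = (char *)GlobalLock(h);  /* pin the block, get a usable pointer */
    if (p != NULL) {
        p[0] = 'x';                   /* touch it only while locked */
        GlobalUnlock(h);              /* allow the system to move it again */
    }
    GlobalFree(h);
}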

Reply to
Dombo

Windows was most certainly dominant, but it was not yet a given that it had "won". Warp could do everything the Windows of the day could do, and do it much better. It could do everything Win95 could do even though it shipped a year before Win95 came out. And it was easier to use and nicer to look at, as well as having more functionality and being more secure (DOS and Windows viruses came on disks rather than email in those days, but were still prevalent).

There were some limitations on hardware compatibility and drivers, but these were fairly minor and of little consequence to the target group (business users). OS/2 did have greater memory requirements to run smoothly - preferably 16 MB, which was a lot at the time (NT 3.51 was similar).

The key was software compatibility. As I said, OS/2 was much better at running Win16 and DOS programs than any other system. It could not run Win32 software (the NT 3.51 API), because MS wouldn't give out the details. But MS and IBM had agreements on the Win32s API (32-bit data and flat memory addressing, but missing things like multi-threading), which was becoming popular for larger programs on Win3.1. Win32s programs ran fine on NT 3.51, at least as well on OS/2, and acceptably on Win3.1 (if you didn't need to do anything else at the same time).

There were hopes and plans from both MS and IBM (and others, including *nix vendors) that the software industry would solidify around a few standardised APIs (Win32 and POSIX in particular), so that any modern operating system would have an implementation or translation layer for all these APIs and be able to run software written for any of them. That way people could pick the OS based on things like hardware support, ease of use, and OS-level features without having to worry about software compatibility. Applications might look a little out of place if they were not native, but they would run. As an example, NT has limited POSIX support, and can also run OS/2 16-bit command-line binaries (I don't know if that support still exists in NT's descendants).

When MS saw that OS/2 was actually gaining noticeable market share (even though almost no one sold pre-installed machines), they changed tactics and killed it off. I believe their insistence that manufacturers (IBM included) pay for a Windows license for each machine made, regardless of whether or not Windows was installed, was later condemned in court - long after the damage was done to consumers, suppliers, and the industry. PC manufacturers had to choose between selling only Windows machines or selling no Windows machines - guess which they chose? Major software developers were put under similar pressure.

Then there were tricks with software compatibility. I can't remember off-hand which version of Win32s OS/2 supported, but shortly after it was released, MS made a new version - the only difference was that the initialisation routine checked for OS/2 and gave an error message about requiring a newer version of Win32s. Thus any new software built with the latest Win32s would not run on OS/2.

Of course, MS didn't have to work /too/ hard to cripple OS/2. IBM's PHBs were doing a fine job themselves in various ways, and MS's marketing people could easily make the Win95 vapourware sound far more attractive than Warp.

In a sense you are right that windows had already won - history has shown repeatedly that those who partner with MS or trust in their promises and commitments will lose out sooner or later.

Reply to
David Brown

I didn't use OS/2 until Warp 3.0, and it allowed multiple DOS boxes. A particularly nice feature was that programs could get something like 720K conventional memory - much higher than with "real" DOS.

All manufacturers had to use "simplified accounting" (or pay a huge extra cost), which meant that if they wanted to ship Windows on more than a few machines, they had to buy Windows for all their machines. This was enough to stop OS/2 being shipped by non-IBM manufacturers because, as you say, it was an additional cost.

But for IBM the cost of installing OS/2 would have been minimal, since it was their own system - there were extra clauses in the MS license deal with IBM to specifically exclude OS/2 pre-installs beyond the usual "simplified accounting". I don't remember the details at all, but I believe IBM got an even greater Windows discount for installing Windows on 100% of their machines - MS practically gave away Windows licenses to IBM to avoid any competition.

Reply to
David Brown

Well, the way I see it, you can do it the hard way or the easy way. I like the easy way: it means I can get something to market quickly, with more time for me or more time to develop new stuff for my customer. Yup, I can do the command-line stuff, but if they take care of it for me it's one less thing I need to do, and the happier I am. Bring it on, I say.

Popular is not all bad. Popular means they can throw more money at it and make it easier - always a good thing. If you use the argument that idiots can do it, well, true, but idiots can't make a good product and will always fail.

Reply to
The Real Andy

Who said anything about manual? Any decent debugger/ICE will have ways of trapping on failures.

Reply to
krw

Just to clarify: OS/2 2.0 allowed multiple DOS boxes; 1.x didn't.

Right (the extra cost being full licenses), but that's a little different from what you said above.

Some only shipped OS/2, which got them out of the M$ box, though that market wasn't big enough to sustain either OS/2 or those smaller box vendors.

No, that is not true (on several fronts). The cost was not minimal, since OS/2 still had to be purchased by the PC group from the IBM software group. Also, the PC software group had to buy WinOS/2 licenses from M$, effectively paying for two Win licenses. For Win->OS/2 V3x upgrades, the "red box" Warp got around the additional WinOS/2 license. Requiring 100% installs would have made it impossible to just ignore the DOJ. The results were the same, but the "discounts" for counting boxes shipped instead of installs gave M$ plausible deniability during the DOJ suit.

Reply to
krw

It takes longer than your "day" to get all the bits and options sorted out between architectures.

Reply to
krw

Purchasing tools may require a more complicated approval process, perhaps stretching beyond the engineering portion of the company.

Reply to
cs_posting

Perhaps what this really indicates is that companies are no longer hiring engineers who like the work enough to have previously pursued it for their own interest.

Reply to
cs_posting

I was pursuing the debug-the-logic-on-a-desktop approach until we were temporarily bringing another developer on board. To support that, I spent an extremely long weekend at home writing scripts to get gdb remote debugging working under an IDE. Adding the additional person didn't work out, but I have to admit I'm really using that debugging capability.

I still often build the code for Linux, or occasionally Windows if I don't have a unit handy (I have both Win32 and X simulations of the embedded keyboard and display that get built into the desktop version). But it was very handy to put the client in a conference room in front of the product, let them play with it, and then, when they didn't like something, run back to my desk, recompile, and almost instantly start the modified code remotely under gdb on the box they were playing with. Technically a lot of that can be done just by ftp'ing the code over and running it without the debugger, but it's nice to be able to figure out what is going on when it behaves unexpectedly.

Reply to
cs_posting

Did you stop doing FPGA work on all but low-end-of-the-range devices, do you buy enough silicon that they actually give you tool seats, or do you have some kind of revolutionary solution to share?

Reply to
cs_posting

Not everyone who writes PC software assumes unlimited RAM. Ironically, the issue tends to be more acute on servers, as the number of concurrent instances you can support is often directly dependent upon how much RAM each instance uses.
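
To put rough numbers on it: if each instance needs 8 MB, a server with 4 GB free tops out around 500 concurrent instances; trim that footprint to 2 MB and the same box handles roughly 2000.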

IOW, PIC is an 8-bit microcontroller, not a general-purpose computer, and can't (sensibly) be programmed as if it were one. But this isn't specific to PIC; the first two computers I owned used the 6502, with 256 bytes of zero-page RAM and a 256-byte stack. It wasn't sensible to program those in C either.

You did mention integrating PICC into the IDE. In any case, I can't see how anyone would say "seen one, seen them all" if they were programming in assembler, which makes the differences quite clear.

It's not difficult to get to the point where you can achieve results. But that isn't the same thing as "getting up to speed". The latter means having memorised the datasheets, knowing the libraries and frameworks, knowing what code the compiler will generate, etc. It means being able to write near-optimal code on the first try, rather than write code, examine assembler, modify code, repeat.

I know this quite well, as it's been a while since I've had the chance to get up to speed on much. Nowadays, every project seems to involve a bunch of new APIs, formats, and protocols. The early stages of the project involve 30 minutes reading for every 10 minutes coding. By the time I no longer have to look stuff up, the project's done and it's time for a new learning curve.

Reply to
Nobody

I consider the command line way to be the easy way. I have a LOT more control and I completely avoid dealing with IDE 'issues' when they happen.

It is the difference between using a Rube Goldberg contraption and a simple tool. Compilers are very simple to use; so are linkers. They don't need all the foo-foo - colored fonts, sizable fonts, dockable toolbars, print previews, and a bevy of things none of which have one iota to do with compiling and linking. When you add all those extras, no matter what, there is a whole bag of software in there which can work poorly, incorrectly, well, or anything in between, and one can get caught up in trying to adjust this or that to "make it right" - but it is all a waste.

By the way, if you ask any of the developers of these tools (and I have), you will get admissions from them that they spend something on the order of 80-90% of their VERY VALUABLE compiler knowledge time on all this dross. Which means it is time that does NOT go into the compiler or linker, where I'd MUCH RATHER it be spent.

Give me command-line compilers/linkers, any day.

Jon

Reply to
Jon Kirwan

I don't think of an IDE as a way to avoid compiler and linker command-line options (the IDE merely launches my script that drives the compiler, or, for a production build, my makefile). I think of it as an editor backed by an automatically generated database of useful program info, one that also provides a source-level interface to the debugger.

I specifically dislike IDEs that force you to do things graphically. I want to use the IDE for development, but I don't want anyone else to have to use it. Production gets an integrated firmware loader created by the makefiles. Another engineer needing to maintain the code can fire up the IDE, or they can search the source using any tool they prefer to find what needs to change, change it with the editor of their choice, and then run the makefiles.

Reply to
cs_posting
