Regarding calculation of free memory

In article , Drazen Kacar writes:
|> However, I'm confused with the mention of "reverse engineering" in the
|> above paragraph. For Solaris, at least, things one needs to do are fairly
|> well documented[1]. It's also possible to automate the process, so one
|> doesn't have to do it again and again by hand.
|>
|> Could you give me an example of something that required reverse
|> engineering?

One simple example is linking two files that logically look like the following:

extern int a = 1;
extern int b = 1;

and:

extern int a = 1;
extern int c = 1;

In the context of external references to a, b and c, which will you get (a) searched for and (b) overridden?

There are a zillion other examples, and I have seen at least a couple of dozen such issues arise in real programs.

The point is that Solaris, like all or almost all modern systems, does not publish a specification of its linker and loader. It provides a guide on how to use it, which is not the same thing at all. Solaris 2.6 and before had some SERIOUS bugs in this area, but it was very hard to get them reported because there wasn't any document to say what should happen!

Regards, Nick Maclaren.

Reply to
Nick Maclaren

In article , "FredK" writes:
|> To bring up an antique that gets it right, the VMS linker would
|> complain that there were multiple definitions and refuse to produce
|> an executable. Which from time to time generates complaints
|> from people porting from UNIX. Of course, they apparently don't
|> realize that what they are producing on UNIX may not be correct,
|> or only correct by accident.

Without a precise specification, I would say that the very concept of correctness is absent. The best that can be hoped for is that it will do what the user intended it to do - if, indeed, the user had a definite intent in the matter. I have too often had people refuse to fix such inconsistencies because they were not prepared to make changes to code that they did not understand the intent of.

But the same problem can arise in 'correct' code, where some of those linkages are weak externals or weak definitions. Fortran COMMON is a particularly fruitful source of confusion, as it is not quite a definition, not quite a reference and neither weak nor strong. Mix that with other types of symbol, and try to work out what should happen ....

Regards, Nick Maclaren.

Reply to
Nick Maclaren


To bring up an antique that gets it right, the VMS linker would complain that there were multiple definitions and refuse to produce an executable. Which from time to time generates complaints from people porting from UNIX. Of course, they apparently don't realize that what they are producing on UNIX may not be correct, or only correct by accident.

But then again, I also have problems with the general UNIX/C case sensitivity - having too many times ported code that had routines with the same name differing only by case, and where code randomly called the wrong one (usually in an error path where nobody bothered to debug it - heck, even in a POSIX based conformance test suite!).

Reply to
FredK

Same for Solaris, but not when linking these objects dynamically, where matters such as scoping (is the object visible or not) and ordering also become important (you are allowed to replace certain functions in the C library, after all).

a.c:
b.c:
ld: fatal: symbol `a' is multiply-defined:
        (file a.o type=OBJT; file b.o type=OBJT);
ld: fatal: File processing errors. No output written to a.out

That's not just an issue with case sensitivity; similar situations can arise with other naming schemes. (And this one source of confusion is easily removed by a coding style which specifies how to use case.)

Casper

--
Expressed in this posting are my opinions.  They are in no way related
to opinions held by my employer, Sun Microsystems.
Statements on Sun products included here are not gospel and may
be fiction rather than truth.
Reply to
Casper H.S. Dik

In article , Casper H.S. Dik writes:
|> Same for Solaris, but not when linking these objects dynamically
|> where there are also matters as scoping (is the object visible or not)
|> and ordering are important (you are allowed to replace certain functions
|> in the C library, after all).

Precisely. And similar remarks apply to every modern Unix I have looked at, including Linux.

Regards, Nick Maclaren.

Reply to
Nick Maclaren

Though some access points (WRT54G) already have sbrk(), and I'm sure it's not long until Blackberries have it too ;-)

--
David Gay
dgay@acm.org
Reply to
David Gay

Well, you haven't seen mine, then :-)

Regards, Nick Maclaren.

Reply to
Nick Maclaren

By dynamic linking, I assume some sort of runtime activation of a DLL? Not exactly sure what is meant.

I've yet to find UNIX code that sticks to a convention, and that is remotely safe from boneheaded mistakes. Probably 80% or more of the code I've seen doesn't use C prototypes, which would have at least pointed out they wanted Delete() instead of delete() - each of which had different parameters (for an example).

Reply to
FredK

Yes, I would imagine my wristwatch (were I to own such a thing) might soon implement the whole of the C library and allow me to run my own code via WiFi or some such ;-)

--

  ... Hank

http://home.earthlink.net/~horedson
http://home.earthlink.net/~w0rli
Reply to
Hank Oredson

Sigh. This is getting tedious, so this will be my last response.

Most of these 'solutions' work, in that they do something that MAY help. But the precise behaviour of the system and those facilities is rarely, if ever, documented, and they are as likely to do something that merely appears to do what the user expects (or needs) but actually does not. This very often means that a program checks out perfectly, but then misbehaves in real use.

For example, that facility doesn't provide all of the information that may be needed to answer the nastier questions I have described and, anyway, is data dependent. So you have to check it every time you run your program to be certain it is doing what you expect. That is ridiculous. For example, consider:

franklin-2$LD_DEBUG=bindings ls /dev/null 2>&1 | egrep 'calling|open'

13010: calling .init (from sorted order): /usr/lib/libc.so.1
13010: calling .init (done): /usr/lib/libc.so.1
13010: calling .fini: /usr/lib/libc.so.1
franklin-2$

versus:

franklin-2$LD_DEBUG=bindings ls -l /dev/null 2>&1 | egrep 'calling|open'

12852: calling .init (from sorted order): /usr/lib/libc.so.1
12852: calling .init (done): /usr/lib/libc.so.1
12852: binding file=/usr/lib/libc.so.1 to file=/usr/lib/libc.so.1: symbol `_open64'
12852: binding file=/usr/lib/libc.so.1 to file=/usr/lib/libc.so.1: symbol `_open'
12852: calling .fini: /usr/lib/libc.so.1
franklin-2$

To the best of my knowledge, there is no decently encapsulated tool for modern Unices that will tell you the dependencies of an executable in enough detail to predict exactly how the binding will work. So, if you need to check that an executable is safe when used on another system (or even by someone else!), you are stuffed. You would be surprised how often I see binding bugs in vendors' own software run on vendors' own systems, caused by a 'transparent' upgrade, and the main reason it is so common is that it can't practically be checked.

Real software engineering, back in the days before it was called that, was about writing programs to be used by other people and often in unknown environments. I am aware that this is not a concern of home hackers and similar (including most computer scientists).

Regards, Nick Maclaren.

Reply to
Nick Maclaren

If they code to that standard, it is more likely to be someone else's code run without your permission :-)

Regards, Nick Maclaren.

Reply to
Nick Maclaren

I'm curious where such code comes from - can you say? Most (nearly all?) of the code I see, i.e., mostly open source code, has prototypes.

--
David Gay
snipped-for-privacy@acm.org

Reply to
dgay

In article , snipped-for-privacy@barnowl.research.intel-research.net writes:
|> "FredK" writes:
|> >
|> > I've yet to find UNIX code that sticks to a convention, and that is
|> > remotely safe from boneheaded mistakes. Probably 80% or more
|> > of the code I've seen doesn't use C prototypes, which would
|> > have at least pointed out they wanted Delete() instead of delete() -
|> > each of which had different parameters (for an example).
|>
|> I'm curious where such code comes from - can you say? Most (nearly all?) of
|> the code I see, i.e., mostly open source code, has prototypes.

Older code, mainly. Most open source code has been using ISO C (or what it thinks is ISO C) for only 5-10 years - before 1995, there were a LOT of systems which didn't have an even remotely conforming compiler.

Regards, Nick Maclaren.

Reply to
Nick Maclaren

Well yes, but we are 10 years after that. And most of the code I saw in the early 90s (again, mostly open source) used, at least, conditionally compiled prototypes. gcc was available for most platforms in that time frame. And conversion of existing code bases using tools like protoize was fairly straightforward...

--
David Gay
dgay@acm.org
Reply to
dgay

Hang on. That is ONLY 10 years - most software has a much longer life than that, even with the source unchanged!

Firstly, gcc wasn't available for most platforms before 1995 - it was available for the platforms used by most users - not the same thing, at all, at all. And one of the most important platforms (Solaris) didn't support ISO C until 1998 and not fully until a couple of years back.

Secondly, conversion of clean codes using protoize was possible, but why bother unless you were starting an update cycle? ISO C89 supports the older form of function definition.

Thirdly, conversion of some codes (mostly, but not all, unclean) wasn't possible because protoize couldn't (and probably can't) handle anything other than simple code. I wrote some incredible preprocessor hacks to make X11.3 usable (don't ask), which used prototypes and conformed to the draft standard, and protoize (really don't ask) took one look and collapsed in a heap.

Regards, Nick Maclaren.

Reply to
Nick Maclaren


However, gcc was essentially always available for Solaris (there may have been a short delay after the first Solaris release, but I didn't notice it, at least).

However, if you're not updating it, why is anyone looking at it?

I don't think X11 is typical of "most codes", though. Isn't it the one which gives different signatures to the same function when viewed from different modules?

--
David Gay
dgay@acm.org
Reply to
dgay

Are you talking about Linux or Unix in general ?

At least I had to convert some sample client code for a communication package to K&R so that the primitive compiler, intended mainly for compiling the Unix kernel, could compile it. On the systems where the client was intended to be used, the customer did no program development, only production, so no "modern" compiler would be present.

Paul

Reply to
Paul Keinanen

The problem was in the libraries. Solaris didn't sort out even the basics until 1995/6, didn't enable ISO C + POSIX codes until 1998 (Solaris 7) and didn't sort out all of the chaos until Solaris 9.

Still, it did better than a certain big blue company :-(

Probably. It has included every other sin, crime and idiocy, so why not complete the set?

Regards, Nick Maclaren.

Reply to
Nick Maclaren

I'm talking about the open source available in the late 80s onwards time frame, so mostly Unix in general rather than Linux in particular. And some amount was more cross-platform than that (e.g., emacs).

--
David Gay
dgay@acm.org
Reply to
dgay

That is why so many serious real-time applications do their own heap management. If the heap usage is simple, programming it yourself may not even be hard.

Yeah, you have to avoid printf and friends. They are mostly in the user interface, though. Most real-time applications are split into a hard real-time part and a user interface that is not. If you adopt this split, you can use printf freely in the user interface and the problem all but disappears.

--
-- 
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- like all pyramid schemes -- ultimately falters.
albert@spenarnc.xs4all.nl http://home.hccnet.nl/a.w.m.van.der.horst
Reply to
Albert van der Horst
