Larkin, Power BASIC cannot be THAT good:


Like in a #define?

Funny thing is, 24x60x60 = 86,400, actually...

Not likely that 60 seconds in a minute, or 60 minutes in an hour, would change... unless... the client moves from a 24-hour-workday to, say, an 8-hour work day?

Michael

Reply to
Michael

"const int" is usually preferable since then the compiler will be able to perform type checking rather than just blindly inserting text into an expression.

Although I did once meet a C compiler that pretty much ignored the "const" part of "const int" and therefore actually allocated memory (in RAM!) and stored the value, fetching it again every time it was needed. (I believe it met the C99/C++ standard of not allowing you to actually change the value, though.) Made for slower code and wasted RAM... which there wasn't a lot of (whereas there was plenty of ROM). (It's also a case where you run the risk of "casting out constness," as Scott Meyers says.) I was pretty surprised...

When I'm using decent compilers I'll also use, e.g., 60*60*24 to make it more obvious where the "magic number" came from, knowing that the optimizer will figure out that the expression is constant and not re-compute it every time.

And I agree that using "const int foo=86400; // 60*60*24" is just asking for trouble in that you're likely to forget to keep both "in sync" if it ever has to change.

---Joel

Reply to
Joel Koltner

No, the point is to remove an opportunity for programmer error. Let the compiler do the math.

Reply to
AZ Nomad

Gawd, that old thing might actually predate SSE2 :) Since unfortunately the mplayer developers can't put together a release, their current SVN commit is the thing to use. Or get something vaguely recent, I guess for Windows you want to get SMplayer from

formatting link

Reply to
Anssi Saari

I could not agree more. But I also wanted to make the point that for a quick hack it can also be very powerful if you stay within the envelope.

I was intending to contrast the last two with the others which are robust but I see now that you could read it another way. It is the old ambiguity problem with natural language descriptions.

I'm not so sure.

I am not so purist as to insist on perfection. A write once and throw away program can be done in whatever language makes the job easiest. The only tricky thing is making sure that the code really works as intended.

Regards, Martin Brown

Reply to
Martin Brown

FPS? Well, Doom will do that today, easily. But I meant colors. ;-)

Tim

--
Deep Friar: a very philosophical monk.
Website: http://webpages.charter.net/dawill/tmoranwms
Reply to
Tim Williams

On a sunny day (Thu, 11 Jun 2009 16:42:10 -0500) it happened "Tim Williams" wrote in :

hehe :-)

Reply to
Jan Panteltje

It's required to store it somewhere so that you can take its address. "const" variables are still lvalues, not expressions.

Reply to
Nobody

I haven't conducted a rigorous survey, but that's my impression from being subscribed to BugTraq for the last 13 years.

Results 1 - 10 of about 154,000 for "sql injection attack"

Okay, so by page 10 it's lowered its estimate to 73,400, but it's still not exactly obscure.

Security doesn't matter if the program will only be run by its author on data created by its author; there's no mileage in exploiting your own account.

Unfortunately, code which reads and/or writes structured text formats is frequently used in exposed environments, either on web servers or for processing data obtained straight off the 'net.

Reply to
Nobody

A lot of the older compilers did that copy to ram even when they purported to generate code for embedded systems. The solution was to do the constant tables in assembler and then export a reference to them.

The physical address of a const in ROM is known at link time. And you can be sure that no attempt by the CPU to trash a ROM value will ever succeed. The same cannot be said of a value in ram if things go haywire.

This immutability of ROM did cause amusement in bank switched user register CPUs like the 9900 and 99k. You could tell it was in big trouble if the user register bank was in ROM where incrementing the program counter doesn't work any more.

I never appreciated at the time just how good the 99k was at interrupt handling until I tried to do the same job on a 68k series.

Regards, Martin Brown

Reply to
Martin Brown

I think I will concede the point entirely where PHP Internet exploits are concerned; that is a lost cause. But I am not sure whether the problems are the fault of processing structured text by string manipulation or of inadequate safeguards in the script language.

Yes. I had in mind jobs which are essentially turning the rubbish formatted dump of raw data that some manufacturer outputs into a format that is useful to their customer. You would not believe the number of expensive instruments that output measurement data in the most user hostile and bulky formats possible.

And that is where it is very prone to abuse. I am still inclined to think that the problem is more with the current implementations and the sheer number of bad exploitable scripts around than with the concept itself. Easy-to-use tools without safety guards are never a good idea.

Regards, Martin Brown

Reply to
Martin Brown

You are confusing "excellence" with quality here.

If they did not produce a product with *adequate* quality then customers would not buy it and the company would not make a profit. Microsoft must be doing something right since the suckers are all buying Office 2007.

But that is pure market economics. If it is cheap to fix in the field then it ships earlier and the company banks the cash. The CEO's duty is to maximise shareholder value and his own bonus (sometimes mostly the latter). CPU designs are even more carefully checked and simulated before they go into production on a fab line. It all depends on the upfront capital costs to manufacture and the cost of fixing in the field. You may not like it, but when the in-service fix is almost free to the supplier then they will exploit that to their advantage.

The company management's job is to find the line between a perfect product delivered too late to make any money at one extreme and a buggy one that fails to sell because it doesn't work.

You should probably acquire some better in circuit debugging tools then. Post mortem and realtime debuggers have advanced a long way. Even the humble PICs have advanced and cheap debuggers available these days. Bugs are always cheaper to fix the sooner they are caught so it makes sense to use the best tools available for the job.

Type and ignite has never been a good method. The coherence length of code written at a terminal has a bad habit of being equal to the number of lines that fit on a display page. Everyone does it for tiny throw away programs but it isn't advisable for anything bigger.

Some of the complexity is unavoidable in very large systems. Your experience with one-programmer, few-month projects does not scale well.

A rough guide is that every attempted non-trivial bug fix in a large project has a 50% risk of causing an unwanted side effect. It means some are worth leaving in if there is a workaround.

I always recall IBM's FORTRAN G compiler which was so uncertain of its world view that the result of a successful compilation was: NO DIAGNOSTICS GENERATED?

Debug showed that the string length of the message was miscounted and the trailing NUL was being printed as "?". Rumour had it that their change procedures made it too onerous to correct this minor cosmetic fault.

Regards, Martin Brown

Reply to
Martin Brown

Sigh. I suppose "quality" now means that you can ship it and charge for it and expect that it won't crash too often and that your customers will find the remaining hundreds of crashes and security vulnerabilities by experiment.

There are a couple of reasonable responses to this: use simpler, preferably free, stuff that does the basic job; and if your PC works and is finally mostly stable, don't upgrade.

So where are the class-action lawyers when we really need them?

Well, I just think that the tradeoff could be shifted a lot in the quality direction, without extra expense, and with *sooner* deliveries, with different methods and culture. I don't think that security vulns are a marketing tool, especially when everybody now expects that the next release will be just as bad.

The world has suffered mightily from the personality defects of Bill Gates.

My embedded stuff works just fine. Things get done quickly and ship bug-free, because I'm careful.

My on-screen stuff is mostly engineering utilities or one-time calcs; I'd never ship quickie things like this. But if you're arranging a screen display, the easiest thing to do is run the code, see how it looks, and tweak for beauty, as opposed to planning every character location in advance. Small design iterations can take, literally, 10 seconds. Embedded stuff I write, assemble, read, tweak, read, until it looks perfect, before I ever run it. Reading is a much better way to debug than testing. Reading is also a lost art in many circles.

In big systems, roughly half of the bugs are invisible, being module interactions, so no module author is likely to spot them by reading his code. So bugs are found by testing, much of that testing done by users. There must be a better way.

John

Reply to
John Larkin

Interesting, i just checked the multiply and got the familiar 86,400.

25480 seems to be an overflow result. With two 60s in the multiplier chain, two zeroes at the end of the product is expected.

YMMV

Reply to
JosephKK

Yes. Pretty much that is the decision that the bean counters and suits whose bonuses depend quadratically on shipped product value will take.

I am not defending these practices; I am merely pointing out how it is.

That is fine for shrink wrap consumer software but it doesn't even begin to address the issues of major commercial projects.

I hope your defect rate when typing is a lot better when you write programs!

The trouble is that we tend to hope and pray that the next version will be better. Vista was a dog following after XP which was pretty usable and is still being installed on new kit by savvy corporates.

The next 'Doze version does look like it might be an improvement; only time will tell.

Digital Research were too cocky by half with IBM over MSDOS. They let Gates get a foothold and win the deal. ISTR IBM licenced MSDOS for $1 per PC through not realising quite how many PCs would get sold.

It isn't just Microsoft. Although they do seem to have a higher defect rate than is ideal they also have some very talented people like Steve McConnell who have written very sensible texts on best practice for defensive programming. Trouble is that it all gets forgotten in the final rush to finish, test and ship. Sad thing is the humble programmers typically just get stuffed with free pizzas, jolt and unpaid overtime.

It is a great shame IBM & MS fell out over OS/2. That platform was very close to being a robust OS for consumer PCs, but Windows was flashier and the world chose slick Willy's Windows over IBM's staid OS/2. I think there may still be a few banking systems and air traffic control systems running on OS/2.

And the project size is relatively small.

I agree. An annotated paper listing or a diagram is hard to beat.

This isn't entirely true. Most of the big project integration faults stem from ambiguities in the original specifications but they hit home only when the modules interact. A lot of errors could be found earlier by the right sort of inspections and walk throughs.

But you do still have a big problem with system integration of large projects where N modules have N(N-1) possible interactions. For N=1 or 2 this isn't a big deal, but for N=100 or N=1000 you have to be exceptionally careful to avoid unintended side effects.

Regards, Martin Brown

Reply to
Martin Brown

When I'm at home, I tend to slouch and prefer dim lighting. And I never learned to type. When I write code, I spell check and re-read and fine-tune everything, several times, including comments. Actually, re-reading is a great way to find bugs and optimize code. Typing slowly and re-reading makes the final code come out *faster*.

I'm suspicious of people who touch-type pages of code at high speed and pretty much never look at it again until the bug reports become impossible to ignore.

XP is pretty good, after about a decade of debugging.

Yup. By choice. I don't want to be a cog in some machine or a Microserf.

I really want a 14" fanfold laser printer. It's not in this year's budget.

John

Reply to
John Larkin

Injection attacks arise from constructing structured text formats by directly manipulating the text. The code inserts a string at a point in the text with the intention that the string will correspond to a node in the parse tree. But if the string contains characters which are significant to the parser, it completely changes the overall structure. E.g. (using C syntax):

sprintf(cmd, "SELECT * FROM mytable WHERE mycolumn = '%s';", value);

This works fine until the attacker does:

/* note the unmatched single quote */
value = "foo';DROP TABLE mytable";

The intention was to "graft" a literal string at a specific point in the parse tree for the SQL command, but you end up with a completely different parse tree.

The reason why they're so prevalent in PHP is that the language is designed primarily for interfacing using textual formats, particularly HTML and SQL, but the language designers pushed the task of constructing the data (which is harder than it looks) onto the users.

A saner approach would have been to require SQL, HTML, shell commands, etc to be constructed through a DOM-style interface, with the language taking responsibility for generating text with the correct structure. Then the task would only need to be done once, by (hopefully) experienced programmers, rather than thousands of times by novices.

It's a lack of forethought (and simple laziness) from designers. Need to provide an interface to SQL databases? Easy:

int execute_sql(const char *cmd);

Need a way to run external programs? Even easier:

int system(const char *cmd);

Need a way to output HTML: document.write().

This approach guarantees that strings obtained from who knows where will end up being passed verbatim to the functions. Even more so when the language encourages the use of strings as the primary datatype.

Reply to
Nobody

Anyway, they were too busy debugging Fortran H--really an amazing compiler when it worked, but, *ahem* not their most robust offering. When I was an undergraduate, I learned Fortran 77 using Fortran G and H, debugging somebody else's published code for radiative transfer in interstellar giant molecular clouds. It eventually worked, amazingly enough--though I never did get that good at Fortran.

(I almost met John Backus once, does that count?) ;)

Cheers

Phil Hobbs

Reply to
Phil Hobbs

Thanks, that's a good point that I hadn't really thought about. Apparently the other compilers I've used have been smart enough to notice whether or not I ever actually did take the address of a "const" and, if not, just optimize out storing it in memory discretely.

Reply to
Joel Koltner

I had lunch with Walter Brattain.

John

Reply to
John Larkin
