Software Reuse In Embedded Code

And there is a *cost* associated with each "deviation" we encounter between the *expected* (and/or *documented*) behavior of those "chips" and their *actual* behavior. I suspect anyone who's had a development schedule ruined because of a vendor's screwup is *really* hesitant to climb back into bed with that same vendor!

There's a big difference between hardware and software.

First, most folks *can't* grow their own silicon. The bar is too high to acquire that set of skills/tools/personnel. In terms of *software*, it's HARDER to get a license to sell real estate (which is one of those "professions" that is "open to everyone, regardless of skills"... kinda like "used car salesman") than it is to call yourself a "programmer". I.e., for as little as $20 you can get a 2 year old PC, a friend's old copy of Windows N-2, a "free" compiler and a "Learn C in 24 Hours" book. Now, you're a "programmer".

Be honest, how many of your colleagues are formally qualified to be writing code? How many just "picked it up" without any real "education"? (at school we used to laugh at all the physics majors who ended up writing code for a living... never having taken any of the associated courseware -- but, needing a *paycheck* from *something*...)

[granted, you still have to get someone to *hire* you... or, maybe just WRITE SHAREWARE and hope someone incorporates it into their product and sends you a royalty check?? :> ]

Second, there are lots of companies providing reliable hardware. If company A screws you, you limp through the project and then abandon company A on your next design! 30-40 years ago, it was a "necessary prerequisite" to announce multiple sources when you brought a new processor to the market. People *didn't* want to be at the mercy of a sole source supplier (pricing, availability, foundry problems, etc.). How many vendors of "reliable, ALL-PURPOSE software components" can you name?

Third, the flip side of the "low entry cost" argument applies. While it's easy for John Q Public to call himself a programmer, it's also a lot easier for Joe Professional to *write* a piece of reliable code, "in house". You're not at the mercy of some vendor to provide you with a solution (or, an *alternative*, but WORKING, solution) for your "software componentry".

Fourth, it's a LOT harder to test software than hardware. The fact that so many software bugs get *through* testing attests to this. Hardware can be "put through its paces" in a lot more controlled way (e.g., by the vendor). There's a lot less "state" that affects the performance of the hardware.

Fifth, it's a lot easier to come up with (an accepted) "general purpose" hardware solution/(sub)system than a similarly "general purpose" software solution. E.g., I was burning a DVD-ROM in Nero earlier today with files named:

  1. foo
  2. bar
  3. baz
  ...
100. fini

In the right "file explorer" pane, these files were listed in *numerical* order. In the *left* "DVD content" pane, they were listed using a L-R alpha sort -- so, 100 appeared after 10 instead of after 99. IN THE SAME APPLICATION. Sort() is sort() is sort(), right? Obviously, there were two sorts in use and neither agreed with the other. Which is the *right* "sort" for you? Does your 3rd party library have a different notion?

I've seen file sizes reported in a Windows Explorer window and an IE window that used different notions of how to round. "Hmmm... is this 17KB file really the same as this *other* 18KB file that has the same name and timestamp?"

Will the GUI you use for the first part of your application use the same conventions as the GUI you use in some *other* part? Will you ever *notice* the discrepancies?

Sixth, we (psychologically) more readily accept/embrace the *requirements* that a particular hardware solution imposes. "It needs a 16b data bus". "It only works with NAND flash." etc. But, for software solutions, we think we can magically *kludge* something that will "adapt" what we have to what we *need* instead of *fixing* (rewriting) it: "we'll take the existing memory/buffer management software designed for a 64KB memory space and *extend* it to handle our 16MB memory space by creating lots of 64K *pools* (each managed with the old management software) and we'll just add some glue logic on top to keep track of which *pool* the buffer came from!" Would you use a *real* printf (OR ANY LIBRARY THAT RELIED ON THE PRESENCE OF SAME) in a small PIC deployment? You'd probably be miffed to discover the printf was being used *only* in:

  for (finger = 0; finger < 10; finger++) {
      printf("This is finger #%d\n", finger);
  }

Seventh, *who* assumes responsibility for testing (and fixing!) the third party software? How keen are *you* to do that for somebody else's code? Will there be any unspoken pressure on *where* the fix gets made -- i.e., in the third party code or in an "adapter" that you develop in *your* code? When it comes to hardware, the folks involved in testing it are usually clearly identified. And, the range of solutions they have at their disposal is easily quantified: can the board be patched or redesigned? Or, is there something fundamentally flawed in the implementation that needs a complete rethink? (wanna bet that this results in major code rewrites when the problem is a software one... instead of abandoning the "component")

Both are examples of "reusable". It's just that it is typically easier to "fit" code from that existing solution to the "new" problem -- the problems are similar, the platforms (tend to be) similar, the design constraints similar, the personnel similar, etc.

You wouldn't, for example, want to take the memory allocator out of my "network speaker" and use it in a generic application. It's a bad fit. OTOH, a generic memory allocator would give abysmal runtime performance in my application.

IME, you have two fundamental "problems" that poke their head in your way when it comes to reuse:

- the guy (boss?) who thinks the problem can be greatly simplified by reusing existing code (while being clueless as to the actual details involved)

- the guy (gung-ho programmer?) who fails to see *any* similarity with existing solutions (NIH) and believes that only he/she can "save the day".

My approach is to reuse *designs* (which *could* result in lots of copy/paste from existing *codebases*) but fit them to the specific needs of the application. I have no desire to write yet another "sort" from scratch. I've long since forgotten the formal names for each of the various sorting techniques (bubble, shell, insert, etc.). *But*, I will know (remember) that some other project had data organized in a manner similar to "this one". And, I'll go see which sorting algorithm was used, there. And, tweak it to fit the needs of *this* application.

I.e., the "engineering" gets reused and I just have to do some "tidying up" to make it work right in this new use.

"Big companies" that can afford to "specialize" particular staff can benefit from this sort of approach by having "experts" in each "application sub-domain". E.g., an OS guy, a math guy, an I/O guy, a UI guy, etc. These people (resources) can accumulate knowledge as to the costs and benefits of the various approaches that they have used over the years (or, had to *maintain* on behalf of the company). So, they can be called upon to offer advice as to appropriate solutions (and the costs/rewards thereof) to staff making implementation decisions.

Reply to
Don Y

I get your point.

But in either case, whether it be chips or software components, you do testing according to how reliable the source is, and how you want to use them compared to the vendor's testing. With chips, you usually just need to do board-level testing - make sure the chips are working together with everything else. But sometimes you do more direct testing

- it's not uncommon to include a ram test to check all the bits in a ram chip, for example. And I've used chips that we did more temperature testing on (ASICs from a small company), and for radio communication chips it's often useful to do extra testing if you are stretching them to their limits.

For software "components", you can be fairly sure you are using them in a different way than the vendor's testing - so you need more testing. Hopefully you don't need to go as deep as line-by-line testing, but that happens too.

Ultimately, if /you/ make a board or a program, it's /your/ job to make sure you test it appropriately, to a level that makes sense for the application.

Reply to
David Brown

You can add a ram-clear loop to a hook in the startup sequence. And you can also work around it by explicitly initialising data to 0. But my point is that CCS has a clear design flaw which is in direct contradiction to the C standards, and which would be trivial to fix. It is even documented in their manuals that they know this behaviour is non-standard.

The excuse, apparently, is that clearing bss can take so long on some devices (if you have enough data, of course) that you might get a watchdog timeout. And they don't want to disable the watchdog on startup. So instead of having standard behaviour as the default and offering an option to avoid clearing the bss, or doing something useful like adding a "noinitialise" pragma or attribute that you can use on big arrays, they decided to produce a broken non-standard compiler and let users find this "surprise" by trial and error.

To be fair, they don't claim to be compatible or conform to any particular C standard.

Reply to
David Brown

Oh, so it's not like they just shipped a crt0.s that was "buggy". Their actions are deliberate...

So? They can't clear the watchdog *in* the BSS_init() loop?

Sounds like they are going to a lot of trouble to "rationalize" a bad decision.

Ah, that's even better! (not)

Reply to
Don Y

You misspeak: that is one of many clear design flaws. 'double' is 32 bit, as well, which wreaks havoc if you happen to have your own reusable library code lying around that actually needs 'double' to conform to the ANSI standard.

--
Tim Wescott
Wescott Design Services
Reply to
Tim Wescott

For my projects, the difference between a COTS product, supplied with just the basic documentation (descriptive specification, user guide and perhaps a trouble-shooting guide) and the in-house re-usable modules is vast.

Our in-house developed modules are available with full source code, original design documentation, test patterns and certification. Additionally, the review notes, change documentation and full version history will be a matter of record.

A sentiment with which I am in complete agreement.

Not always the case but a new project should review previous work, looking for anything that may be useful. Whatever is turned up then needs to be reviewed in the light of the requirements of the new project. It is much like choosing hardware components, and you need as much of a data-sheet for software components as you do for the hardware ones.

Certainly not without a suitability review being conducted first. Some modules may be useful (with or without modification). It won't be simply ported that's for sure.

--
********************************************************************
Paul E. Bennett...............
Reply to
Paul E. Bennett

Exactly. And, for the inevitable "things that fall between the cracks", you can always chase down the folks involved with it (at inception or since) for clarification.

I don't want to paint the OSS (or "for purchase") folks with an overly broad brush but, take a *critical* look at the "product" they produce -- when you can do so *thoroughly*. Yeah, it *might* work. But, how comfortable would you be betting your product (or your *company*!) on it "for the long haul"?

Three first-hand examples (of "many"?):

I was an early adopter of Jaluna (v1). It had many of the features I was looking for (for a project at that time). And, a *supposedly* good pedigree (Sun/Chorus).

Sure, you could get a system running by following the (detailed) steps provided. You could build a custom kernel. And, some toy apps. But, once you started poking around under the hood, the proverbial fan took a direct hit!

Why does every project think they have to invent their own, completely AS(s)ININE build system? Is there some **really** **really** overwhelming reason why make(1) won't work for you? Especially with gobs of cheap disk space and CPU??

I.e., try to fudge the Jaluna build process and you will shoot yourself in *both* feet -- with a BAZOOKA!

And, since it is so odd, you never are 100% confident that it really is rebuilding all the correct dependencies. So, better safe than sorry and "make all" (*I* sure don't want to spend a day chasing down a bug only to discover some module is out-of-date!)

As for the documentation: man pages were missing or contained glaring errors. (OK, the fact that you want your man(1) pages to sit in HTML format is only mildly annoying...)

The code itself looked incomplete. E.g., /* XXX FIXME XXX */ And, comments were sparse -- if at all. I spent a day trying to sort out why their fdisk was misbehaving (because nowhere does it tell you everything that is going on in the process).

Similarly, I got on the Inferno bandwagon early on. Actually *paid* for a commercial license before they open sourced it. Again, a great pedigree -- Bell Labs, Ritchie, Pike, etc. And, the basis for some *commercial* products!

Delightful system, conceptually (though I think there are areas that need to be rethought -- and, they've already made "about faces" in some regards).

Again, the documentation was abysmal. Looking through the code, you'd see things like:

my_function() { /* trust me ... */

followed by pages of uncommented code. Or, comments that made sense *solely* to the author but were obviously not intended for the benefit of anyone who followed...

I limped through the design of a system built on this. And, was totally disappointed with the resulting performance (to be fair, I *could* easily have thrown more horsepower at it... but, I shouldn't have *had* to! It would be like using a 3GHz PC *just* so you could write your chess program in Java whereas an old klunker could run the same *algorithms* comfortably in C/Lisp/etc.)

As a result, I took the concepts from that solution and re-applied them to a more conventional implementation. Better performance, easier to maintain, etc.

[Nota Bene: *design* reuse, clearly not *code* reuse (since Limbo isn't quite C)]

This brings me to yesterday... :>

Re: my "cell phone/tablet camera" thread.

Since there are currently applications that can "photograph" a QR barcode on a mobile device, maybe someone has already *publicly* solved this (or part of this) problem?

Take a walk down Roberto's basement while he's not home... "Heh heh heh... silly man! Did he think he could *hide* from me the secret of which books to touch to gain entry??!" Sure as heck, "ZXing" claims to do a good portion of what I want!

[Yikes! 64MB download just to photo-decode barcodes???]

Unpack the ZIP archive and, as to be expected, *no* documentation. "Um, excuse me, can someone please point me to 'main()'? And, if it's not *too* much trouble, can you give me a brief rundown on the algorithms that are used so I'm not just looking at a bunch of number crunching without any context...?"

This *despite* the fact that it is "included in some Nokia (?) phones".

This suggests it works (at least "somewhat"). And, also makes you wonder what that vendor's standards are for software quality! (maybe they have their own internal version that is better documented?)

So, I have to ask myself, how much time do I want to spend sorting through other people's "problems" and what's the potential *reward* for that investment? Will I sink a lot of time into something only to discover that it was a "hobbyist" effort?

"Hmmm... but Nokia (?) did, so maybe its worth the effort?"

Then I remember: Chorus/Sun... Bell Labs/Pike/Ritchie... will I be better served coming up with a solution that fits *my* needs (does ZXing work if the camera is in motion? does it expect the user to preview the images? does it eat gobs of CPU as it is written in Java?) instead of belatedly discovering that it *doesn't*?

[please note that I am not trying to disparage any of the "products" mentioned, here. I was obviously drawn to them because they *are* attractive. I wish *all* of them success! But, I also have to worry about getting product out the door... *reliable* product!]

Yet, despite however pedantic and thorough you *hope* to be, it is always *embarrassing* what sorts of details you will overlook. Assumptions that you don't even think about because they are so *basic*.

E.g., I always use simple problems to explain how computers work to "lay folk". I.e., how they simply (rigidly!) follow directions (but very quickly :> ). I often use changing a car tire as an example -- because it is easily identified with and not complex in nature.

I start out by describing the steps like:

- remove hub cap/wheel cover

- loosen lug nuts

- jack up car

- remove lug nuts

- remove wheel ... (install new wheel)

Everyone will agree with this.

Until I ask, "How did you do that from the driver's seat?" (i.e., you never exited the vehicle).

So, you prepend "exit the car" to the list (in detail: open door, step out, close door) -- at which time I add, "and get run over by a passing 18-wheeler!" (because you forgot to check to see if it was *clear* to exit the vehicle -- that's called a *bug*! :> )

Someone interested in the discussion will then start thinking about the problem in finer detail. And, eventually, they will realize just how many "little details" are involved in this "simple" activity. Details that you don't even consciously acknowledge when performing the task. And, would *easily* fail to mention to "a visitor from another planet" looking for information on how to do this.

The same is true with almost everything we do. "What have I implicitly *assumed* here? *That* is what's going to eventually bite me!"

Exactly. The firm at which we employed "Standard Product" (software) was simply extending their *hardware* component base (subsystems) into the software realm. "Hey, if we can have a 3 axis motor control drive, we can also have a 3 axis *servo* controller software package! And, the two need not necessarily be tied to each other!"

In small "brain trusts", you can do this sort of review informally. People remember what particular projects used and their drawbacks so could recommend or rule them out while chatting "at the water cooler". Then, the surviving candidates could be researched in greater depth.

Reply to
Don Y

It's true that C requires 64-bit doubles (or, technically, it requires at least 10 digits of precision - which you can't get in 32 bits). However, while it would be nice to have support for full doubles, having 32-bit "doubles" is very common on smaller embedded targets and should not come as such a big surprise.

But I didn't mean to imply that the bss issue was CCS's /only/ design flaw. Amongst others are that the current version is based on a heavily modified ancient version of Eclipse, rather than as plugins for modern versions. That means you miss out on the last 5 years or so progress in Eclipse (and there's been a lot), and that it is stuck on Windows only. But I gather that the next major version will have a more current Eclipse - at least in this case the developers know they can improve the situation. What bugs me most about the bss issue is that they know about it, yet refuse to do anything about it.

Reply to
David Brown

I hate the term Intellectual Property (it is not property, although "IP" may be a legal term by now), but anyway.

If something is IP, then you can find out what it does by two means:

- inspecting the source

- reverse engineering

IF IT IS INTELLECTUAL PROPERTY, THAT DOESN'T MEAN IT IS CLOSED SOURCE! The whole Open Source movement and the whole Free Software movement is built on the notion that we can control open source IP.

As regards inspecting the source and reverse engineering, both activities are done on a wide scale. It is very hard to prove those victimless crimes, as the poor dissected chip is not a legal party to file a complaint. If you come into the open with the information, that is a different matter. But you may disassemble or single step XXXXXX.DLL and decide that it is a piece of garbage that you won't have in your project, and nobody would be the wiser.

--
Groetjes Albert
Reply to
Albert van der Horst

(IANAL, and rules may vary from country to country.)

Reverse engineering and other inspection is not a crime. It might be against a EULA or other license or contract, which makes it illegal but not a crime. Like copyright infringement, which is not a crime (and certainly not "piracy"), you can be sued for economic or other losses by the injured party, but it is not a crime (meaning you are prosecuted by the state, and can be jailed) unless you are economically motivated and working on a reasonably large scale. (There are other exceptions where your activities are a crime if you live in the land of Mickey Mouse laws.)

If you are doing the reverse engineering for the purposes of compatibility or interaction, then it is in fact legal regardless of what the EULA says - most EULAs contain clauses that are not legally enforceable.

Reply to
David Brown

Nor am I -- thankfully! :>

Also, the "injured party" has to have demonstrated an active and consistent defense of their IP. I.e., if lots of folks are doing this and they only come after *you*, then you have a stronger defense. Conversely, if they religiously go after *everyone* known or suspected of this type of "violation", you have a harder time defending yourself.

This, IMO, is one of the biggest arguments *against* copyright, patent, etc. "protection" -- the burden falls on the IP holder to defend it.

Reply to
Don Y

No, that's not correct - it depends on the type of "IP". If you have a trademark and you don't defend it, you lose the rights to it. But if you have the copyright to something, you keep those rights regardless of how much you fight for it or not. Success in past court cases will often make it easier to win new cases (or to persuade people to settle out of court), but you don't have to litigate or otherwise fight for your rights if you don't want to. The same applies to patents.

The biggest problems are exactly the opposite, especially with patents. The patent holder will sue someone, who then has to pay to defend themselves in court - and if the defendant believes the patent is invalid, the burden of proof is on them to prove it invalid. This means that in many cases, especially with software patents, the innocent defendant in patent cases has to pay high legal costs - it is cheaper for them to pay the protection money.

Reply to
David Brown

IANAL either ...

... but unfortunately in the USA, some reverse engineering IS a crime. The DMCA (Digital Millennium Copyright Act) forbids most use cases of reverse engineering encryption/decryption, anti-copying and digital rights management schemes.

DMCA contradicts itself, patent law and consumer protection law ... and few aspects of it have been challenged in court despite the law being 15 years old. It has been amended a couple of times, but the result has been that the law has only gotten murkier.

Moreover ... and this just happened ... a new bill in Congress is trying to make copyright infringement a punishable crime as opposed to the civil offense it now is. This is very worrisome because it interacts with the more nasty twists of DMCA which make it unclear when, if ever, fair use applies.

George

Reply to
George Neuner

You're missing the point. No one is going to hunt down possible infringers on your behalf. It's a civil issue (in the US). So, *you* have to identify all potential infringers and bring suit against them. And, in the process, put *your* patent's validity on the line (the "accused" can move that your patent is invalid and, if he wins that point, you *lose* that patent protection).

You also have to spend time/money to investigate (reverse engineer) the means by which the *suspected* infringement is taking place. I.e., unless your explicit independent claims are infringed, you have no grounds for your case. (I can do the same thing that your patent claims to have invented but do so *differently* and I'm not infringing; the patent examiner's job is to try to get your patent application to be as narrowly focused as possible)

I see no real value to modern day patents -- especially with terms like 20+ years.

Note, also, that public disclosure of a patentable idea turns it into prior art. Making subsequent patentability a moot point (though in some jurisdictions you may have up to a year to capitalize on this)

IMO, the GPL is just as bad as patents. The idea should be to encourage people to *innovate*. Forcing people to share their innovations doesn't really do that -- any more than giving people exclusive use of their innovations! E.g., I use a Berkeley style license on anything that I release...

Reply to
Don Y

My point here is that you /don't/ have to hunt down possible infringers. It is only with trademarks that you /have/ to pursue people, and even then it is only if they are "visible". If you make a wheelbarrow, called it an "ipod", and sell a hundred per year, then Apple can and will ignore you, with no harm to their trademark. But if you integrate a music system and sell ten thousand, then Apple /must/ sue you for the protection of their trademark.

For other types of "IP", there is /no/ requirement to pursue infringers. It's up to you - if you think it is worth the time and cost, then go for it. If not, then you can send them salesmen or lawyer's letters or just ignore them.

It is correct that you will have to invest money if you want to take the infringer to court - and I agree that in an ideal system, only the guilty party would ever have to pay. But for such cases, either party could be guilty - patent trolls are as common as patent infringers.

A better system would be to have a cheaper, faster, more technical and less legal arbitration system to have technically competent independent experts decide on copyright and patent disputes. Most issues could be cleared up quickly and at minimal costs (except, of course, for any damages paid by the guilty party), and either side could always appeal to normal civil courts if they disagreed with the judgement.

It's worth distinguishing a bit between different types of IP - trademark cases are usually easy to see one way or the other, because things are out in the open. Copyright abuse is often harder to prove one way or the other. But you seem to be talking mainly about patents, which are the most controversial of the three. I agree that patents are a bigger problem, so we can concentrate on them.

I've heard that there is no point in taking out a patent unless your invention is worth $10m - otherwise it's not worth the money suing people.

There is a vast amount that is wrong with the patent system we have today, and it is the total antithesis of the original aim of patents. This is most obvious in the world of "software patents" which some countries have, but applies more generally.

However, I don't agree with your feelings that it is unfair on the patent holder, who must invest so much to pursue an infringer. It is /equally/ unfair on those who are sued by patent holders when they have not infringed on the patent, or when the patent should be invalidated. It is a terrible system for everyone, except the lawyers.

The GPL is an excellent (though not perfect) choice of licence for many types of software - but it is not appropriate for all types. Very roughly speaking, it is a good choice for software that people will use as programs, but a poor choice for software that people will use as part of their own programs (then something like a BSD or MPL style is normally better). The GPL forces the software to remain free - sometimes the "free" part of that phrase is the best part, sometimes the "forces" part is the worst part.

So the GPL is a good choice for gcc - there is no reason why anyone would want to take the source code of gcc and hide it away unless they wanted to sell some or all of it as their own proprietary code. The GPL provides legal protection against it. But on the other hand, the GPL is a poor choice for the gnu readline library - free software fanatics choose the GPL for readline as a way to try to force people to use GPL for their software. The result is that lots of programs that could benefit from using the library, cannot use it because of the license - and that there are several "competing" libraries around with different developer-friendly licenses (mostly BSD-style).

When the GPL restricts /your/ freedom to choose a license for /your/ code, then it is non-free and a bad choice. But when you /want/ your code to be free, then it is a good choice to help keep it free.

Reply to
David Brown

Perhaps my reference to Mickey Mouse laws was too subtle, but that's what I meant.

Apparently, the DMCA /does/ allow for reverse engineering for compatibility purposes, and for security issues, amongst its various exceptions.

The DMCA contradicts most things, especially common sense and the basic human perception of fairness.

Remember, courts and laws are a /legal/ system, not a /justice/ system - and the law has always been heavily biased towards who has the most money. (This is not just a criticism of the USA, though it is perhaps more obvious there than in most countries - it extends back at least as far as Roman times.)

What I always find difficult to understand is how this sort of thing can happen in a democracy. You are supposed to be governed by the people, for the people. It is unlikely that you will find a single living person in the USA above the age of 5 who has never infringed on copyright - with the huge majority doing so regularly and knowingly. So this law would turn the entire population into criminals. Have you ever taped a program off the tele, and watched it more than once? Or kept the recording for more than 30 days? Go directly to jail, do not pass go and do not collect $200.

Reply to
David Brown

Frivolous software patents are essentially a self correcting problem. They have a limited life and most of them have already died.

The whole issue of IP protection is to create an incentive for people and companies to risk resources disproportionate to the potential individual gain in the development of new ideas. A large percentage of these developments never produce enough revenue to pay development costs. Something is needed to protect the ideas that are truly unique and useful. Real technical innovation has slowed down a lot in the last decade.

Regards

walter..

--
Walter Banks
Byte Craft Limited


Reply to
Walter Banks

If only that were true!

There are huge numbers of software patents around, and /very/ few of them would ever have been granted if the rules were followed - to get a patent on something, it must be an /invention/ that is /new/, /non-obvious/, and /useful/. Many people feel that a purely software solution is not an "invention" - thus all pure software patents are invalid and frivolous. Even if you allow that a software algorithm is an "invention", the majority of granted patents are not new, or are at least very similar to existing software or methods, and a very large proportion are obvious to experts in the field.

And even in cases where the patent is clearly frivolous beyond any reasonable doubt, if the owner sues an "infringer", it can still be cheaper for the victim to settle out of court. And the victim, being (typically) bound by law to put their shareholders profits above their own sense of morality and justice, will do exactly that. Then when the patent owner attacks the next "infringer", they can point to the previous settlement as extra evidence. So frivolous patents are a self-reinforcing problem, not self-correcting.

Case in point - if what I've read is correct, Microsoft makes more profit from Android phones than from Windows Phone 7 devices, because manufacturers like HTC must pay them royalties for the use of patents that are "infringed" by Android (and the Linux kernel in particular). They have not revealed exactly which patents are covered, but they are almost certainly mostly frivolous (MS was awarded a patent on the "PgUp" and "PgDn" keys three years ago, as an example).

Oh, I agree on the principle behind the original aim of patents - it's a good idea to have a system that rewards innovation, inventiveness and people willing to take risks for a good idea. It was designed to allow small inventors to have a good idea and let big manufacturers mass-produce the inventions for the public good, while letting the inventor get a solid share of the profits.

The trouble is that the modern patent system, especially in more modern areas like software, simply does not do that. It fails in almost every conceivable way, and is a very big reason why technical innovation has slowed down. When technical companies spend more on lawyers than on engineers, the system is unrecoverably broken.

mvh.,

David

Reply to
David Brown

I guess it depends on what you consider "frivolous".

Part of the problem (IMO) is that engineers (except those who seem to delight in listing patents on their CVs) look at most patents and consider them "obvious" -- which contradicts the idea of "innovative" (in our minds).

So, to an engineer, most patents are "a joke".

However, that doesn't stop a patent from being *granted*. Hint: patent examiners are civil servants and this is just a *job* to them. They may not even have any real technical proficiency in the areas for which they are critiquing the patent applications! (And, from speaking with one, there is apparently a lot of nonsense "mechanism" involved in the process that rewards them for "shuffling paper" instead of "doing the right job".)

While that seems irrational on the face of it, note that those developments must obviously add value to the products to *justify* those costs, else they simply wouldn't be undertaken. You don't build a *different* mousetrap unless it will also be a *better* mousetrap and, presumably, generate more revenue, recognition, etc. for you!

The problem is the profit-and-loss side of the equation. It's easy to encourage innovation. Get 6 "motivated" engineers in a room and have them see who can one-up the others. It won't take long for a "better" (though perhaps not "best") idea to surface. And, they'll probably *enjoy* the "competition"!

But, in the business world, "motivate" is spelled with two $'s! Which means there has to be some reckoning at the end of the day to square up accounts. Businesses tend to be greedy and self-serving (that's not a criticism, just a statement of fact). They aren't inclined to discard a financial/business advantage if they don't have to!

(can someone explain why CompuServe so stodgily held onto its "ownership" of the GIF format? Did they even have any real corporate existence anymore?? Or, was this their "sole asset"?)

Businesses expect to be "reimbursed" for their innovations in terms of *profits*. :>

Coming up with a practical scheme that addresses innovation, protection and dissemination is what patents were *supposed* to do. Unfortunately, times have changed so much in the decades since their inception that the time scales just don't make sense! They now *hinder* (IMO) more than *help*.
Reply to
Don Y

Well, I happen to be related to two IP attorneys and count several others as friends ... I considered going into IP law myself at one point. IP attorneys are /required/ to have a science or engineering background so they tend to be much more reasonable about things.

Every one of them considers the DMCA a crock of $%^& and would love to have a multi-million contra case to try.

Your first mistake is thinking you live in a democracy ... you don't (no matter where you live). I can't think of a single nation that actually is a democracy.

There are republics, federated republics, constitutional monarchies, actual monarchies, dictatorships, theocracies, and some of the apparent dictatorships might really be oligarchies (it's hard to tell the difference sometimes) ... but AFAICS, no democracies. Some countries may embrace classic democratic elements such as public elections, but that doesn't make them democracies.

In a democracy every voice is equal and every issue that affects the public is put to collective vote. Any form of government that involves representation rather than direct public involvement is not a democracy.

The fact that the U.S. is NOT a democracy is enshrined in the Pledge of Allegiance, which reads: "... and to the Republic, for which it stands ...".

The U.S. is, in fact, an example of a federated republic. For decades now, schools have been misteaching and misleading students into believing they live in a democracy. I normally don't subscribe to conspiracy theory, but in this matter I have come to believe that this particular misteaching is by design ... the misconception is far too widespread to be the result of students misunderstanding their civics lessons. [YMMV and I really don't care to debate it ... at least not here.]

Actually, recording television and radio for the purpose of "time-shifting" is expressly PERMITTED by law, regardless of the idiotic warnings you hear during some broadcasts. That was settled by the US Supreme Court back in the 1980s (the "Betamax" case) and subsequently was written into copyright law. I don't have the cite handy, but time-shifting falls under fair use, and the DMCA did not change existing law regarding it.

As you mentioned above, you must destroy a "time-shift" recording within 30 days and you aren't permitted to profit in any way from it. You can, however, watch it as many times as you wish within the time limit.

George

Reply to
George Neuner
