OT 0805 resistor noise

Couldn't they be dipped in lead-free solder?

robert

Reply to
Robert Latest

That's how most modern software works, too.

Reply to
Walter Harley

Except that huge, complex, synchronously-clocked hardware logic systems seldom have bugs. But most software has bugs. The design methodologies are distinct: in hardware logic, the current system state is unambiguous, and combinational logic based on the current state sets up the next state, which is committed, across the entire chip, at the next clock. People get into trouble when they break that paradigm. In software, the execution point is local, but wanders all over the place in often unpredictable paths, and in a multitasking system (i.e., most systems nowadays) there is seldom any decent mechanism for coordinating tasks.

So, most hardware works and most software sucks. The best programmers are usually engineers who implement everything as true state machines. The worst programmers are kids who went directly into CS and learned C++ and MFC from the get-go.

John

Reply to
John Larkin

Interesting observation.

But are you saying that the transitions in hardware logic systems are conceived and described as system-wide (what the software world would call "global variables")? Surely that's not true in a large system; as you said earlier, things are broken into functional blocks, "Multipliers, adders, fifos, PID controllers, ..."

Seems to me that as soon as you have a system consisting of smaller components that themselves embody some complexity, but that are described in a simplified form, the opportunity for bugs arises. For instance, one might think of a block as an "adder" but in fact it can overflow and wrap, a condition that would only be obvious in hindsight.

Similarly, I would think that the same opportunities for multithreading bugs that exist in software also exist in hardware. (Put it this way: I'm sure I could write a threading bug in hardware if I tried.)

Perhaps the differences have more to do with how concisely small functional blocks can be fully described (in software, even something as basic as a scrolling listbox is hard to fully specify concisely enough to be useful to a programmer), and with the number of layers of functional composition (>10 for even a simple desktop app, versus I imagine many fewer for a hardware logic system)?

I don't design hardware logic so I'm just guessing. (I do design software, so I'm all too familiar with the processes that lead to bugs on that side.)

Reply to
Walter Harley

Well, I agree with most of the points you made. But on this particular point, I have to say I think there is a considerable difference in complexity, on any metric I can think of, between something like Windows and something like a Pentium. The feature set of Windows is larger; the number of differently-purposed functional components is larger. A P4 Prescott has more transistors than Windows has lines of code, but you have to consider that most of those transistors are just vast arrays of identical storage - a comparison between Windows' size in memory (if all of it were loaded simultaneously) versus the Pentium's number of transistors might be more fair, and there Windows is considerably (and unfortunately) larger than the Pentium. So I think that holding them to the same standard of number of bugs is not quite fair; it might be more fair to hold them to "number of bugs per bit". In fact, that's sort of what I'm wondering: on a standard like that, would software still lose?

And as soon as we get away from a single chip and into a system, like for instance a motherboard, we start seeing the bug rate escalate. There are plenty of problems with motherboards - incompatibilities with hardware, timing problems, power supply problems. I'm not aware of ever having encountered a CPU bug, but I have certainly torn my hair out over motherboard bugs.

Reply to
Walter Harley

I reckon it's down to the ease of upgradeability. It's *easy* to change a piece of software on a PC: type away for a few minutes and hey presto, it's changed. IMO this has led to a dreadful culture of coders whereby they don't design, they just code, and are almost deliberately sloppy, because hey, it's ISP, right?

It's much harder to change analogue electronics, so it's more important to get it right first time, or at least close enough that {insert name of best tech here} can easily rework it. So designers think about transient states, startup, shutdown, fault conditions, emissions, immunity, temperature, humidity, PCB flexure, etc. *before* building anything.

Oops, I think I just said many coders are lazy and poorly schooled. That would explain why so much code is crap. Revenge of the bell curve. I'm sure it has nothing whatsoever to do with the fact that it's a lot cheaper to train a student to play with a PC than to be an engineer, but they pay similar fees...

Programmable logic does indeed allow for ratshit programming. I think the trick is to not hire ratshit programmers.

Look at the number of screw-ups Intel makes versus Micro$oft. The hardware guys are winning hands down.

Cheers Terry

Reply to
Terry Given

Most of the bugs in a complex microprocessor turn out to be in the microcode, which is programming.

John

Reply to
John Larkin

Non-user-accessible words of 1's and 0's, stored in a distinct addressable memory, that sequence ALU and control hardware to implement a higher-level, visible instruction set, where one or more sequential microinstructions actually execute a user-level instruction.

Something like that. Writing microcode is programming. Designing a RISC machine is not.

I agree that non-microcoded processors rarely have microcode bugs. But then, they rarely have bugs.

John

Reply to
John Larkin

1) Not all complex microprocessors are heavily microcoded[*].
2) Most bugs in , at least, aren't in microcode.
3) Some bugs in are fixed in microcode.
4) Most bugs are in unexpected corner cases and exceptions.

[*] What is your definition of microcode? ;-)
--
  Keith
Reply to
Keith Williams

OK. I like that definition, but it doesn't include horizontal microcode, which certainly is microcode. And...

What about the engineer who writes 1's and 0's that are then compiled into gates?

LOL! There are processors that fall in between purely microcoded and 100% hardwired. Even those that are 100% hardwired have bugs. I've worked on both, and there are a bunch of us who still have jobs hunting such bugs. ;-)
--
  Keith
Reply to
Keith Williams

In the old 6502, a popular copy-protection technique was to use something like LDA ($3FF),X, which was supposed to load the accumulator from a pointer in 400:3FF, offset by the X register... but which really used 300:3FF for the pointer!

Not at all surprising when you think about it, but it definitely qualifies as one of those "corner case" bugs.

Reply to
Joel Kolstad
