Re: Intel details future Larrabee graphics chip

There is no such thing as bug-free HDL. The bug density is just usually lower, especially in ASICs. The main reason for that is more thorough testing of the code, because respins of a chip are slow and expensive. In FPGAs, where you can always push an update, the bug densities are much higher in the beginning.

--Kim

Reply to
Kim Enkovaara

At least at the high end of embedded systems, processor updates and model changes are quite frequent. The lifetimes of processors and their peripherals (especially DRAM memories) are becoming shorter all the time. The code has to be portable and easily adaptable to different platforms.

C is very portable in embedded systems, as far as I have seen. Some very minimal processors have weird compilers, but the bigger processors usually have gcc support, and the commercial compilers support C in much the same way gcc does.

High-end embedded systems can easily contain 10Mloc of code, and that amount is needed to support all the required features.

--Kim

Reply to
Kim Enkovaara

And at the very low end, changes to completely different processors are also very common. If someone comes up with a micro that costs 8.4 cents and replaces a part that costs 8.5 cents, that's a saving of $16,800 per week at a production rate of 100,000 units per hour. After a while you get the attitude of "ho hum, another assembly language instruction set."
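(Spelling out the arithmetic, assuming round-the-clock production:)

    0.1 cent/unit saved x 100,000 units/hour x 168 hours/week
      = 1,680,000 cents/week
      = $16,800/week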

As for "asking for bugs", I find that working with masked rom parts with a big setup fee and a minimum order of 10,000 parts clarifies the mind quite nicely.

--
Guy Macon
Reply to
Guy Macon
+---------------
| As for "asking for bugs", I find that working with masked rom
| parts with a big setup fee and a minimum order of 10,000 parts
| clarifies the mind quite nicely.
+---------------

Yup! In the early days of DCA[1], the EEPROMs we used in our remote concentrators were still *very* expensive, and there were several times where if we'd had to send out replacements to all the units in the field[2] we simply wouldn't have been able to make the next payroll!! ;-} So we *had* to make very, very sure that we didn't have any bugs. And, yes, we were writing exclusively in PDP-8 & Z-80 assembler.[3]

-Rob

[1] Digital Communications Associates in Atlanta, not the ".gov" one.
[2] You have to send out the new ones before anyone will stop using the production equipment long enough to send you back the old ones for re-programming!
[3] Well, we actually wrote BLISS-like pseudo-code (which got left in the comments) and then "hand-compiled" it to assembler. But both the BLISS pseudo-code and the assembler got line-by-line reviews by multiple people. Expensive, but quite reliable.

----- Rob Warnock

627 26th Avenue San Mateo, CA 94403 (650)572-2607
Reply to
Rob Warnock

Assembly? You don't program musical greeting cards and blinking sneakers in Python?

John

Reply to
John Larkin

Never? Every FPGA on the planet has bugs?

For some reason, our shipped products rarely have bugs in FPGAs or in embedded code. But development-time bugs are far fewer in FPGAs than in uP code. One recent product has about 7K lines of VHDL and 7K lines of assembly, and I'd estimate the initial-coding bug level to be 10 or maybe 20:1 in favor of the VHDL.

Most of the test instruments I've bought lately seem to have superb hardware and flakey firmware.

I've got a small software project going now to write a new material control/BOM/parts database system. The way I've found to keep the bugs under reasonable control is to use one programmer, one programmer supervisor, and four testers.

John

Reply to
John Larkin

On a sunny day (Mon, 11 Aug 2008 07:45:08 -0700) it happened John Larkin wrote in :

PostgreSQL with phpPgAdmin as frontend here, web based. What bugs? Yes, the bugs in my SQL :-)

Reply to
Jan Panteltje

A single database file, of fixed-length records, programmed in PowerBasic, direct linear search to look things up. Fast as hell and bulletproof.
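Purely as an illustration of that architecture, here is a minimal sketch of a fixed-length-record file with a direct linear search; the record layout below is an assumption made up for the example, and the real program is PowerBasic rather than C:

    /*
     * Sketch of a fixed-length-record parts file searched linearly.
     * The layout (part number, description, quantity) is assumed for
     * illustration only.
     */
    #include <stdio.h>
    #include <string.h>

    struct part_rec {
        char part_no[16];     /* fixed width, NUL padded */
        char descr[48];
        long qty_on_hand;
    };

    /* Linear search: read fixed-size records until the part number matches. */
    static int find_part(const char *path, const char *part_no,
                         struct part_rec *out)
    {
        FILE *f = fopen(path, "rb");
        if (!f)
            return 0;
        while (fread(out, sizeof *out, 1, f) == 1) {
            if (strncmp(out->part_no, part_no, sizeof out->part_no) == 0) {
                fclose(f);
                return 1;                 /* found */
            }
        }
        fclose(f);
        return 0;                         /* not found */
    }

    int main(void)
    {
        struct part_rec r;
        if (find_part("parts.dat", "R-1002", &r))
            printf("%.16s  qty %ld\n", r.part_no, r.qty_on_hand);
        else
            puts("R-1002 not found");
        return 0;
    }

With records this small, scanning even tens of thousands of parts sequentially is effectively instant on modern hardware, which presumably is a large part of why the simple approach holds up.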

We've been using my old DOS version for about 10 years, and never corrupted or lost anything. The new windows-y version is a lot spiffier, but has the same architecture.

I may decide to sell it. It's designed specifically to manage materials and BOMs for a small electronics manufacturer. We looked at some commercial packages, Parts&Vendors and such, and decided to upgrade our old DOS thing instead. Again, simplicity has its virtues.

John

Reply to
John Larkin

On a sunny day (Mon, 11 Aug 2008 10:05:01 -0700) it happened John Larkin wrote in :

I agree that for one specific case it is faster and simpler. But using PostgreSQL as the database, not only is it free, it has a huge user base, so any questions I asked in the relevant newsgroups were immediately answered.

phpPgAdmin is cool; it practically constructs the database for you, all you have to do is specify what fields, and what type of fields, you want. I use it for everything, and can access it over the web from everywhere, even for my DVD collection. You generate some 'views', the ones you often need, and one mouse click shows what you looked for.

Backup is easy too: it takes only a few minutes to back up the whole postgres database (company administration, projects, other stuff..) just from the command line. You only back up the data, so it is very compact.

The only programming involved is the SQL to generate the views, or to do a search. Yes, yet another language to learn, but a universal one. And the total cost: zero, open source.


One piece of advice: if you use phpPgAdmin, put it in a non-standard directory on the web server; there have been many illegal attempts to access it in its standard directories on my machine. If you put it in /usr/local/httpd/htdocs/if_you_guess_this_then_play_the_lottery_more_often/index.php then they likely won't find it. Now I am really curious whether the suckers will try that directory :-)

DOS = Denial Of Service ??? LOL. Even abbreviations change meaning over time.

Reply to
Jan Panteltje

Actually, I use the BEST programming language.

BEST is a programming language that I developed to answer the frequently asked question "Which programming language is best?" once and for all.

BEST is a Befunge-93[2] pseudocompiler written in Malbolge[3][6] with library calls to routines written in Microsoft[4] Visual BogusFORTH+++[7] (!Kung edition)[9] that invoke various trinary functions written in[5] Reverse Polish Whitespace (for clarity).[1]

References:
[1] formatting link
[2] formatting link
[3] formatting link
[4] formatting link
[5] If I gave you the URL for [5] I would have to kill you.
[7] formatting link
[6] ftp://ftp.freebsd.org/pub/FreeBSD/ports/distfiles/malbolge.tar.gz
[^] formatting link
[7] formatting link
[Q] formatting link
[9] formatting link
[7] formatting link

--
Guy Macon
Reply to
Guy Macon

If the chip is nontrivial I would say that there is something hiding inside it. Coders are not perfect, and neither are the tests. The tool flows have bugs too; for example, I have debugged many synthesizer bugs in the past. Asynchronous interfaces are also a good place to find bugs that might show up only when the manufacturer updates the FPGA process, etc.

I have seen bugs found in chips that have been in real use for 10+ years; some weird condition just causes a bug to appear out of thin air.

One explanation is that in HDL coding it is normal to do module-level and top-level testing of the code in a simulator. For software, module testing is not normal practice, and the tools for it are quite recent.
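For comparison, a minimal sketch of what software-side module testing can look like, using nothing but assert() from the standard C library; parity8() is a made-up module under test, not code from any project mentioned here:

    /*
     * Minimal module test: directed cases plus one property check,
     * roughly analogous to the vectors an HDL module testbench would sweep.
     */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Module under test: parity of one byte (1 if an odd number of bits are set). */
    static int parity8(uint8_t b)
    {
        int p = 0;
        while (b) {
            p ^= b & 1;
            b >>= 1;
        }
        return p;
    }

    int main(void)
    {
        /* Directed cases with known answers. */
        assert(parity8(0x00) == 0);
        assert(parity8(0x01) == 1);
        assert(parity8(0x03) == 0);
        assert(parity8(0xFE) == 1);
        assert(parity8(0xFF) == 0);

        /* Property check over the whole input space:
           flipping one bit must flip the parity. */
        for (int v = 0; v < 256; v++)
            assert((parity8((uint8_t)v) ^ parity8((uint8_t)(v ^ 0x80))) == 1);

        puts("parity8: all tests pass");
        return 0;
    }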

7k lines of VHDL is quite a small design. I usually work with 10-100x bigger VHDL/Verilog codebases.

In more complex platforms HW is ~10% of the effort, so it is no surprise that the bigger part of the development also has more of the bugs. Software is the critical path; the HW side usually has time to do more testing and has more lab time to sort out bugs.

--Kim

Reply to
Kim Enkovaara

Whilst not disputing that hardware folks probably come to the table with a rather different attitude to bugs...

Given a problem to be solved by a combination of hardware and software, the hardware is usually used *merely* (ducks for cover) to provide a set of primitives which the software will then stick together in a more complex way. The software would then be solving a more complex or less well specified problem and might reasonably be expected to have more of the bugs.

A second effect is that any deficiencies in the chosen set of primitives are likely to lead to bugs that get blamed on the software rather than the design. Even if the correct attribution is made, software usually *is* easier to change (and later in the development schedule), so managers decide to "fix it in software".

A third effect is that *application* software has so many dependencies on code written by people you have never met and who have no interest in your welfare. If we are to compare bug rates in hard/firm/soft-ware then we'd need to direct our gaze at things like interrupt handlers and the lowest levels of device drivers.

A fourth effect is that customer expectations are different (and software often *can* be changed after purchase) and so the commercially sensible decision is to be first to market and accept a non-zero level of bugs.

Reply to
Ken Hagan

You will just love Unlambda then... ;-)

Simplified, Turing complete, but devilishly hard to do anything in.

Regards, Martin Brown

Reply to
Martin Brown

Programmers tend to whip out a lot of code fast, get it to compile by trial-and-error, then run it and look for bugs; or preferably, let somebody else look for the bugs. Most C code is virtually unreadable, uncommented and undocumented, or documented wrong. As a design methodology, that's crap.

I'd fire any hardware designer who worked this way. Any programmer, too.

John

Reply to
John Larkin

On a sunny day (Tue, 12 Aug 2008 09:23:30 -0700) it happened John Larkin wrote in :

When I had the TV shop, some guy told me: Go look at the competition, see how they do it. And my reply was, and still is: Let the competition look at me :-)

So HOW do you think you will learn to program better by reading about how it is done wrong? You will only learn to program better by looking at great code, and by learning from those coders. This is why open source is so incredibly important. Instead of every new programmer making all the mistakes over again while writing some application, you can build on and improve previous work.

This is why Linux has taken such an incredible flight: thousands and thousands of individuals contributing, like one big brain.

Now put that against those few brainwashed newbie ex-student programmers working in and for Redmond.

You know, I am grateful if somebody takes the trouble to point out a flaw in my code. Sure, there is that moment of 'aaarg, what was this?' (with code maybe years old, you need to look it up and regain your understanding of the whole flow), but the code lives on; it will outlive you, it will teach new people, and they will improve it further.

What a difference with the sad mouse-clickers in Redmond, who will be obsolete with the next feature limited version of their OS.

R
Reply to
Jan Panteltje

VHDL is a strongly typed language similar in constructs to Ada. Any of the Pascal or Algol derived languages would have roughly the same characteristics. It is much much harder to get random code to compile - you have to make your intentions clear or the compiler just spits out error messages.
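As a hypothetical C illustration of the difference (not from the thread): both statements below typically compile without complaint under a default gcc or clang invocation, yet neither does what it appears to say, whereas the equivalent type mixing in VHDL would be rejected until explicit conversions were written.

    #include <stdio.h>

    int main(void)
    {
        /* Implicit double -> int conversion silently truncates:
           0.29 is not exactly representable, so cents ends up as 28. */
        int cents = 0.29 * 100;

        /* Here -1 is implicitly converted to the unsigned type of sizeof,
           i.e. to a huge positive value, so the "impossible" branch runs. */
        if (-1 < sizeof cents)
            printf("expected branch\n");
        else
            printf("cents = %d, and -1 is not less than %zu here\n",
                   cents, sizeof cents);
        return 0;
    }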

It might be an interesting academic study to see how the error rates of hardware engineers using VHDL compare with those using Verilog tools for the same sorts of design. The latter I believe is less strongly typed.

Almost invariably. Software is the last step in the chain. If anything gets squeezed in the schedule it is always the software that loses out. We once fixed a bus timing design error by using an indexed form of addressing because the extra cycle delay hid the hardware fault.

At last something that we can agree on. Too much software is not properly designed. Modern compilers these days have a fair amount of static testing built in, but nothing like enough to solve the problem of below average coders and poor or no specifications or design documents.

It will draw fire from the "C is always wonderful" crowd, but I expect you can live with that. And it isn't strictly the C language that is at fault; there is plenty of good, well documented software written in C.

And there are languages from the Pascal, Algol and Ada families that are intrinsically better suited for robust engineering development. But the market chose C/C++. Ada is a bit too top heavy for my taste.

Donald Knuth chose, for exactly this reason, to write TeX in a subset of Pascal integrated with its documentation, in a system he called WEB, and it is virtually bug free.


He is an exceptional software guru though.

Part of the problem is that in civil, mechanical or even electronic engineering you can draw a theoretical abstract diagram entirely different to the final implementation that the average lay person can make a stab at understanding. If you try this for software most people just switch off. Tools that show a walkthrough of proposed screens go some way to improving things a bit.

There are notations that will give lay readers an overview of dataflows and dependencies so that the interested party can understand what is planned. All too often they fail to read these documents so the faults live on until final test.

Code reuse from the paste buffer is a particularly bad problem and creates spectacular maintenance traps. It is really terrible in environments where productivity is measured in KLOC/month.

Agreed. I have lost count of the number of places that have looked awfully hurt when I ran static testing software against their production codebase and found lots of latent bugs, unused variables and unreachable dead wood code. I reckon some C programmers take it as a challenge to add random casts to parameters until the code compiles.
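A made-up illustration of that "add casts until it compiles" anti-pattern: without the cast the compiler complains about the incompatible pointer type; with it the code builds, but the underlying bug survives.

    /*
     * The cast silences the compiler, yet store_id() still writes a long
     * (8 bytes on most 64-bit platforms) through the address of a 4-byte
     * int, clobbering whatever happens to live next to it.
     */
    #include <stdio.h>

    static void store_id(long *slot, long id)
    {
        *slot = id;
    }

    int main(void)
    {
        int id_slot = 0;
        store_id((long *)&id_slot, 123456L);   /* cast keeps the bug, hides the warning */
        printf("id_slot = %d\n", id_slot);
        return 0;
    }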

Regards, Martin Brown

Reply to
Martin Brown


As I've said, my programs work fine. I have no immediate need to "learn to program better", any more than I need to learn to solder better. But I am fascinated by technology disasters of all kinds, and modern "computer science" has given us lots of juicy examples.

And studying failure is a valuable technique for avoiding same.

I recently evaluated three general-purpose drawing programs, one public-domain and two commercial. All three were horrors, undrivable or buggy as sin. Open Office Draw looks OK, if you don't mind the 127 Mbyte download.

John

Reply to
John Larkin

On a sunny day (Tue, 12 Aug 2008 10:24:55 -0700) it happened John Larkin wrote in :


Sure.

Be careful; one thing that sort of did hit me was this posting:

formatting link

(I have folded the URL as the server complained last time it was too long).

The poor guy had a defective memory module in his PC; he found all programs sucked, and all his work came to nothing. Anyway, after somebody pointed him to a memory test program he finally replaced the defective memory, and then I advised him to re-install his stuff, which he did not immediately do.. anyway, the problems later disappeared.

Many modern programs (like drawing programs; you are not specific, so what do I know) have complex interfaces and need considerable time to get used to, and you need time even to learn where to find the functions, etc. In the same way, someone who is used to one make of video editor may flip out completely at the user interface of one made by some other company.

For some of those programs you may need MONTHS of training before you can even claim to be a user.

A simple example: Blender. I have had it on my system for, well, more than 8 years, from its beginning. It can do amazing things; I even made leaders for VHS productions with it. It gets more complex all the time, and learning to use it takes more than a few days, not even counting your artistic capabilities. So I am not buying "That software sucks." If it crashes, OK, that is bad.

Anyway, you jump from one issue to the other. I'd say: relax, maybe the sun even shines over there; here it is rainy. Do not get all worked up over soft you do not like.

In the very beginning of Linux (I started with SLS Linux, where SLS stood for 'Soft Landing Systems'), if I needed an application it often simply was not there, or was not the way I wanted it, so I wrote it myself. I am sure that deep in your heart you _do_ like programming; I have seen your enthusiasm for your jump tables in asm. So if it must be, start writing your own drawing program. I mentioned Blender; that is how Blender started, those guys at NaN (Not A Number) wanted something better, and they acted. Otherwise it is all just blah blah and does not help the world; a world of complainers is no good to anybody.

Reply to
Jan Panteltje


By learning about mistakes that happened and about their consequences. I learn a lot when I dissect a design tossed over my fence with the request to find a solution, along with comments such as "we can't reliably produce this in quantities" or "we have this, that and the other problem with it in the field".

[...]
--
Regards, Joerg

http://www.analogconsultants.com/

"gmail" domain blocked because of excessive spam.
Use another domain or send PM.
Reply to
Joerg

I do! Thanks!

I am also rather fond of LOLPython.


(This sure beats talking about the Larrabee graphics chip...)

--
Guy Macon
Reply to
Guy Macon
