Re: Intel details future Larrabee graphics chip

Well, first of all, Verilog has far fewer types. There are only bits, bit vectors, 32-bit integers, and floats. You can't use the latter for synthesis; usually only bits and bit vectors are used as register data types.

My experience is that people make far fewer errors in Verilog, because it's all straightforward and there aren't many traps to fall into. E.g. a typical VHDL error is to define an integer subrange from 0..F instead of a 4-bit vector, and then forget to mask the add, so that it doesn't wrap around but fails instead.

My opinion towards good tools:

  • Straightforward operations
  • Simple semantics
  • Don't offer several choices where one is sufficient
  • Restrict people to a certain common style where the tool allows choices
--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
Reply to
Bernd Paysan

Come on, when CPUs are almost free, dedicated IO CPUs are still a lot cheaper. You can have more of them in the same die area. They might still have the same basic instruction set, just with different performance tradeoff.

You might put a few fast cores on the die, which give you maximum performance for single-threaded applications. Then, you put a number of slower cores on it, for maximum multi-threaded performance. And then, another even slower and simpler type of core for IO.

When cores are cheap, it makes sense to build them for their purpose.

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
Reply to
Bernd Paysan


Can you give examples of such different interpretations? There are a few areas that people disagree about, but it often doesn't matter much.

Interestingly, most code is widely portable despite most programmers having little understanding of portability and violating the C standard in almost every respect.

Actually you don't need any "autoconfiguring" in C. Much of that was needed due to badly broken non-conformant Unix compilers. I do see such terrible messes every now and again, with people declaring builtin functions incorrectly because otherwise "it wouldn't compile on compiler X"...

Properly sized types like int32_t have finally been standardized, so the only configuration you need is the selection between the various extensions that have not yet been standardized (although things like __declspec are widely accepted nowadays).
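A minimal sketch of what the C99 fixed-width types look like in practice (my example, not Wilco's):

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int32_t  sample = -123456;     /* exactly 32 bits on every conforming platform */
    uint16_t flags  = 0xBEEF;      /* exactly 16 bits, unsigned */

    /* PRId32/PRIu16 expand to the right printf format on any platform. */
    printf("sample=%" PRId32 " flags=%" PRIu16 "\n", sample, flags);
    return 0;
}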

I've done a lot of porting and know most of the problems. It's not nearly as bad as you claim. Many "porting" issues are actually caused by bugs and limitations in the underlying OS. I suggest that your experience is partly colored by the fact that people ask you as a last resort.

Wilco

Reply to
Wilco Dijkstra

Right now, we have about 4800 different parts in stock, and about 600 parts lists (BOMs). Why use SQL on that? Why make a monster out of a simple problem? Searches are so fast you can't see them happen, and there's no database maintenance, no linked lists to get tangled, no index files.

John

Reply to
John Larkin

In article , "Wilco Dijkstra" writes:
|>
|> > |> It's certainly true the C standard is one of the worst specified. However most
|> > |> compiler writers agree about the major omissions and platforms have ABIs that
|> > |> specify everything else needed for binary compatibility (that includes features
|> > |> like volatile, bitfield details etc). So things are not as bad in reality.
|> >

|> > Er, no. I have a LOT of experience with serious code porting, and
|> > am used as an expert of last resort. Most niches have their own
|> > interpretations of C, but none of them use the same ones, and only
|> > programmers with a very wide experience can write portable code.
|>
|> Can you give examples of such different interpretations? There are a
|> few areas that people disagree about, but it often doesn't matter much.

It does as soon as you switch on serious optimisation, or use a CPU with unusual characteristics; both are common in HPC and rare outside it. Note that compilers like gcc do not have any options that count as serious optimisation.

I could send you my Objects diatribe, unless you already have it, which describes one aspect. You can also add anything involving sequence points (including functions in the library that may be implemented as macros), anything involving alignment, when a library function must return an error (if ever) and when it is allowed to flag no error and go bananas. And more.
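A small illustration of the sequence-point problem (my sketch, not from Nick's document); both lines compile silently on most compilers yet are undefined behaviour per the standard:

#include <stdio.h>

int main(void)
{
    int i = 1;
    int a[4] = {0};

    /* Undefined: i is read and modified with no intervening sequence point. */
    a[i] = i++;

    /* Also undefined: i is modified twice between sequence points. */
    /* i = i++ + 1; */

    printf("%d %d %d\n", a[0], a[1], i);  /* result may differ by compiler */
    return 0;
}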

|> Interestingly most code is widely portable despite most programmers
|> having little understanding about portability and violating the C standard in
|> almost every respect.

That is completely wrong, as you will discover if you ever need to port to a system that isn't just a variant of one you are familiar with. Perhaps 1% of even the better 'public domain' sources will compile and run on such systems - I got a lot of messages from people flabbergasted that my C did.

|> Actually you don't need any "autoconfiguring" in C. Much of that was
|> needed due to badly broken non-conformant Unix compilers. I do see
|> such terrible mess every now and again, with people declaring builtin
|> functions incorrectly as otherwise "it wouldn't compile on compiler X"...

Many of those are actually defects in the standard, if you look more closely.

|> Properly sized types like int32_t have finally been standardized, so the
|> only configuration you need is the selection between the various extensions
|> that have not yet been standardized (although things like __declspec are
|> widely accepted nowadays).

"Properly sized types like int32_t", forsooth! Those abominations are precisely the wrong way to achieve portability over a wide range of systems or over the long term. I shall be dead and buried when the 64->128 change hits, but people will discover their error then, oh, yes, they will!

int32_t should be used ONLY for external interfaces, and it doesn't help with them because it doesn't specify the endianness or overflow handling. And not all interfaces are the same. All internal types should be selected as to their function - e.g. array indices, file pointers, hash code values or whatever - so that they will match the system's properties. As in Fortran, K&R C etc.
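A sketch of that point (my illustration, not Nick's): int32_t alone does not pin down an external format, since byte order still has to be handled explicitly. Reading a little-endian 32-bit field portably, byte by byte:

#include <stdint.h>

static uint32_t read_le32(const unsigned char *p)
{
    /* Works regardless of the host's own endianness or word size. */
    return (uint32_t)p[0]
         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16)
         | ((uint32_t)p[3] << 24);
}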

|> > A simple question: have you ever ported a significant amount of
|> > code (say, > 250,000 lines in > 10 independent programs written
|> > by people you have no contact with) to a system with a conforming
|> > C system, based on different concepts to anything the authors
|> > were familiar with? I have.
|>
|> I've done a lot of porting and know most of the problems. It's not nearly
|> as bad as you claim. Many "porting" issues are actually caused by bugs
|> and limitations in the underlying OS. I suggest that your experience is
|> partly colored by the fact that people ask you as a last resort.

Partly, yes. But I am pretty certain that my experience is a lot wider than yours. I really do mean different CONCEPTS - start with IBM MVS and move on to a Hitachi SR2201, just during the C era.

Note that I was involved in both the C89 and C99 standardisation process; and the BSI didn't vote "no" for no good reason.

Regards, Nick Maclaren.

Reply to
Nick Maclaren

On a sunny day (13 Aug 2008 14:32:44 GMT) it happened snipped-for-privacy@cus.cam.ac.uk (Nick Maclaren) wrote in :

I dare say you show cluelessness.

No, int32_t and friends became NECESSARY when the 32-to-64-bit wave hit. A simple example, an audio wave header spec:

#ifndef _WAVE_HEADER_H_
#define _WAVE_HEADER_H_

typedef struct {                /* header for WAV-Files */
    uint8_t  main_chunk[4];     /* 'RIFF' */
    uint32_t length;            /* length of file */
    uint8_t  chunk_type[4];     /* 'WAVE' */
    uint8_t  sub_chunk[4];      /* 'fmt' */
    uint32_t length_chunk;      /* length sub_chunk, always 16 bytes */
    uint16_t format;            /* always 1 = PCM-Code */
    uint16_t modus;             /* 1 = Mono, 2 = Stereo */
    uint32_t sample_fq;         /* Sample Freq */
    uint32_t byte_p_sec;        /* Data per sec */
    uint16_t byte_p_spl;        /* bytes per sample, 1=8 bit, 2=16 bit (mono)
                                   2=8 bit, 4=16 bit (stereo) */
    uint16_t bit_p_spl;         /* bits per sample, 8, 12, 16 */
    uint8_t  data_chunk[4];     /* 'data' */
    uint32_t data_length;       /* length of data */
} wave_header;

#endif /* _WAVE_HEADER_H_ */

Now in the OLD version it used 'int', and when 'int' changed size again (in bits), of course the whole structure was different. I am so happy with uint8_t; if you are closer to hardware you will understand why. You may claim that that is an 'external' interface, so be it, but it is nice to be constantly aware of the width of variables.
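As a small addition of mine (not in Jan's post): with fixed-width fields you can even check the expected on-disk size at compile time, assuming the compiler inserts no padding (all the fields above are naturally aligned, so common ABIs add none):

#include <assert.h>   /* C11 static_assert */

/* The canonical RIFF/WAVE header is 44 bytes; this catches unexpected
   padding or a field that silently changed width. */
static_assert(sizeof(wave_header) == 44, "wave_header has unexpected size");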

The _t types are great, as it is exactly that which creates portability. From libc.info:

- Function: int fseeko (FILE *STREAM, off_t OFFSET, int WHENCE)
     This function is similar to `fseek' but it corrects a problem with `fseek' in a system with POSIX types. Using a value of type `long int' for the offset is not compatible with POSIX. `fseeko' uses the correct type `off_t' for the OFFSET parameter.

So, when we move to 128 bits (if ever), at least my programs should still work.
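A quick sketch of that usage (mine, not Jan's): fseeko with off_t stays correct even when file offsets outgrow 'long int'. Note fseeko is POSIX, not ISO C, and on 32-bit glibc systems you need _FILE_OFFSET_BITS=64 to get a 64-bit off_t:

#define _FILE_OFFSET_BITS 64   /* before any includes, to widen off_t */
#include <stdio.h>
#include <sys/types.h>

int skip_data_chunk(FILE *f, off_t data_length)
{
    /* Seek past the sample data, however wide off_t happens to be. */
    return fseeko(f, data_length, SEEK_CUR);
}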

You know, I do not like people being pedantic, maybe because they often are right and make your code look silly or bad. I try to code in the lowest-common-denominator subset of C, avoiding exotic constructs. That goes a long way; so far most compilers swallow it, but ultimately libc with libc.info is my reference. The whole world will soon move to Unix and gcc anyway, even John Larkin will learn C...

This posting contains forward looking statements that may or may not be true.

Reply to
Jan Panteltje

Way to go. Although Access could also do that with next to no programming effort. Just set up the fields, some practical queries, the reports you regularly need, done.

--
Regards, Joerg

http://www.analogconsultants.com/

"gmail" domain blocked because of excessive spam.
Use another domain or send PM.
Reply to
Joerg


Which particular loop optimizations do you mean? I worked on a compiler which did advanced HPC loop optimizations. I did find a lot of bugs in the optimizations but none had anything to do with the interpretation of the C standard. Do you have an example?

You have to give more specific examples of differences of interpretation. I'd like to hear about failures of real software as a direct result of these differences. I haven't seen any in over 12 years of compiler design besides obviously broken compilers.


I bet that most code will compile and run without too much trouble. C doesn't allow that much variation in targets. And the variation it does allow (e.g. one's complement) is not something sane CPU designers would consider nowadays.

I did look closely at some of the issues at the time, but they had nothing to do with the standard, it was just working around broken compilers. There is also a lot of software around that blatantly assumes there is a directory with lots of headers but otherwise doesn't use any POSIX functions.

Not specifying the exact size of types is one of C's worst mistakes. Using sized types is the right way to achieve portability over a wide range of existing and future systems (including ones that have different register sizes). The change to 128-bit is not going to affect this software precisely because it already uses correctly sized types.

It's true that supercomputers of the past had wacky integer sizes and formats or only supported 64-bit int/double and nothing else. But these systems weren't designed to run off-the-shelf C, they were built to run FP code fast (ie. Fortran, not C). In any case I'm pretty certain my experience applies to a much larger market than yours :-)

Wilco

Reply to
Wilco Dijkstra

[...]

IMHO, C is only as bad as the programmer who uses it... BTW, what do you think of C++? The Air Force seems to like it a lot for systems programming:

formatting link

I also believe that systems software in the Mars Lander was created in C.

Reply to
Chris M. Thomasson

I agree C is a bit easier to learn syntactically and so attracts a larger share of bad programmers. But in terms of types there isn't a huge difference - you can't assign incompatible pointers in C without a cast. One issue is that compilers don't give a warning when casts are likely incorrect (such as casting to a type with higher alignment).
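An illustration of that last point (mine, not Wilco's): a cast most compilers accept without a warning, even though it can break on targets with strict alignment requirements:

#include <stdint.h>

uint32_t first_word(const char *buf)
{
    /* buf may be unaligned (e.g. the middle of a network packet).
       Casting char* to uint32_t* raises the assumed alignment from 1 to 4,
       which is undefined if buf is not 4-byte aligned and faults on several
       CPUs.  Few compilers warn here. */
    return *(const uint32_t *)buf;
}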

I doubt it. I've worked for many years on huge applications which use complex data structures with lots of pointers. I've seen very few pointer related failures despite using specialized memory allocators and all kinds of complex pointer casting, unions etc. Most memory failures are null pointer accesses due to simple mistakes.

There is certainly a good case for making pointers and arrays more distinct in C to clear up the confusion between them and allow for bounds checking.

I guess you don't like pointers then :-)

Wilco

Reply to
Wilco Dijkstra

We need to do stuff like plan production: enter a list of top assemblies and quantities; look up all the top-level BOMs and break them down; break down all the subassemblies; come up with a total piece-parts count; consider what's in stock, min stock quantities, stuff on order, and decide what we need to buy. And other things that are specific to electronics manufacturing. Most of the commercial packages just don't get it, but they do want nasty license keys, per-seat charges, and annual maintenance costs.
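A rough sketch of the kind of BOM explosion described above, with hypothetical types of my own (the real MAX program certainly does not look like this): each assembly is broken down recursively and the piece-part demand is accumulated.

#define MAX_PARTS 4800

struct bom_line { int part_index; int qty; };

struct part {
    const char      *number;
    int              n_lines;   /* 0 for a piece part */
    struct bom_line *lines;     /* sub-assembly contents */
};

/* Accumulate total piece-part demand for 'qty' of part 'idx'. */
void explode(const struct part *parts, int idx, long qty, long total[MAX_PARTS])
{
    const struct part *p = &parts[idx];
    if (p->n_lines == 0) {      /* leaf: a stocked piece part */
        total[idx] += qty;
        return;
    }
    for (int i = 0; i < p->n_lines; i++)
        explode(parts, p->lines[i].part_index,
                qty * p->lines[i].qty, total);
}

The totals would then be compared against stock, minimum stock quantities and open orders to produce the buy list.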

I can toss our whole system onto a laptop in 30 seconds and take it with me on a field trip. Just copy the folder and run the single MAX.EXE file.

John

Reply to
John Larkin

Exactly. You can write bad software in any language. Most Perl I've seen is far worse than average C.

C++ is a lot better than C if you stick to a reasonable subset. The more contentious parts are templates (mainly STL), exceptions and multiple inheritance.

C/C++ are widely used in the embedded world, including in safety-critical systems. Object-oriented C++ is even used in most hard drives (they went from 100% assembler to 99% C++ without losing performance) and other real-time systems.

Some 13 years ago I wrote software for the Apache helicopter, all in C. What shocked me was not the language used at all, but the lack of quality of the hundreds of pages of specifications and the programming capabilities of some colleagues. Getting it all working wasn't easy due to the complex and time-consuming ISO process used. One of the funny moments was when I spotted 5 mistakes in a 20-line function that calculated the log2 of an integer during a code review. I hope that wasn't their average bug rate!
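For comparison (my own sketch, obviously not the reviewed Apache code), an integer log2 that is hard to get wrong:

/* Returns the position of the highest set bit, or -1 for zero. */
int ilog2(unsigned int x)
{
    int r = -1;
    while (x != 0) {
        x >>= 1;
        r++;
    }
    return r;
}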

Wilco

Reply to
Wilco Dijkstra

Hardware design keeps moving up in abstraction level too. I used to design opamps and voltage regulators out of transistors. Now I'm dropping sixteen isolated delta-sigma ADCs around an FPGA that talks to a 32-bit processor. That's sort of equivalent to building a software system using all sorts of other people's subroutines. We did just such a board recently, 16 channels of analog acquisition, from thermocouples to +-250 volt input ranges, all the standard thermocouple lookup tables, RTD reference junctions, built-in self-test, VME interface. No breadboards, no prototype. The board has 1100 parts and the first one worked.

Hardware design works better than software. One reason is that the component interfaces are better defined. Another reason is that we check our work - each other's work - very carefully before we ever try to build it, much less run it. Most engineering - civil, electrical, mechanical, aerospace - works that way. People don't hack jet engine cores and throw them on a test stand to see what blows up.

John

Reply to
John Larkin

My designs seem to go the other way. Yeah, also lots of delta-sigmas but even more transistor-level designs. The main reason is that they often can't find anyone else to do it, so it all lands on my pile.

But people do hack AGW "science" :-) SCNR.

--
Regards, Joerg

http://www.analogconsultants.com/

"gmail" domain blocked because of excessive spam.
Use another domain or send PM.
Reply to
Joerg

We still use discrete parts here and there, especially for the fast stuff, and high-power things. A board is typically a mix of high-abstraction parts - big complex chips - and a bunch of simpler stuff. "Glue logic", which actually does logic, is rare nowadays.

Lots of opamps and precision resistors. One good resistor can cost more than an opamp.

Our stuff isn't as cost-sensitive as some of yours, so we don't mind using an opamp if it works a little better than a transistor. And our placement cost is high, so we like to minimize parts count.

People keep talking about analog programmable logic...

John

Reply to
John Larkin

That's what I ran into a lot: whenever something is made domestically in a western country, placement costs are through the roof. When I design circuits that will be produced on lines in Asia, I can adopt a whole different design philosophy, where replacing a $1 chip with 15 discrete jelly-bean parts makes a lot of sense.

With me it's the other way around. I am not allowed to talk about it ;-)

--
Regards, Joerg

http://www.analogconsultants.com/

"gmail" domain blocked because of excessive spam.
Use another domain or send PM.
Reply to
Joerg

In article , "Wilco Dijkstra" writes:
|>
|> > It does as soon as you switch on serious optimisation, or use a CPU
|> > with unusual characteristics; both are common in HPC and rare outside
|> > it. Note that compilers like gcc do not have any options that count
|> > as serious optimisation.
|>
|> Which particular loop optimizations do you mean? I worked on a compiler
|> which did advanced HPC loop optimizations. I did find a lot of bugs in the
|> optimizations but none had anything to do with the interpretation of the C
|> standard. Do you have an example?

I didn't say loop optimisations. But you could include any of the aliasing ambiguities (type-dependent and other), the sequence point ambiguities and so on. They are fairly well-known.
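An example of the type-based aliasing issue (my sketch, not from Nick's document): with strict aliasing enabled (gcc turns on -fstrict-aliasing at -O2), the compiler may assume *f and *i never refer to the same object, so this function need not return 2.0f:

float alias_bug(float *f, int *i)
{
    *f = 1.0f;
    *i = 0x40000000;     /* the bit pattern of 2.0f */
    return *f;           /* an optimiser may still return 1.0f */
}

/* Called as alias_bug(&x, (int *)&x) the behaviour is undefined, and
   different compilers and optimisation levels give different answers. */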

|> You have to give more specific examples of differences of interpretation.

As I said, I will send you my document if you like, which includes examples and explanations. Otherwise I suggest that you look at the archives of comp.std.c, which has dozens of examples. I don't have time to search my records for other examples for you.

|> I'd like to hear about failures of real software as a direct result of these
|> differences. I haven't seen any in over 12 years of compiler design besides
|> obviously broken compilers.

And I have seen hundreds. But I do know the C standards pretty well, and a lot of "obviously broken compilers" actually aren't.

|> I bet that most code will compile and run without too much trouble.
|> C doesn't allow that much variation in targets. And the variation it
|> does allow (eg. one-complement) is not something sane CPU
|> designers would consider nowadays.

The mind boggles. Have you READ the C standard?

|> Not specifying the exact size of types is one of C's worst mistakes.
|> Using sized types is the right way to achieve portability over a wide
|> range of existing and future systems (including ones that have different
|> register sizes). The change to 128-bit is not going to affect this software
|> precisely because it already uses correctly sized types.

On the contrary. Look, how many word size changes have you been through? Some of my code has been through about a dozen, in succession, often with NO changes. Code that screws 32 bits in will not be able to handle data that exceeds that.
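A sketch of the failure mode described here (my example, not Nick's): a size frozen at 32 bits stops working once the data outgrows it, while a type chosen for its function survives the word-size change.

#include <stddef.h>
#include <stdint.h>

/* Cannot even describe a buffer over 4 GiB, even on a 64-bit system: */
void clear32(char *buf, uint32_t len)
{
    for (uint32_t i = 0; i < len; i++) buf[i] = 0;
}

/* Tracks the machine's address range, whatever it is: */
void clear(char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) buf[i] = 0;
}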

You are making PRECISELY the mistake that was made by the people who coded the exact sizes of the IBM System/360 into their programs. They learnt better, but have been replaced by a new set of kiddies, determined to make the same old mistake :-(

Regards, Nick Maclaren.

Reply to
Nick Maclaren

I'd certainly be interested in the document. My email is above, just make the obvious edit.

More than that. I've implemented it. Have you?

It's only when you implement the standard that you realise many of the issues are irrelevant in practice. Take sequence points for example. They are not even modelled by most compilers, so whatever ambiguities there are, they simply cannot become an issue. Similarly various standards pedants are moaning about shifts not being portable, but they can never mention a compiler that fails to implement them as expected...

Btw, do you happen to know the reasoning behind signed left shifts being undefined while right shifts are implementation-defined?
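For reference (my sketch), this is what the standard says about the two cases:

#include <stdio.h>

int main(void)
{
    int neg = -8;

    /* Implementation-defined: arithmetic or logical right shift. */
    printf("%d\n", neg >> 1);       /* -4 on most machines */

    /* Undefined behaviour in C99/C11: left-shifting a negative value.
       Most compilers happen to produce -16, but they are not required to. */
    /* printf("%d\n", neg << 1); */
    return 0;
}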

It will work as long as the compiler supports a 32-bit type - which it will, of course. But in the infinitesimally small chance that it doesn't, why couldn't one emulate a 32-bit type, just as 32-bit systems emulate 64-bit types?

Actually various other languages support sized types and most software used them long before C99. In many cases it is essential for correctness (imagine writing 32 bits to a peripheral when it expects 16 bits etc). So you really have to come up with some extraordinary evidence to explain why you think sized types are fundamentally wrong.
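A small sketch of the peripheral case (the register address is made up, and this is my example, not Wilco's): a sized, volatile access guarantees a 16-bit write where a plain 'int' store would not.

#include <stdint.h>

#define UART_DATA ((volatile uint16_t *)0x40001000u)  /* hypothetical address */

void uart_send(uint16_t value)
{
    *UART_DATA = value;   /* exactly one 16-bit store to the device */
}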

Wilco

Reply to
Wilco Dijkstra

There are several things in play here. More and more instruments have ports for memory cards, USB memory sticks, USB printer ports; IOW, conventional UPnP-style hot plug.

Then we are increasingly using dynamic unit/core switch-out when a unit produces a detected error, even at the sub-chip level now.

Reply to
JosephKK

In real industry design flows things get more complicated, because almost all of the Verilog flows seem to suggest the use of lint-type tools (which actually do much more than just rudimentary language checks). Those tools do some of the checks that the VHDL type system does during compilation as part of the language.

--Kim

Reply to
Kim Enkovaara
