Re: Overview Of New Intel Core i7(Nehalem) Processor

Do you find many programmers who are skilled in numerical methods? I sure don't. We usually have to explain the actual algorithms to them.

John

Reply to
John Larkin

I'm one. You wouldn't have to explain much at all to me, including the hardware design, sensor systems, and physics. Just throw the schematics at me, point towards an interesting paper or two, and pretty much walk away. I'll do the rest, with only a few questions about a few details that may escape me from time to time. I also come up with novel approaches, as needed, where clients don't know the solution but suspect there "must be an easier way." I can cite quite a few, if you care to know details about that.

Honestly, I've only known perhaps two others like me. But I have a small circle and live in Oregon. You live in an area where there should be a lot more, I'd have guessed.

Jon

Reply to
Jon Kirwan

That's the attraction of functional languages. You don't need explicit parallelising operators; whenever a function call has multiple arguments, the argument expressions can be evaluated in parallel.
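
In practice GHC, at least, won't evaluate arguments speculatively on its own; you still give it a hint. A minimal sketch, assuming GHC with the parallel package (the workloads here are made up; the point is just that the two arguments are pure and independent):

  import Control.Parallel (par, pseq)

  -- Two independent, pure expressions; nothing in the language stops the
  -- runtime from evaluating them on different cores.
  expensiveA, expensiveB :: Integer
  expensiveA = sum [1 .. 50000000]
  expensiveB = sum [x * x | x <- [1 .. 20000000]]

  combine :: Integer -> Integer -> Integer
  combine a b = a + b

  -- `par` sparks the first argument while `pseq` forces the second, so both
  -- can be in flight before `combine` needs them.
  -- Build with: ghc -O2 -threaded Par.hs, run with: ./Par +RTS -N
  main :: IO ()
  main = print (expensiveA `par` (expensiveB `pseq` combine expensiveA expensiveB))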

Reply to
Nobody

formatting link

Reply to
MooseFET

On a sunny day (Sat, 13 Jun 2009 20:06:34 -0400) it happened Phil Hobbs wrote in :

Dear Dr Hobbs, I dunno if this will ever be a success, but it seems Intel is working hard to make the thing Nobody referred to a reality:

formatting link

It is still beta, but so what.

Reply to
Jan Panteltje

Under very restrictive assumptions, including near-zero and deterministic latency and noninteracting processes (i.e. embarrassingly parallelizable problems), it is possible to autoparallelize well. In a more general environment, such as a cluster, and on more general problems (e.g. database updates), good luck.
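
The embarrassingly parallel case really can be handled almost for free. A minimal sketch in Haskell using the parallel package; the per-element function here is invented, just something with no shared state and no ordering constraints between elements:

  import Control.Parallel.Strategies (parMap, rdeepseq)

  -- Invented per-element workload: no shared state, no communication,
  -- no required ordering -- the embarrassingly parallel case.
  shade :: Double -> Double
  shade x = sin x * cos (x * x) + sqrt (abs x)

  main :: IO ()
  main = do
    let xs  = [0, 0.001 .. 1000] :: [Double]
        out = parMap rdeepseq shade xs   -- one spark per element
    print (sum out)

Anything with coupling between the elements, or nondeterministic latency, is another story entirely.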

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal
ElectroOptical Innovations
55 Orchard Rd
Briarcliff Manor NY 10510
845-480-2058
hobbs at electrooptical dot net
http://electrooptical.net
Reply to
Phil Hobbs

That still requires explicit support from the programmer (which is almost inevitable in an imperative language).

I'm talking about languages using parallelism automatically.

Reply to
Nobody

On a sunny day (Sun, 14 Jun 2009 16:31:20 +0100) it happened Nobody wrote in :

Well, the first question is 'is parallelism possible?'. Certainly not for just any problem, in my view.

However, there is a clear example of parallelism in nature: the neural net; we ourselves are in fact an example of that.

The 'language' to describe those nets may be graphical in nature. As far as the structure of the net goes, a lot of stuff then needs specifying. There are some nice open source programs for playing with that.

As I have argued here before, a long time ago (or a short time, compared to, say, the age of the universe), parallelism is not always possible. Or it would bring no benefit, maybe even just add overhead in the form of synchronisation.

You could perhaps say that mathematics is a language that would allow parallelism: an equation could be split up, if the variables allow it, so parts of it can be pre-calculated on different cores. That would give a bit of speedup, maybe a lot in some rare cases. Note that in the above link to Intel, some talk about a 20% speed improvement.

As for databases, I have argued here in the past that you can often get a lot more speedup by just doing parts in hardware, like adding an on-board FPGA. I have given those examples in this group too; there we are talking about factors of up to 100x or so. So all this talk about multiple cores - and more cores in a chip than the competition - is largely marketing.

I dunno what Dr Hobbs is doing; I could not decipher his words, as they were not in my neural database, but perhaps other solutions could give him some speedup, solutions other than parallelism.

As evolution goes, we will see more parallelism stuff, and more configurable hardware, and more complexity.
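
To make the 'equation split up' idea concrete: in Haskell that sort of splitting looks something like the sketch below (a minimal, made-up example using the parallel package's Eval monad). The synchronisation overhead mentioned above is exactly why it only pays off when the individual terms are expensive:

  import Control.Parallel.Strategies (runEval, rpar, rseq)

  -- y = a*x^2 + b*x + c, with the two products computed as separate sparks
  -- and summed once both are done.  Trivially small here; only worth doing
  -- when the terms cost more than the bookkeeping.
  quadratic :: Integer -> Integer -> Integer -> Integer -> Integer
  quadratic a b c x = runEval $ do
    t1 <- rpar (a * x * x)   -- first term, possibly on another core
    t2 <- rpar (b * x)       -- second term
    _  <- rseq t1            -- wait for both before combining
    _  <- rseq t2
    return (t1 + t2 + c)

  main :: IO ()
  main = print (quadratic 3 5 7 123456789)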

Reply to
Jan Panteltje

It would be interesting to have a language that coupled to a big FPGA, or a lot of FPGAs, to use true (broadside clocked logic) parallelism.

Can we not hope that more hardware resources will allow simplicity?

John

Reply to
John Larkin

Is a language the problem? An FPGA is just gates, flipflops, and wires. We have languages for that already. I think the missing part is the ideas.

FPGAs are already used for many problems that take advantage of their parallel clocking. DSP is the best example I can think of right now.

--
These are my opinions, not necessarily my employer's. I hate spam.
Reply to
Hal Murray

On a sunny day (Sun, 14 Jun 2009 09:57:55 -0700) it happened John Larkin wrote in :

That depends on the level from which you look at the problem. Take cellphones, for example. In the past it was incredibly difficult to speak to somebody far away; longer ago still, it was impossible.

From the POV of the users of those gadgets it has become as simple as saying the name of the person, and it will dial and make a connection automagically. From an engineering POV it has become a very, very complex 'solution' to a problem, one that includes many different areas of technology, from batteries to modulation schemes, to encryption, to antenna techniques.

So for us on the design side it will become more complex, more complicated. For those who use what we make it will perhaps become simpler. That means that some of the tools we use will be simpler to use :-) while those tools themselves will be more complex. Maybe our building blocks, like chips, will be more complex, but simpler to connect together.

You will be using Linux in an embedded system, I understood; now that is very complex, but what it does for the user of that system is increase functionality and make life easier (I hope).

We are very complex, a lot more than a snail, and we move faster too, but moving is not more complex, or rather does not feel more complex, to us than to the snail, likely. If we look at the cave-man, could he, with proper training, have understood electronics? Will the next human generations look at the things we do now as extremely simple, as the brain evolves and things like programming and physics become easier for it? Rainman?

Reply to
Jan Panteltje

Sure. You just have to be willing to waste them on a large scale.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal
ElectroOptical Innovations
55 Orchard Rd
Briarcliff Manor NY 10510
845-480-2058
hobbs at electrooptical dot net
http://electrooptical.net
Reply to
Phil Hobbs

Sounds good to me.

John

Reply to
John Larkin

You are so on the list of posters that are not safe to read with a mouthful.

Reply to
JosephKK

Have a blast with your idea. Write the compiler. Warning: the underlying hardware is usually strictly von Neumann procedural.

Wait a minute, though: VHDL and Verilog are kind-of declarative languages; maybe you can program a large FPGA instead to execute your Haskell directly somehow.
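
That has actually been tried: the Lava family of tools describes circuits as ordinary Haskell functions over signals and generates VHDL or netlists from them. A rough sketch of the flavour; the Signal type and register function here are made up for illustration, not any particular tool's API:

  -- Made-up types, just to show the flavour; not any real tool's API.
  newtype Signal a = Signal [a]            -- the value on a wire at each clock tick

  -- A D flip-flop: output is the input delayed by one tick, starting from i0.
  register :: a -> Signal a -> Signal a
  register i0 (Signal xs) = Signal (i0 : xs)

  instance Num a => Num (Signal a) where
    Signal xs + Signal ys = Signal (zipWith (+) xs ys)   -- an adder
    Signal xs * Signal ys = Signal (zipWith (*) xs ys)   -- a multiplier
    negate (Signal xs)    = Signal (map negate xs)
    abs    (Signal xs)    = Signal (map abs xs)
    signum (Signal xs)    = Signal (map signum xs)
    fromInteger n         = Signal (repeat (fromInteger n))

  -- Multiply-accumulate: the multiplier and the adder are separate pieces of
  -- hardware, all active in parallel on every clock edge -- the broadside
  -- clocked logic parallelism, described as plain Haskell.
  mac :: Signal Int -> Signal Int -> Signal Int
  mac x y = acc
    where acc = register 0 (acc + x * y)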

Reply to
JosephKK

That's the premise of using declarative languages for parallelism. Declarative languages are based upon mathematical formulations, and don't have mutable state or side-effects. Or at least they heavily constrain their use; you only use mutable state if you need to use it (e.g. I/O), not because it's the only way to get anything done.
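
In Haskell terms (a trivial sketch; the names are made up): the pure part carries no side effects in its type, and the mutable part is fenced off in IO, where the sequencing is explicit:

  import Data.IORef (newIORef, readIORef, writeIORef)

  -- Pure: the type promises no side effects, so a compiler or runtime is
  -- free to evaluate this wherever and in whatever order it likes.
  mean :: [Double] -> Double
  mean xs = sum xs / fromIntegral (length xs)

  -- Impure: mutable state is still available when you actually need it,
  -- but it lives in IO and can't silently leak into `mean`.
  main :: IO ()
  main = do
    acc <- newIORef (0 :: Double)
    writeIORef acc (mean [1, 2, 3, 4])
    readIORef acc >>= print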

Reply to
Nobody

Me too.

Computers are cheaper than people, and the useful lifespan of computer software is shrinking even faster than the performance/price ratio of hardware is increasing.

Even in general office use (PC + Windows + MS-Office), the software costs almost as much as the hardware. Once you start getting into specialised applications, the software can cost fifty times the hardware.

A language which allowed applications to be written in half the time at the expense of requiring double the processing power would be a net win in most cases. More generally: for time/N and CPU*N for quite a wide range of N, IMHO.

And any Luddites wanting to run hand-crafted asm/C would still reap the benefits of the "bloat" forcing hardware prices down.

Reply to
Nobody

Actually, modern hardware shares some similarities with lazy evaluation (used by most Haskell implementations), but it hides most of this to provide compatibility with the imperative model.

Reply to
Nobody

I still do embedded stuff in assembly, but mostly because I like it. I doubt that any other language would save me a lot of net time, because I spend more time figuring out what to do than I spend coding.

John

Reply to
John Larkin

BTW, I'm not knocking asm/C for embedded and systems programming.

I just think that the fact that C/C++ is the industry standard for writing multi-million-lines-of-code packages is crazy. Not necessarily a fault on the part of a specific entity or project, just the fact that the industry collectively hasn't managed to dig itself out of the hole.

Reply to
Nobody
