Supercomputer progress

Lawrence Berkeley Lab announced the results from a new supercomputer analysis of climate change. They analyzed five West Coast "extreme storms" from 1982 to 2014.

The conclusion from a senior scientist is that "it rains a lot more during the worst storms."

Reply to
jlarkin

I'm surprised they even noticed that detail. Too bad they never talked to anybody over at NOAA about how things work.

Reply to
Cydrome Leader

formatting link

---------<quote>----------------- Lawrence Berkeley National Laboratory scientists are looking to make highly detailed, 1 kilometer scale cloud models to improve climate predictions. Using current supercomputer designs of combining microprocessors used in personal computers, a system capable of making such models would cost about $1 billion and use up 200 megawatts of energy. A supercomputer using 20 million embedded processors, on the other hand, would cost about $75 million and use less than 4 megawatts of energy, according to Lawrence Berkeley National Laboratory researchers.

-------------<end quote>--------------

4 megawatts/200 megawatts - do the computers factor in their heat generation in the climate models?

John ;-#)#

Reply to
John Robertson

Does LBL measure energy in megawatts?

Do bigger computers predict climate better?

Oh dear.

Reply to
John Larkin

Probably don't have to bother. It's lost in the rounding errors.

No, but the media department won't be staffed with people with degrees in physics (or any hard science).

That remains to be seen, but modelling individual cloud masses at the 1km scale should work better than plugging in average cloud cover for regions broken up into 100km by 100km squares. IEEE Spectrum published an article on "Cloud computing" a few years ago that addressed this issue.

John Larkin doesn't know much, and what he thinks he knows mostly comes from Anthony Watts' climate change denial web site.

Reply to
Anthony William Sloman

On a sunny day (Tue, 26 Apr 2022 16:56:33 -0000 (UTC)) it happened Cydrome Leader snipped-for-privacy@MUNGEpanix.com wrote in <t49881$clq$ snipped-for-privacy@reader1.panix.com>:

There is a lot driven by the need to publish. Somebody I knew did a PhD in psychology or something; he got promoted on a paper about the sex life of some group living in the wild. I asked him if he went there and experienced it...

No :)

If you read sciencedaily.com every day, there are papers and discoveries that are either too obvious to read or too vague to be useful. "Do plants have feelings?" and "Do monkeys feel emotions?" sort of things. Of course they do. Today: "Prehistoric People Created Art by Firelight". Of course they did; there were no flashlights back then in a dark cave.

Reply to
Jan Panteltje

On a sunny day (Tue, 26 Apr 2022 13:53:08 -0700) it happened John Larkin <jlarkin@highland_atwork_technology.com> wrote in snipped-for-privacy@4ax.com:

I have read that CERN uses more power than all the windmills in Switzerland deliver together.

Reply to
Jan Panteltje

Yes, that sounds correct. CERN uses about 200MW when everything is running. Switzerland has a little over 70MW of windmills installed. Of course, those never actually deliver 70MW. More like 25% of that, on average.
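
Taking those figures at face value: 70 MW * 0.25 = 17.5 MW of average wind output, against CERN's ~200 MW when everything is running, so a factor of ten or so.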

Most of CERN's electricity comes from the Genissiat dam in nearby France.

Jeroen Belleman

Reply to
Jeroen Belleman

In Jan's ever-so-expert opinion.

Anything published in the peer-reviewed literature gets reviewed by people who do know something about the subject - the author's peers - who have to accept it as a useful and meaningful contribution.

Max Planck didn't bother sending out any of Einstein's 1905 papers for review. He had enough confidence in his own judgement not to bother, and he was right.

Depends what you mean by feelings.

Obviously they do.

That's all popular science. Peer reviewed science is rather more technical.

Reply to
Anthony William Sloman

Anthony William Sloman snipped-for-privacy@ieee.org wrote in news: snipped-for-privacy@googlegroups.com:

1 nm scale, not kilometer.

I want to marry this woman...

formatting link
Reply to
DecadentLinuxUserNumeroUno

I think the jury has already returned: there is climate change/global warming, and it is probably already too late to do much about it, given the short time in which countries and people would need to react.

Especially with all the global-warming denialists who don't care about it, and with the current state of the art and science of generating non-greenhouse-gas energy.

I suppose that I won't be around to see how bad it will get which could be a good thing.

I would love to have a supercomputer to run LTspice.

boB

Reply to
boB

At last! We'll all be dead in 8 years. I'd rather be drowned or blown away than bored to death.

Reply to
jlarkin

I thought one of the problems with LTspice (and SPICE in general) performance is that the algorithms don't parallelize very well.

Reply to
Dennis

On 2022-04-28 18:26, boB wrote: [...]

In fact, what you have on your desk *is* a super computer, in the 1970's meaning of the words. It's just that it's bogged down running bloatware.

Jeroen Belleman

Reply to
Jeroen Belleman

LTspice runs on multiple cores now. I'd love the next-gen LTspice to run on an Nvidia card. 100x at least.

Reply to
John Larkin

Even supercomputers from the '80s were not as fast as many of today's computers, and their memory was often 16,000 times smaller than that of a typical laptop today.

Reply to
Ricky

The "number of threads" setting doesn't do anything very dramatic, though, at least last time I tried. Splitting up the calculation between cores would require all of them to communicate a couple of times per time step, but lots of other simulation codes do that.

The main trouble is that the matrix defining the connectivity between nodes is highly irregular in general.

Parallelizing that efficiently might well need a special-purpose compiler, sort of similar to the profile-guided optimizer in the guts of the FFTW code for computing DFTs. Probably not at all impossible, but not that straightforward to implement.
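
To make that concrete, here is a toy C++ sketch of the pattern (names invented, nothing to do with LTspice's actual internals): each time step the rows of an irregular sparse matrix get farmed out to threads, and everything must meet at a join before the next step.

// Toy sketch only: farm the rows of an irregular sparse (CSR) matrix
// out to threads once per time step, then join. Names are invented;
// this shows the general pattern, not LTspice's code.
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct Csr {                            // compressed sparse row storage
    std::vector<std::size_t> rowPtr;    // nRows+1 entries
    std::vector<std::size_t> col;       // column index of each nonzero
    std::vector<double>      val;       // value of each nonzero
};

static void spmvRows(const Csr& A, const std::vector<double>& x,
                     std::vector<double>& y, std::size_t r0, std::size_t r1)
{
    for (std::size_t r = r0; r < r1; ++r) {
        double sum = 0.0;
        for (std::size_t k = A.rowPtr[r]; k < A.rowPtr[r + 1]; ++k)
            sum += A.val[k] * x[A.col[k]];
        y[r] = sum;
    }
}

void oneTimeStep(const Csr& A, const std::vector<double>& x,
                 std::vector<double>& y, unsigned nThreads)
{
    const std::size_t nRows = A.rowPtr.size() - 1;
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < nThreads; ++t) {
        std::size_t r0 = nRows * t / nThreads;
        std::size_t r1 = nRows * (t + 1) / nThreads;
        pool.emplace_back(spmvRows, std::cref(A), std::cref(x),
                          std::ref(y), r0, r1);
    }
    for (auto& th : pool) th.join();    // every core meets here, every step
}

In a real circuit matrix the rows are nothing like evenly sized, so a static split like this wastes cores; that is where the special-purpose scheduling mentioned above would come in.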

Cheers

Phil Hobbs

Reply to
Phil Hobbs

Indeed. The Cray X-MP in its 4-CPU configuration had a 105MHz clock, a whopping (for the time) 128MB of fast core memory, and 40GB of disk. The one I used had an amazing (for the time) 1TB tape cassette backing store. It did 600 MFLOPS with the right sort of parallel vector code.

That was back in the day when you needed special permission to use more than 4MB of core on the timesharing IBM 3081 (approx 7 MIPS).

Current Intel 12th-gen desktop CPUs are ~4GHz, with 16GB of RAM and >1TB of disk (and the upper limits are even higher). That combo does ~66,000 MFLOPS.
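
Taking those two numbers at face value, 66,000 MFLOPS / 600 MFLOPS is roughly 110, so one such desktop is on the order of a hundred X-MPs by that crude measure.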

SPICE simulation doesn't scale particularly well to large-scale multiprocessor environments because of its many long-range interactions.

Reply to
Martin Brown

If it is anything like chess problems then the memory bandwidth will saturate long before all cores+threads are used to optimum effect. After that point the additional threads merely cause it to run hotter.

I found setting max threads to about 70% of those notionally available produced the most computing power with the least heat. After that the performance gain per thread was negligible but the extra heat was not.

Having everything running full bore was actually slower and much hotter!
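
For anyone who wants to repeat the experiment, a crude C++ loop along these lines (sizes and kernel are made up, purely a sketch) shows the knee: run the same memory-bound job with 1..N threads and watch where the speedup flattens out.

// Rough-and-ready scaling test (sizes and kernel are made up): time the
// same memory-bound sum with 1..N threads and see where it stops helping.
#include <chrono>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

int main()
{
    const std::size_t n = std::size_t(1) << 26;   // 64M doubles, ~512MB, memory bound
    std::vector<double> data(n, 1.0);
    const unsigned maxThreads = std::thread::hardware_concurrency();

    for (unsigned nt = 1; nt <= maxThreads; ++nt) {
        std::vector<double> partial(nt, 0.0);
        auto t0 = std::chrono::steady_clock::now();

        std::vector<std::thread> pool;
        for (unsigned t = 0; t < nt; ++t)
            pool.emplace_back([&, t] {
                std::size_t lo = n * t / nt;
                std::size_t hi = n * (t + 1) / nt;
                for (std::size_t i = lo; i < hi; ++i)
                    partial[t] += data[i];        // false sharing ignored here
            });
        for (auto& th : pool) th.join();

        double total = std::accumulate(partial.begin(), partial.end(), 0.0);
        double secs = std::chrono::duration<double>(
                          std::chrono::steady_clock::now() - t0).count();
        std::printf("%2u threads: %.3f s (total %.0f)\n", nt, secs, total);
    }
}

The thread count where the times stop improving is the knee; beyond it you are mostly just making heat.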

I'm less than impressed with profile-guided optimisers in compilers. The only time I tried it in anger, the instrumentation code interfered with the execution of the algorithms to such an extent that the results were meaningless.

One gotcha I have identified in the latest MSC is that when it uses the wider SSE2, AVX and AVX-512 types implicitly in its code generation, it does not align them on the stack properly, so they are sometimes split across two cache lines. I see two distinct speeds for each benchmark code segment depending on how the cache alignment falls.

Basically the compiler forces stack alignment to 8 bytes and cache lines are 64 bytes, but the compiler-generated objects in play are 16, 32 or 64 bytes. The alignment failure fractions are 1:4, 2:4 and 3:4 respectively.

If you manually allocate such objects you can use pragmas to force optimal alignment, but when the code generator chooses to use them internally you have no such control. Even so, the MS compiler does generate blisteringly fast code compared to either Intel or GCC.
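
For the manually allocated case, here is a minimal C++ sketch of what I mean (names invented, assumes AVX is available). Standard alignas and aligned_alloc do the same job as the vendor pragmas, and MSVC spells the heap call _aligned_malloc. It does nothing for the temporaries the code generator invents on its own, of course.

// Sketch of the manual case only: alignas/aligned_alloc keep 32- and
// 64-byte objects off cache-line boundaries. MSVC also accepts
// __declspec(align(64)) and uses _aligned_malloc/_aligned_free instead
// of std::aligned_alloc.
#include <cstdlib>
#include <immintrin.h>

struct alignas(64) Accumulator {    // whole struct starts on a cache line
    __m256d lane[2];                // 2 x 32 bytes = exactly one 64-byte line
};

int main()
{
    // 512 bytes, 64-byte aligned (size must be a multiple of the alignment).
    auto* buf = static_cast<double*>(std::aligned_alloc(64, 64 * sizeof(double)));
    if (!buf) return 1;
    for (int i = 0; i < 64; ++i) buf[i] = i;

    Accumulator acc{};                        // stack object, 64-byte aligned
    acc.lane[0] = _mm256_load_pd(buf);        // aligned load cannot split a line
    acc.lane[1] = _mm256_load_pd(buf + 4);

    double out[4];
    _mm256_storeu_pd(out, _mm256_add_pd(acc.lane[0], acc.lane[1]));

    std::free(buf);
    return out[0] > 0 ? 0 : 1;                // keep the optimiser honest
}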

Reply to
Martin Brown
