Graphics card fans

Why do graphics cards only monitor the speed of one fan? If the other one fails, it won't know!

Reply to
Commander Kinsey

Are graphics cards used mostly for games? And maybe bitcoin mining? It's weird that one PC can contain more compute power than existed on Earth in 1970, and be used for games.

I suggested to Mike E that LT Spice should use a graphics card for computation, but I guess that's not going to happen now.

A modest Windows PC can spin Solidworks 3D images around just fine.

--

John Larkin         Highland Technology, Inc 

Science teaches us to doubt. 

  Claude Bernard
Reply to
jlarkin

Is SPICE trivially parallelizable in that way?

Graphics cards have thousands of compute cores. Most operations on 3D mesh vertices and pixel "shading" are trivially parallelizable; the operation pipeline is programmed for a given task, and then each core runs its algorithm on a given vertex or pixel of the scene without needing any information from the others.
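
As a rough illustration, a minimal CUDA sketch of that model (the kernel and its names are hypothetical, not from any real renderer): each thread shades exactly one pixel and never reads another thread's data.

// Hypothetical per-pixel kernel: every thread handles one pixel
// independently, so no thread ever waits on another.
__global__ void shade(const float *brightness, unsigned char *out,
                      int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height)
        return;
    int i = y * width + x;
    // Trivial per-pixel operation: apply a gamma curve.
    out[i] = (unsigned char)(255.0f * powf(brightness[i], 1.0f / 2.2f));
}

// Launch with one thread per pixel, e.g. in 16x16 blocks:
//   dim3 block(16, 16);
//   dim3 grid((width + 15) / 16, (height + 15) / 16);
//   shade<<<grid, block>>>(d_brightness, d_out, width, height);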

Early graphics cards didn't have re-programmable pipelines; there was a somewhat fixed set of operations, with some configurable options, that could be applied in series to vertices and pixels.

Modern GPU code is written in a dialect of C, with some features that don't fit the single-instruction, multiple-data (SIMD) model removed, like pointers.

Reply to
bitrex

GPUs are limited to single-precision floating point IIRC.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

Every frame, if the scene is in motion, the lighting, reflections, and shadows have to be re-computed, to give one example.

Reply to
bitrex

Was true circa 2005.

Many modern GPUs can do double-precision floating point; how well any particular one does it depends on the architecture, though.

Reply to
bitrex

That is to say, they can do double precision, but in general they're not optimized for it.
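
For a concrete sense of the tradeoff, here's a hypothetical pair of CUDA kernels. Both compile and run on any modern card; the double version just hits far fewer FP64 units on consumer parts (often somewhere between 1/8 and 1/32 of the FP32 rate, depending on the architecture).

// Single- and double-precision a*x + y. Same code shape; very
// different throughput on consumer GPUs, which dedicate most of the
// die to FP32 units.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

__global__ void daxpy(int n, double a, const double *x, double *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}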

Reply to
bitrex

On 14.07.20 at 17:50, bitrex wrote:

No. Inverting the conductivity matrix is hard because you cannot do the pivoting in advance; which pivots you need only becomes apparent during the work.

For transient analysis, every time step builds on the previous one(s), and you cannot parallelize across them because you don't know the starting conditions of the future ones.
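
In sketch form (illustrative C++ only, nothing like SPICE's actual internals), the dependency chain looks like this:

#include <vector>

// Each time step needs the result of the previous one, so the outer
// loop is inherently serial; only the work inside one step (the
// matrix solve) can be spread across cores.
struct State { std::vector<double> v; };   // node voltages

State solve_step(const State &prev, double t, double dt)
{
    // Placeholder for one Newton / matrix-solve step. In a real
    // simulator the pivot order of the conductance matrix depends on
    // its values, so it can't be chosen ahead of time.
    State next = prev;
    return next;
}

void transient(State state, double t_stop, double dt)
{
    for (double t = 0.0; t < t_stop; t += dt)
        state = solve_step(state, t, dt);  // serial chain
}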

It has been tried often; a working solution would have been worth gold. I remember the Weitek array coprocessor back in 80386 times, and an attempt with the NS16032. They never got a speedup factor of more than 2 or 3.

Everything really interesting is NP-complete. :-(

Cheers, Gerhard

Reply to
Gerhard Hoffmann

That's what I figured.

There are probably ways to leverage GPUs in the process somehow, but I expect it would be a 3 or 4 times speedup over a general-purpose CPU, not a ten-thousand-times speedup the way scene rendering is.

Reply to
bitrex

Another problem of practical value that's NP-complete is the pen-plotter problem, or the "postal-route inspection" problem: how do you connect the vertices of a vector image with lines such that the total Manhattan distance the plotter head covers in the process is minimal?

As opposed to the shortest-path problem on directed and undirected graphs, which has fast exact algorithms, an exact solution to this one is NP-complete. There are heuristics that do pretty well, though.
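
A typical heuristic of that kind is greedy nearest-endpoint selection: after finishing each stroke, jump to whichever remaining stroke has the closest endpoint and draw it from that end. A small illustrative sketch (hypothetical names, not a production plotter driver); it's O(n^2) and not optimal, but it usually cuts pen-up travel dramatically compared to drawing strokes in file order.

#include <algorithm>
#include <cstdlib>
#include <limits>
#include <vector>

struct Pt { int x, y; };
struct Stroke { Pt a, b; };

static int manhattan(Pt p, Pt q)
{
    return std::abs(p.x - q.x) + std::abs(p.y - q.y);
}

std::vector<Stroke> plan(std::vector<Stroke> todo)
{
    std::vector<Stroke> order;
    Pt pen = {0, 0};                       // plotter home position
    while (!todo.empty()) {
        size_t best = 0;
        bool flip = false;
        int bestd = std::numeric_limits<int>::max();
        for (size_t i = 0; i < todo.size(); ++i) {
            int da = manhattan(pen, todo[i].a);
            int db = manhattan(pen, todo[i].b);
            if (da < bestd) { bestd = da; best = i; flip = false; }
            if (db < bestd) { bestd = db; best = i; flip = true; }
        }
        Stroke s = todo[best];
        if (flip) std::swap(s.a, s.b);     // draw from the nearer end
        order.push_back(s);
        pen = s.b;                         // pen ends at the far end
        todo.erase(todo.begin() + best);
    }
    return order;
}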

Reply to
bitrex

It's similar to, but distinct from, the traveling salesman problem.

Reply to
bitrex

LT Spice can already use multiple cores, so something is parallelizable. The petaflop computers, used for weather and physics simulation, have thousands of CPUs.

Spice is usually fine, but once in a while I want 1000x or so more speed.

--

John Larkin         Highland Technology, Inc 

Science teaches us to doubt. 

  Claude Bernard
Reply to
jlarkin

GPU cores aren't general-purpose CPUs; they're serial-pipelined and optimized for SIMD instructions.

A multicore general-purpose CPU has good cache coherency: there's a fast on-die cache shared by all 4 or 8 cores (or whatever), so any of the processors can quickly look at data the others are working on.

It's hard to achieve that kind of cache coherency with thousands of cores. If core #127 needs to "see" what core #562 is working on, it usually has to go out to video RAM, which is not nearly as fast as on-die cache.
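
CUDA exposes exactly that split: threads within one block can share a small fast on-chip memory, but anything that crosses block boundaries has to round-trip through global (video) RAM. A hypothetical reduction kernel shows it:

// Sums 256-element tiles. Threads in a block cooperate through fast
// __shared__ memory; combining results *across* blocks can only
// happen through global memory. Assumes blockDim.x == 256.
__global__ void block_sum(const float *in, float *out, int n)
{
    __shared__ float tile[256];            // fast, but block-local only
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                       // syncs *this* block only

    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s)
            tile[threadIdx.x] += tile[threadIdx.x + s];
        __syncthreads();
    }
    // Per-block partial sums go out through global RAM; another
    // kernel (or the host) has to combine them.
    if (threadIdx.x == 0)
        out[blockIdx.x] = tile[0];
}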

Reply to
bitrex

People with advanced credentials in e.g. meteorology or computational biology or physics, plus the computer science of optimizing multiprocessing systems, get paid the biggo buckos.

Reply to
bitrex

It probably could be, if you changed the scheme so as to impose a speed-of-light propagation limit. That way you could divide the schematic up into chunks, do time steps locally, and then propagate the changes to adjacent chunks.

That gets rid of every node having to know about every other node on every time step, and makes FDTD codes such as my POEMS facility parallelize well. (It works that way.)
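
A sketch of that chunking scheme (illustrative C++, not POEMS itself): advance every chunk one step using only local data, then swap edge ("halo") values with the neighboring chunks.

#include <vector>

struct Chunk {
    std::vector<double> state;             // local node values (non-empty)
    double left_halo = 0, right_halo = 0;  // copies of neighbors' edges
};

void step_local(Chunk &c)
{
    // Advance c.state by one time step using only c.state and the
    // halos; nothing outside the chunk is touched.
}

void exchange_halos(std::vector<Chunk> &chunks)
{
    for (size_t i = 0; i + 1 < chunks.size(); ++i) {
        chunks[i].right_halo    = chunks[i + 1].state.front();
        chunks[i + 1].left_halo = chunks[i].state.back();
    }
}

void run(std::vector<Chunk> &chunks, int steps)
{
    for (int t = 0; t < steps; ++t) {
        for (auto &c : chunks)             // independent: parallelize here
            step_local(c);
        exchange_halos(chunks);            // only neighbors communicate
    }
}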

Linear algebra also can be made to vectorize well on the right hardware.

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC / Hobbs ElectroOptics 
Optics, Electro-optics, Photonics, Analog Electronics 
Briarcliff Manor NY 10510 

http://electrooptical.net 
http://hobbs-eo.com
Reply to
Phil Hobbs

Some are. I always buy the ones that are, since Milkyway at home loves them.

Reply to
Commander Kinsey

But much better games than you could play in 1970.

And there's distributed computing - see Folding at home, Einstein at home, Milkyway at home, etc.

And many normal programs use graphics cards as well nowadays - even stuff like Photoshop.

Some programs can't.

Reply to
Commander Kinsey

I suppose one has to buy the "Pro" variant rather than the gamer/consumer variant.

Moychendizing, moychendizing

Reply to
bitrex

I get them second hand, so I don't know what they were aimed at originally. Four of them are R9 280Xs. I thought those were high-end gaming cards in their day. It could have been before they stopped putting double precision in everything. But it could be that the double-precision ones are made in smaller quantities, so you don't get the mass-production discount. Or it could be that they're more expensive to make. But the reason they make them without it is that games don't use double precision. Less double-precision hardware means space for more single-precision hardware on the die.

Reply to
Commander Kinsey

This was my proposal back in Z80 / AM9511 / AM9512 times: just one node per square mm of DUT chip. I also tried to port Spice 2G6 to Interactive Unix on my 80286/287 Bullet board. What a fiasco.

64K segments conspiring with f2c as the Fortran compiler. It never could have worked.

But that computer had 2 MB and a 70 MB Fujitsu disk. That was pure hubris in the hands of an EE & CS student. Our VAX-11 at the semiconductor institute had two 300 MB Fujitsu Eagles for everybody together. :-) And we made real chip designs on it.

Later I had a T800 transputer cluster that would have mapped nicely onto this problem. But I never could find a customer for any T800 solution I proposed. Everything went x86.

The only exception was smuggling a Parsytec cluster to East Berlin. But little Gerhard did not dare to. Little did I know: some weeks later, all of a sudden, came German reunification, and nobody would have cared anymore about smuggling technology to an East German railway company that went belly-up anyway. Sigh.

Gerhard

Reply to
Gerhard Hoffmann
