576 cores, or 192 per board

formatting link
- see GTC conference

formatting link
- see Tegra K1 board

formatting link
- see Introduction to Parallel Programming - free, online

The Kepler architecture, the building block of the fastest US supercomputer, was originally a GPU (graphics) but has been adapted to general scientific problems via GPGPU - general-purpose computing on graphics processing units.

Nvidia makes the IC with 192 cores on board; three of them can easily be strung together for 576 cores. That is currently $576 retail, coincidentally (or not).

The key to making this work is the C compiler, which automagically handles the parallelism. A Python interface has been built on top of it as well. These language tools were developed as part of the supercomputing effort.

These will run neural networks very well, since the nets are built on dot products, which the cores feast on. Graphics pixel processing, machine learning, and data exploration are all being pursued energetically.
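
For a taste, here is a toy dot-product kernel in Python using Numba's CUDA interface (one of several Python-on-GPU tools; I'm not claiming it is the exact interface developed for the supercomputing effort):

import numpy as np
from numba import cuda

@cuda.jit
def mul(a, b, out):
    i = cuda.grid(1)              # global thread index
    if i < out.shape[0]:
        out[i] = a[i] * b[i]      # one core per vector element

n = 1 << 20
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
d_a, d_b = cuda.to_device(a), cuda.to_device(b)
d_out = cuda.device_array_like(d_a)
mul[(n + 255) // 256, 256](d_a, d_b, d_out)   # launch: blocks, threads per block
dot = d_out.copy_to_host().sum()              # reduce on the host for simplicity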

It turns out the key to making this technology work is to train programmers and scientists how to use it. The course at Udacity, given by experts, fills that need. It is free, and a Stanford professor plus an Nvidia developer give the course.

I believe there are 60,000 people taking it at any one time.

So that's it. My interest is to see what I/O capabilities there are. Nvidia claims high-speed camera interfaces.

Reply to
haiticare2011

Far too few cores for even US SD TV compression :-), assuming one core for each 16x16-pixel macroblock.

For 1920 x 1080 HD and MPEG4 with 1920 x 16 slices, there are 67.5 slices in the picture, so you would have to use one core per 1/3 slice; thus each core would have to handle 640 x 16 pixels 24-60 times each second.
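
The back-of-envelope arithmetic in Python (taking US SD as 720 x 480, which is my assumption):

cores = 576

# US SD: one 16x16 macroblock per core
print((720 // 16) * (480 // 16))   # 1350 macroblocks - more than 576 cores

# 1080-line HD cut into 1920 x 16 slices
print(1080 / 16)                   # 67.5 slices
print(1920 // 3)                   # 640 x 16 pixels per core at 1/3 slice each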

Reply to
upsidedown

OTOH, with sufficient computing power from multiple cores, you could do decent object recognition (border detection) and, based on that, generate high-quality motion vectors.

Of course, this requires a good frame rate at the camera, say 100-300 Hz (which has photon-noise issues): send full pictures at slow rates (1-2 pictures/s) plus intermediate motion vectors, and at the display, generate the intermediate frames from the base picture and motion vectors (the easy part, given good-quality motion vectors :-).

So if you have sufficient computing power at the source, you can significantly reduce the required transfer capacity through compression.
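
A crude sketch of the display side in Python (the function and array names are mine; nearest-pixel block warp, no occlusion handling - real motion compensation is far more involved):

import numpy as np

def midframe(base, vectors, t, blk=16):
    # Shift each blk x blk block of `base` by t times its motion vector
    # (t in [0, 1]) to synthesize an intermediate frame.
    h, w = base.shape
    out = np.zeros_like(base)
    for by in range(0, h, blk):
        for bx in range(0, w, blk):
            dy, dx = vectors[by // blk, bx // blk]
            y = min(max(by + int(round(t * dy)), 0), h - blk)
            x = min(max(bx + int(round(t * dx)), 0), w - blk)
            out[y:y + blk, x:x + blk] = base[by:by + blk, bx:bx + blk]
    return out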

Reply to
upsidedown

I don't know - they have connected hundreds of these together in some cases. It seems to be getting a lot of attention.

Reply to
haiticare2011

Is there a measure of how many grains of silver halide are exposed per second?

Reply to
Robert Baer

When JPEG and MPEG were introduced, the available computing power of the day allowed only 8x8-pixel luminance blocks and 16x16-pixel (half-resolution) chrominance DCTs to be used. With current computing power, much larger DCT blocks could be used, possibly even a single DCT for the whole picture frame: throw away the high-order DCT coefficients and assign the shortest bit sequences to the most common coefficients, giving globally optimized values instead of trying to optimize within 8x8- or 16x16-pixel blocks.
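
A sketch of that idea in Python (whole-frame DCT, keep only the low-frequency corner; this illustrates the suggestion above, not any actual MPEG mode):

import numpy as np
from scipy.fft import dctn, idctn

def whole_frame_dct(frame, keep=0.1):
    # One DCT over the entire frame, then zero all but the
    # low-frequency corner of the coefficients and invert.
    c = dctn(frame, norm='ortho')
    h, w = frame.shape
    kh, kw = max(1, int(h * keep)), max(1, int(w * keep))
    mask = np.zeros_like(c)
    mask[:kh, :kw] = 1.0
    return idctn(c * mask, norm='ortho')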

The MPEG standards are quite primitive: they just try to compress pixel illumination levels, and most motion compensation is just controlling the "formation flying" of pixel groups.

Some dialects of the MPEG4 standard already define sending multiple separate objects, such as foreground and background. Of the background object, only those pixels are sent that are visible at some point between one full picture and the next; pixels never visible during this interval are not sent.

The foreground object (such as a human) is sent once, and movement vectors are applied to it. At the receiver end, the foreground object is moved across the background object, hiding some background pixels on one side and revealing other pixels on the opposite side. However, since the pixels on the revealing side have already been transmitted in the previous background frame, there is no need to transmit any information about what is happening at the revealing edge.
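
The receiver's job is then just compositing, something like this Python sketch (grayscale, with hypothetical sprite/mask/position inputs):

import numpy as np

def composite(background, sprite, mask, pos):
    # Paste the once-transmitted foreground sprite onto the stored
    # background at its decoded position; revealed background pixels
    # need no new data - they are already in `background`.
    out = background.copy()
    y, x = pos
    h, w = sprite.shape[:2]
    out[y:y + h, x:x + w] = np.where(mask, sprite, out[y:y + h, x:x + w])
    return out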

This kind of transmission would be ideal for animated pictures, since the objects are handled as separate objects during production and only fused together at release. Sending the Z-distance for each object would also allow generating 3D views at the receiver end, without using twice the bandwidth.

For real-world objects, it is much harder to categorize pixels into objects and determine their Z-distance (even with a 3D camera), but with sufficient processing power, this should be doable.

I did some image processing and object classification in the 1980s with a 10 MHz 286 processor at EGA/VGA resolution, which took several seconds per frame, but I would assume this could be done these days at 50/60 Hz video frame rates.

Reply to
upsidedown

Good question. I have seen that somewhere. I believe the size of the AgBr crystals is known, but the deposition of Ag in development is empirical. One way is to just compare with digital cameras. This is just a guesstimate. I have used good digital cameras quite a bit (Nikon, Canon). I'd guesstimate that a silver emulsion can do 30 megapixels per square inch per second, at 16-bit exposure resolution.

That would be 60 MB/s.
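
Checking the arithmetic in Python:

print(30e6 * 2 / 1e6)   # 30 Mpixel/s at 2 bytes (16 bits) each = 60 MB/s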

Why do you ask?

Reply to
haiticare2011

This is exactly what Nvidia is up to on their business side. I believe they want to put VR on a tablet. Personally, I am interested in their supercomputer side. As for VR while driving, I recommend living like the Amish.

Reply to
haiticare2011


That is one number. I start with old Kodacolor 25, which gives about 1 gigapixel per square inch at 16 bits for each of R, G, and B, and one frame per second easy: 12 GB/s.
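
The per-frame arithmetic, in Python:

gpix = 1e9                    # 1 Gpixel per square inch
bpp = 3 * 16 / 8              # 16 bits each of R, G, B -> 6 bytes/pixel
print(gpix * bpp / 1e9)       # 6 GB per square-inch frame
# the quoted 12 GB/s corresponds to two such frames per second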

Resolution, speed and color accuracy are all traded off against each other for various films and processes.

?-)

Reply to
josephkk

Hmm, interesting. The power of parallel processing. :) My number is no doubt way too low. Hmm, 1 gigapixel per sq in. How many lines would that translate to?

jb

Reply to
haiticare2011

Over 31,622 per inch, or about 1245 per mm.

"135" film frames are 24mm in the short direction so nearly 30K. Many more on "127" film, even the "110" film used in pocket cameras would give 16K lines (if you could get 25 ASA film for them - they prety much all used 400 ASA (or higher) which has loweer resolution)

--
umop apisdn 


Reply to
Jasen Betts

Those numbers sound far too large.

Figures for good lenses and films are about 100 lp/mm (line pairs/mm). A line pair is one black and one white line, so that would be somewhat similar to 200-300 "pixels"/mm.

Reply to
upsidedown

That is way over the top. Kodacolour 25 or Kodachrome 25 was good for about 16 Mpixels per square inch at no more than 12 bits, and probably more like 10 bits, so 60 MB/sq inch per frame is about right. Faster films would carry less detail and slower ones more. Microfilm or holographic plate would allow very high information content but also insanely long exposure times, so you hit a brick wall in actual data rate.

Plenty of standard cinema cameras could do 30 fps, and the fastest were up to 2,500 fps with a high-speed rotating-prism camera. So I reckon the peak film data rate in that era, assuming colour film and good exposure with lots of light, was around 160 GB/s. It was very specialist kit that could do this, and 2 GB/s would be more typical of 35mm cinema. Comparatively few expensive lenses in that era could support sharp achromatic imaging at the full resolution of the film.
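
The arithmetic in Python (assuming roughly one square inch of film per frame):

frame_B = 16e6 * 3 * 10 / 8   # 16 Mpix, 3 channels, 10 bits -> ~60 MB/frame
print(frame_B * 2500 / 1e9)   # ~150 GB/s at 2,500 fps - "around 160 GB/s"
print(frame_B * 30 / 1e9)     # ~1.8 GB/s at 30 fps - the "2 GB/s typical" figure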

I suspect the fast cameras mostly used something fast like Tri-X with correspondingly shorter exposure times and lower resolution.

--
Regards, 
Martin Brown
Reply to
Martin Brown

Well, I was quietly skeptical of the actual resolution that could be attained. At one time, I was working on an invention to map the contour of the foot via moiré patterns. So I tried to achieve high line pairs with a 35mm camera, but as I remember, I hit a wall at only 200 lp/inch.

But the OP for this question was estimating theoretical capacity, so my lens system was probably a limit.

Reply to
haiticare2011

[snip]

I could also cite Kodak, whose professional PhotoCD scanning service used 24 Mpixel for 35mm 64Base with proprietary PhotoCD imagepack YCC compression. This is in the ballpark of 60 Mbytes of raw RGB data. In practice there was very little real image data in the 64Base compared to the 16Base image of 6 Mpixels; the rest was mostly grain noise, additional expense, and difficulty handling the large image data.

That is rather poor at only 8 lp/mm. Modern lenses are good for 50 lp/mm at their optimum settings and 30 lp/mm for any decent lens wide open, e.g.

formatting link

A pinhole camera at 50mm would be about 0.2mm diameter, f200, and give a resolution of about 2 lp/mm (subject to back-of-envelope slips).
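
The back-of-envelope in Python (geometric blur limit only, ignoring diffraction):

d, f = 0.2, 50      # pinhole diameter and focal length, mm
print(f / d)        # f/250 - same ballpark as the f200 above
print(1 / (2 * d))  # blur-limited resolution ~2.5 lp/mm ("about 2 lp/mm")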

--
Regards, 
Martin Brown
Reply to
Martin Brown


Ordinary films perhaps, but Kodacolor 25 traded almost everything else for resolution and color accuracy. I am quite sure it did at least 1000 lines per mm. It has/had by far the finest particle size.

?-)

Reply to
josephkk

I tried to locate any direct claims of 1000 lines/mm, and at what MTF (Modulation Transfer Function) that reading was taken for that film stock, but without success. Most likely the MTF was so low that the more or less gray lines were barely visible.

The need for 3 x 16 bits at that nominal resolution is clearly overkill. Perhaps a few bits per layer at full resolution would be appropriate. Since the grains in the different color layers are not aligned, the 3x layer multiplier is too much. Perhaps 1 byte/pixel would be more realistic at nominal resolution; thus with the original claim of 1 Gpixel/sq inch this would be 1 GB, and with the later claim of 1000 lines/mm, about 625 MB.
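
In Python:

px = (1000 * 25.4) ** 2   # 1000 lines/mm -> ~645 Mpixel per square inch
print(px / 1e6)           # at 1 byte/pixel: ~645 MB (~625 MB using 25 mm/inch)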

Looking at some WWII footage, the motion picture looks pretty good, but in a freeze frame the picture looks pretty grainy; so for motion pictures, multiple frames hide the grain noise of individual frames quite well (the grains are in different positions in different frames).

For effective compression for digital storage or transmission, you really need to get rid of this grain noise, which requires quite a lot of processing power - which is where a multicore processor will help.

Reply to
upsidedown

Not true: the 3 ASA and 6 ASA microfilms were finer, but required specialist kit to get anything like their full performance.

I can't find an MTF curve for KC 25, but I would be surprised if it went much above 350 lp/mm, and it could well have been worse. There were a handful of fine-grain monochrome films that could do better, but in colour you were always hampered by diffusion of the dyes.

KK25 slide film was a special case because the colour formers were already locked into the film and not able to diffuse.

You are an order of magnitude too optimistic.

Can't find any Kodak datasheets online, but here is Fuji's pro 160 ASA film, and Kodak's 25 would be no better than 3x the linear resolution.

formatting link

KK25 and KC25 were usually good for about 24 Mpixel tops with high-end gear under the most favourable conditions; that is ~100 lp/mm, and about what the best lenses can deliver in practice over most of a frame.

Kodak's professional scanning service for 35mm was 24 Mpixel 64Base .PCD.

--
Regards, 
Martin Brown
Reply to
Martin Brown

I doubt you will find it, because at least for KC 25 it is flat-out wrong. It might be as high as perhaps 350 lp/mm on a good day with the right sort of lens, but those were as rare as hen's teeth back then.

My instinct is that he has misremembered and it was 100 lp/mm, which is an entirely realistic number for the MTF of historic film media. Also, the dynamic range was seldom more than 10-12 bits on colour film.

Here is the best that Kodak can manage today in the film (movie) business (it tails off after 100 lp/mm):

formatting link

Slow silver-halide B&W could support a much higher dynamic range and had an MTF that went very much higher, at the cost of speed.

--
Regards, 
Martin Brown
Reply to
Martin Brown

Kodachrome, not Kodacolor. Kodachrome 25 had higher resolution because the emulsion was so thin--it was just layers of silver halide sensitized differently, and the dye was put in during processing. Print film and ordinary slide film (e.g. Ektachrome) had "colour couplers" in the emulsion which turned into dye during development.

That's why you could process Ektachrome yourself, but not Kodachrome.

Kodachrome 25, RIP.

Phil "Makes all the world a sunny day" Hobbs

--
Dr Philip C D Hobbs 
Principal Consultant 
ElectroOptical Innovations LLC 
Optics, Electro-optics, Photonics, Analog Electronics 

160 North State Road #203 
Briarcliff Manor NY 10510 

hobbs at electrooptical dot net 
http://electrooptical.net
Reply to
Phil Hobbs
