Image Sensor Interface.

Hi, I am planning to read an image sensor using an FPGA, but I am a little confused about a few things. Hopefully someone here can help me understand the following:

Note: the image sensor output is an ANALOG signal. The datasheet says the READOUT clock is 40 MHz.

  1. How is reading an image sensor with an ADC different from reading an arbitrary analog signal with an ADC?

- An arbitrary signal is read using the Nyquist theorem, i.e. sample the signal at 2 times its highest frequency, and the data rate (hence the memory required per unit time) is: sampling rate x ADC resolution.

- Is this different in the case of an image sensor? Why? Because each pixel output is an analog level and each of those levels gets converted into a digital value? Do I use an ADC running at 40 MSamples/second since the pixels come out at 40 MHz? How do I calculate the required memory?

Is it simply 40 MS/s x 16 bits (ADC resolution) for each pixel, or just 16 bits per pixel? If each frame is 320 x 256 then data per frame is (320 x 256) x 16 bits; why not multiply this by 40 MS/s like you would for any other analog signal?

Thanks,

Reply to
ertw

Just realized after posting ... is it because for the image sensor I am only reading an amplitude (one level per pixel) with the ADC, as opposed to an arbitrary signal where the whole waveform is sampled at regular intervals?

Reply to
ertw

It somewhat depends on whereabouts in the sensor's output signal processing chain you expect to pick up the signal. Is this a raw sensor chip that you have? Is it hiding behind a sensor drive/control chipset? Is it already packaged, supplying standard composite video output?

You're right to question this. Of course, at base it isn't - it's just a matter of sampling an analog signal. But the image sensor has some slightly strange properties. First off, the analog signal has already been through some kind of sample-and-hold step. In an idealised world, with a 40 MHz readout clock, you would expect to see the analog signal "flat" for 25 ns while it delivers the sampled signal for one pixel, and then make a step change to a different voltage for the next pixel, which again would last for 25 ns, and so on.

In the real world, of course, it ain't that simple. First, you have the limited bandwidth of the analog signal processing chain (inside the image sensor and its support chips) which will cause this idealised stair-step waveform to have all manner of non-ideal characteristics. Indeed, if the output signal is designed for use as an analog composite video signal, then it will probably have been through a low-pass filter to remove most of the staircase-like behaviour. Second, even before the analog signal made it as far as the staircase waveform I described, there will be a lot of business about sampling and resetting the image sensor's output structures.

In summary, all of this stuff says that you should take care to sample the analog signal exactly when the camera manufacturer tells you to sample it, with the 40 MHz sample clock that they've so thoughtfully provided (I hope!).

Of course it is not different. If you get 16 bits, 40M times per second, then you have 640Mbit/sec to handle.

If the camera manufacturer gives you a "sampled analog" output and a sampling clock, then yes. On the other hand, if all you have is a composite analog video output with no sampling clock, you are entirely free to choose your sampling rate - bearing in mind that it may not match up with pixels on the camera, and therefore you are trusting the camera's low-pass filter to do a good job of the interpolation for you.

eh?

Only the very highest quality cameras give an output that's worth digitising to 16 bit precision. 10 bits should be enough for anyone; 8 bits is often adequate for low-spec applications such as webcams and surveillance.

I have no idea what you mean. 40 MHz is the *pixel* rate. Let's follow that through:

40 MHz, 320 pixels on a line - that's 8 microseconds per line. But don't forget to add the extra 2us or thereabouts that will be needed for horizontal synch or whatever. Let's guess 10us per line. 256 lines per image, 10us per line, that's 2.56 milliseconds per image - but, again, we need to add a margin for frame synch. Perhaps 3ms per image.

Wow, you're getting 330 images per second - that's way fast.

But whatever you do, if you sample your ADC at 40 MHz then you get 40 million samples per second!
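If it helps, here is that back-of-the-envelope arithmetic as a small Python script. The 2 us line-sync and the frame-sync margin are only guesses, exactly as above:

# Rough timing/throughput check for a 320 x 256 sensor read out at 40 MHz.
# The sync overheads are guessed values from the discussion above.
pixel_clock_hz = 40e6
pixels_per_line = 320
lines_per_frame = 256
line_sync_s = 2e-6             # guessed horizontal-sync overhead per line
frame_sync_margin_s = 0.44e-3  # guessed vertical-sync/frame overhead

line_time_s = pixels_per_line / pixel_clock_hz + line_sync_s        # ~10 us
frame_time_s = lines_per_frame * line_time_s + frame_sync_margin_s  # ~3 ms
frames_per_s = 1.0 / frame_time_s                                   # ~330 fps

adc_bits = 16
data_rate_bps = pixel_clock_hz * adc_bits                           # 640 Mbit/s

print(f"line time  : {line_time_s * 1e6:.1f} us")
print(f"frame time : {frame_time_s * 1e3:.2f} ms")
print(f"frame rate : {frames_per_s:.0f} fps")
print(f"ADC output : {data_rate_bps / 1e6:.0f} Mbit/s")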

~~~~~~~~~~~~~~~~~~~~~~~

More questions:

What about colour? Or is this a monochrome sensor?

Do you get explicit frame and line synch signals from the camera, or must you extract them from the composite video signal?

Must you create the camera's internal line, pixel and field clocks yourself in the FPGA, or does the camera already have clock generators in its support circuitry?

~~~~~~~~~~~~~~~~~~~~~~

You youngsters have it so easy :-) The first CCD camera controller I did had about 60 MSI chips in it, an unholy mess of PALs, TTL, CMOS, special-purpose level shifters for the camera clocks (TSC426, anyone?), sample-and-hold and analog switch devices to capture the camera output, some wild high-speed video amplifiers (LM533)... And the imaging device itself, from Fairchild IIRC, was only NTSC-video resolution and cost around $300. Things have moved on a little in the last quarter-century...

--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which 
are not the views of Doulos Ltd., unless specifically stated.
Reply to
Jonathan Bromley

Nyquist relates to sinusoids and periodicity in the signal. The sampling period as it relates to Nyquist with your image sensor is the frame rate, not the pixel clock/ADC sample rate. The two are not related in a meaningful way. Fuhget about it.

While reading Proakis, I remember distinctly thinking mathematics is the wrong language to impart an intuitive grasp of some topics for most folks. Discrete time signals would top my list of examples. (How do you take something so conceptually simple, and fill 120 pages with dense prose? Somebody should know the details, in all its glorious minutiae, if only to pass it to the next generation. But how much of it is useful to a practicing engineer?)

Intuitively, you are capturing image frames. The pixel content makes sense only in context of the frame. Calculate the memory required to hold a complete frame.

Because each sample is at most one pixel, not an entire 320x256x16 frame buffer.
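For example, the frame-buffer arithmetic for the numbers in this thread (320 x 256 pixels, 16-bit samples) is simply:

# Memory needed to hold one complete frame.
width, height = 320, 256
bits_per_pixel = 16

bits_per_frame = width * height * bits_per_pixel   # 1,310,720 bits
bytes_per_frame = bits_per_frame // 8              # 163,840 bytes

print(f"{bits_per_frame} bits/frame = {bytes_per_frame} bytes/frame "
      f"= {bytes_per_frame // 1024} KBytes/frame")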

Reply to
MikeWhy

MikeWhy wrote: (snip)

Yes, Nyquist is completely unrelated to the signal coming out of an image sensor, but it is important in what goes in.

Specifically, the image sensor samples an analog image in two dimensions, and, for the result to be correct, the image at the sensor surface must not contain spatial frequencies higher than half the pixel sampling frequency (i.e. no detail finer than two pixel pitches). Sometimes one trusts the lens to guarantee that; in other cases an optical low-pass filter is used.

-- glen

Reply to
glen herrmannsfeldt


Guys, Thanks a lot for the help. Jonathan your explanation was great ...

Answers to the questions you asked -

- It's a monochrome sensor

- I do get explicit frame and line signals from the sensor

- The sensor does not have any clock-generating circuitry (I have to provide the clock, i.e. the pixel clock, to the sensor; not sure if I was clear about that in the previous post).

I have a few more questions regarding data storage and processing (I think the readout from the sensor is a little clearer in my head now).

The sensor is a packaged integrated circuit with processing applied to the final-stage analog signal (that's where I am planning to read it using an ADC).

The output is actually 4 differential signals (one for each column), meaning I will need four ADCs (all four video output signals come out simultaneously). The resolution that I want is 16 bits.

Now, that means I have four parallel channels of 16 bits coming into the FPGA every 25 ns that I need to store somewhere. The total data per frame is: (320 x 256) x 16 bits = 1,310,720 bits/frame OR 163,840 bytes/frame or 160 KBytes/frame.

Do you think I can store that much within a Xilinx FPGA? I am trying to do 30 frames per second, which means I have roughly 33 ms per frame, but using a 40 MHz clock each frame can be read out in 512 microseconds, with a whole lot of dead time after each frame (unless I can run the sensor at a slower pixel clock).

The idea is to transfer data over the PCI bus to the computer, and I can't go over 133 mega-transfers per second. Since I am reading 4 channels @ 40 MHz that works out to 160 Msamples per second, so it's not possible to transfer the data on the fly over the bus (unless I am misunderstanding something). Is there a way to transfer data on the fly over the PCI bus other than slowing the pixel clock?

Or how can I efficiently transfer the data over the bus (even if I have to store it and then use a slower clock to transfer it out)?

Reply to
ertw


Which do you mean? Two pixels per cycle is the Nyquist-critical rate. Half-pixel aliasing is a spatial resolution problem, not a spectral aliasing (Nyquist) issue.

Reply to
MikeWhy
160 KBytes / frame.

Do you think I can store that much within a Xilinx FPGA? I am trying to do 30 frames per second, which means I have roughly 33 ms per frame, but using a 40 MHz clock each frame can be read out in 512 microseconds, with a whole lot of dead time after each frame (unless I can run the sensor at a slower pixel clock).

========= A block RAM FIFO comes to mind. Maybe even 4 of them, one for each column stream. Search the docs for BRAM.

The frames are small enough, and 33 ms is long enough, that you likely won't need to double-buffer - for example, by staging frames in a larger, slower memory to ride out bus contention.
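A rough feasibility sketch of that buffering arithmetic, in Python. The sustained PCI throughput figure here is only an assumed number for illustration, not anything measured:

# Rough BRAM-FIFO feasibility check: does one frame fit in block RAM, and
# how quickly could a sustained PCI burst drain it before the next frame?
width, height, bits_per_pixel = 320, 256, 16
frame_bits = width * height * bits_per_pixel        # 1,310,720 bits

bram_block_bits = 18 * 1024                          # one Spartan-3 block RAM
blocks_needed = -(-frame_bits // bram_block_bits)    # ceiling division

pci_sustained_bytes_s = 80e6    # ASSUMED sustained PCI throughput, bytes/s
frame_bytes = frame_bits / 8
drain_time_s = frame_bytes / pci_sustained_bytes_s   # ~2 ms
frame_period_s = 1 / 30.0                            # ~33 ms at 30 fps

print(f"one frame needs {blocks_needed} x 18 Kbit block RAMs")
print(f"PCI drain time {drain_time_s * 1e3:.1f} ms "
      f"vs {frame_period_s * 1e3:.0f} ms frame period")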

Reply to
MikeWhy

Sure, I've clicked the shutter a few times. I was even around when Sigma splatted in the market with the Foveon sensor. All the same, Bayer aliasing isn't related to Nyquist aliasing and sampling frequency. The OP needn't concern himself with Nyquist considerations. Yes?

Reply to
MikeWhy

MikeWhy wrote: (snip regarding Nyquist and image sensors)

It isn't usually as bad as audio, but an image with very high spatial frequency content can alias on an image sensor. (For images this is usually called moiré. Aliasing can also cause color effects based on the pattern of the color filters on the sensor.)


-- glen

Reply to
glen herrmannsfeldt

Obviously, from a technology and signal-processing point of view they live in different worlds. But I don't really see what's so different between thinking about spatial frequency and thinking about temporal frequency.

But then, part of my problem is that I learned how to think about the frequency/time or spatial-frequency/distance duality not through engineering, but physics: if I want to understand what a convolution is doing, my first resort even today is to think about optical transforms.

Absolutely agreed that the OP probably has no control at all over the spatial bandwidth (MTF) and spatial sampling concerns of his image sensor/lens combination.

--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which 
are not the views of Doulos Ltd., unless specifically stated.
Reply to
Jonathan Bromley

Sure, you want 16 bits, but what is the signal-to-noise ratio that the sensor actually delivers? If your sensor only has, say, 10 bits of precision, then your expensive 16-bit ADCs are wasted.

Well, your pixel clock is going to depend on the bandwidth and settling time of the analog path. Your settling-time requirements depend on the number of bits you actually want. If you want more precision it always takes longer to settle. Don't forget this in the design of your analog path.

Actually it's 133 megabytes/s, but 33 megatransfers/s, since one transfer is one 32-bit word, i.e. 4 bytes.

You can either use a fast pixel clock and a large FIFO, or a slower pixel clock and no FIFO.

But if you only want 30 fps, which is quite slow, and a small resolution of 320x256, this is only about 2.5 million pixels per second. So you can use a pixel clock of (say) 1 MHz, which outputs 4 pixels every microsecond as you said, and then a 4-input muxed ADC instead of 4 ADCs (much cheaper), taking 4 million samples/s. In this case you are roughly twice as fast as necessary, which will allow you to use up to about half the frame time as exposure time on your sensor. If you need longer exposures you will need a faster pixel clock, to allow more time for exposure and less time for data handling.

Now if you want to make high-speed recordings (like 300 fps), to take videos of bullets exploding tomatoes, you'll need to use the fastest pixel clock you can get and also very powerful lights. But if you only need a slow 30 fps you don't need to use expensive analog parts and ADCs.

To be efficient you need burst transfers, so you will always need some form of FIFO somewhere, and DMA.

Note that since your throughput is quite small you could use USB instead of PCI, which would allow more freedom in locating the camera further from the computer.
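Putting numbers on that pixel-clock/exposure trade-off, using the example 1 MHz clock and 4-way mux from above:

# Pixel-clock / exposure budget for 30 fps readout through a 4-way muxed ADC,
# using the example figures from this post (1 MHz pixel clock, 4 parallel outputs).
width, height, fps = 320, 256, 30
required_pixel_rate = width * height * fps             # ~2.5 Mpixel/s

pixel_clock_hz = 1e6                                    # example slow pixel clock
channels = 4                                            # 4 column outputs read in parallel
achieved_pixel_rate = pixel_clock_hz * channels         # 4 Mpixel/s through one muxed ADC

frame_period_s = 1 / fps                                # ~33.3 ms
readout_time_s = width * height / achieved_pixel_rate   # ~20.5 ms
exposure_budget_s = frame_period_s - readout_time_s     # time left for exposure

print(f"need {required_pixel_rate / 1e6:.2f} Mpixel/s, "
      f"get {achieved_pixel_rate / 1e6:.0f} Mpixel/s")
print(f"readout {readout_time_s * 1e3:.1f} ms, "
      f"exposure budget {exposure_budget_s * 1e3:.1f} ms per frame")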

What do you want to do with this camera?

Reply to
PFC

MikeWhy wrote: (snip on aliasing and imaging)

Bayer aliasing and (spatial) sampling frequency are exactly related to Nyquist aliasing; however, the OP was asking about the time-domain signal coming out of a CCD array. That signal has already been sampled, and Nyquist should not be a consideration (unless one is sampling the CCD output at a lower frequency). The OP didn't explain the optical system at all, so I can't say whether that is a concern or not.

-- glen

Reply to
glen herrmannsfeldt

Thanks guys, all the suggestions and explanations have been very helpful!

One more question regarding the FIFO inside the FPGA. I am planning to use two 12-bit ADCs (4 differential inputs in total) sampling at 40 MHz. That means I will have 120 KBytes (960 Kbits) of data per frame to store before I transfer it over the bus at a lower rate. It seems like a FIFO is the best way to buffer this data and transfer it with a slower clock later, or maybe even four separate FIFOs, one per channel (240 Kbits each).

The Xilinx Spartan-3 XC3S4000 has a total of 1,728 Kbits of block RAM (4 RAM columns, 24 RAM blocks per column, 18,432 bits per block RAM), which is enough for 960 Kbits per frame, and I guess I can use that for the FIFOs?

I would like a FIFO of size (20K x 12 bits) = 240 Kbits. Can I just instantiate that using the Core Generator?

Not sure if I understand all this right ... the XC3S4000 is a little big for what I need to do in terms of logic ... but then again it seems like the only one with enough block RAM for the FIFOs, unless I am misunderstanding something. Please advise ...

Thanks !

Reply to
ertw

The Xilinx Spartan-3 XC3S4000 has a total of 1,728 Kbits of block RAM (4 RAM columns, 24 RAM blocks per column, 18,432 bits per block RAM), which is enough for 960 Kbits per frame, and I guess I can use that for the FIFOs?

I would like a FIFO of size (20K x 12 bits) = 240 Kbits. Can I just instantiate that using the Core Generator?

Not sure if I understand all this right ... the XC3S4000 is a little big for what I need to do in terms of logic ... but then again it seems like the only one with enough block RAM for the FIFOs, unless I am misunderstanding something. Please advise ...

========== Yikes. The 4000 is a bit big if that's all you're doing. I guess this is the fun part of the job, and I wouldn't dream of depriving you of it. :) The choices are to slow it down; store it off chip; or suck it up and get the big chip for its block ram. I like using an overly large chip least, but only you know the constraints of why so fast and what's possible. 3 ns DDR2 SDRAM is pretty cheap these days ($10 single quantity for 16Mx16 bit).
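For reference, here is the block-RAM budget worked out from the figures quoted above (four 20K x 12-bit FIFOs, 96 block RAMs of 18,432 bits on the XC3S4000). The exact count you get from the Core Generator will depend on how it packs the block RAM aspect ratios, so treat this as an order-of-magnitude check:

# Block-RAM budget for four 20K x 12-bit FIFOs on an XC3S4000,
# using the 96 x 18,432-bit block RAM figures quoted in the thread.
fifo_depth, fifo_width, num_fifos = 20 * 1024, 12, 4
block_bits, total_blocks = 18_432, 96

fifo_bits = fifo_depth * fifo_width            # 245,760 bits per FIFO
blocks_per_fifo = -(-fifo_bits // block_bits)  # ceiling: 14 blocks
blocks_used = blocks_per_fifo * num_fifos      # 56 of 96 blocks

print(f"each FIFO: {fifo_bits} bits -> {blocks_per_fifo} block RAMs")
print(f"total: {blocks_used} of {total_blocks} block RAMs used")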

So, I take it the device doesn't exist yet?

Reply to
MikeWhy
