Get real. You've got to read out the sensor, not merely expose it. Unless you have a highly integrated CMOS imager with piles of memory on it, there is essentially only one (*) serial data path out of the device, and you have to squeeze all the pixels through it. Shortening the exposure of a CCD sensor is easy; it's done all the time to control exposure. But it is *very* hard to get the image OUT of a multi-megapixel CCD in less than a few milliseconds.
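To put rough numbers on it (the frame size, pixel clock and tap count below are illustrative assumptions, not any real device's specs):

    # Back-of-the-envelope readout time: every pixel through the serial path(s).
    # All figures here are assumed for illustration, not taken from a datasheet.
    pixels      = 4_000_000    # 4-Mpixel frame
    pixel_clock = 40e6         # 40 MHz readout clock
    taps        = 1            # serial output paths (see the (*) footnote)

    print(pixels / (pixel_clock * taps) * 1e3, "ms")   # -> 100 ms: nowhere near a few ms

Even with four taps you're still at 25 ms, which is why the exposure trick alone doesn't save you.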
There is precisely one way in which you CAN do this: it's called "Time Delay Integration" or TDI and it's been widely used for hi-res military sensors for ages. If you can arrange that the motion is at a constant speed along the sensor's Y axis, then you can organise the sensor's vertical shift clock so that it keeps in step with the motion, and a fairly long exposure can be perfectly sharp. Brilliant for taking aerial reconnaissance photos from a 'plane in flight. But it needs lots of smarts in the motion sensing and the camera readout electronics.
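Here's a toy 1-D simulation of the idea in Python (purely illustrative - a real TDI system also has to worry about clock phase, speed mismatch and noise):

    import numpy as np

    # Toy 1-D TDI sketch: the scene drifts one sensor row per clock, and the
    # charge is shifted in step with it, so every scene sample is integrated
    # N times yet still comes out one pixel wide.

    N      = 16                         # TDI stages (sensor rows)
    scene  = np.zeros(64)
    scene[20] = 1.0                     # a bright point in the scene
    scene[33] = 0.5                     # and a dimmer one

    charge = np.zeros(N)                # one charge packet per row
    output = []

    for t in range(len(scene) + N):
        # each row integrates the scene sample currently over it;
        # scene point p sits over row r at time t = p + r
        for r in range(N):
            p = t - r
            if 0 <= p < len(scene):
                charge[r] += scene[p]
        # vertical shift, one row per clock, in step with the motion:
        # the last row is read out, an empty packet enters at the top
        output.append(charge[-1])
        charge = np.roll(charge, 1)
        charge[0] = 0.0

    out = np.array(output)
    print(out.max(), N * scene.max())   # point is N times brighter, still 1 pixel wide

The point of the sketch: because the shift clock tracks the motion exactly, each charge packet only ever sees one scene point, so the long effective exposure costs you no sharpness.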
If I read the OP's idea correctly, he wants to correct the picture for motion blurring after it's been exposed. This is perfectly possible if you know the exact behaviour of the motion. The OP should find out about "deconvolution". To take a simpler example: Suppose you have a stream of data, and before you get to see that data you know that it has been processed by a moving-average filter. How can you reconstruct the original data points, if you know the exact form of the moving-average filter function?
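As a concrete sketch of that moving-average case (Python with numpy/scipy assumed; this is the noise-free toy version - with real, noisy data you'd want a regularised method such as Wiener deconvolution rather than an exact inverse):

    import numpy as np
    from scipy.signal import deconvolve

    # If you know the exact filter that smeared the data, you can divide
    # it back out.  Noise-free toy case only.

    rng  = np.random.default_rng(0)
    data = rng.random(50)                    # the original samples

    kernel   = np.ones(5) / 5                # known 5-point moving average
    filtered = np.convolve(data, kernel)     # what we actually get to see

    recovered, remainder = deconvolve(filtered, kernel)
    print(np.allclose(recovered, data))      # True: original reconstructed exactly

The same principle applies in 2-D: if the motion is known, the blur is a known convolution kernel, and deconvolving with it recovers the sharp picture, within the limits set by noise.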
(*) Some sensors have multiple serial output paths - see Dalsa's offerings, for example. The number of paths is never more than a small handful, though. Broadside readout of the lines into local on-chip memory is the only hope of getting a whole frame out in a few milliseconds.