I'm writing up a project that ran from 1988 to 1991. It involved building an ECL-based digital processor that ran at 25MHz, working into a couple of ECL SRAMs. Debugging the hardware was tedious and took more than a year.
There's evidence that there was an alternative proposal which would have dropped the processing speed to 10MHz, and I suspect that the engineers involved might have planned on doing the digital processing in a re-programmable Xilinx FPGA.
What I'd like to find out is whether the Xilinx parts that were available back then could have supported what we needed to do.
We were sampling a sequential process, at 256 discrete points.
We had two 16-bit lists of data in SRAM. One list represented the data we were collecting (Data), and the other was a 15-bit representation of what we thought we were seeing (Results).
Every 100nsec we would have got a 7-bit A/D converter output and would have had to add it to a 16-bit number pulled out of the Data SRAM, and write the sum back into the same SRAM address (though the first version of what we built wrote the sum into a second Data SRAM and ping-ponged between two Data SRAMs on successive passes).
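For anyone who wants it spelled out, the per-point Accumulation step amounts to the following (a sketch in C; the function and array names are mine, not from the original design):

```c
#include <stdint.h>

#define N_POINTS 256

/* One Accumulation pass: add each 7-bit A/D sample into the
   16-bit accumulator for the corresponding sweep point.
   The real hardware did one of these read-add-write cycles
   every 100nsec. */
void accumulation_pass(uint16_t data[N_POINTS],
                       const uint8_t adc_sample[N_POINTS])
{
    for (int i = 0; i < N_POINTS; i++) {
        /* adc_sample[i] holds a 7-bit converter result (0..127) */
        data[i] = (uint16_t)(data[i] + (adc_sample[i] & 0x7F));
    }
}
```

With 7-bit samples and at most 256 passes, the running sum never exceeds 15 bits, which is why a 16-bit Data word was enough.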
After we'd done enough Accumulation passes through our 256-element list of Data points, we'd make an Update pass: up-shift the accumulated data by eight or fewer bits (depending on how many Accumulation passes we'd done - never more than 2^8 (256)), and subtract the 15-bit Result representation of what we thought we had from the shifted accumulated data.
We could then down-shift the difference by anything up to 15 bits (depending on how reliable we thought our existing Results were), add it back onto our 15-bit Result representation of what we thought we had, to improve the reliability of that particular number, and write this improved number back into the Result store.
Obviously, we had to back-fill the most significant bits of the down-shifted number with the sign bit of the difference (and of course that got overlooked in the first version of the ECL-based system).
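Putting the Update steps together for a single point gives something like this (again a sketch in C; names and the exact parameterisation are my guesses at the description above, not the original design):

```c
#include <stdint.h>

/* One Update step for a single sweep point.
   accum      : 16-bit accumulated sum for this point
   result     : current Result estimate (15 significant bits)
   up_shift   : 0..8, normalises accum for the number of passes done
   down_shift : 0..15, attenuates the correction */
int16_t update_point(uint16_t accum, int16_t result,
                     int up_shift, int down_shift)
{
    int32_t shifted = (int32_t)accum << up_shift;  /* normalise for pass count */
    int32_t diff    = shifted - result;            /* error vs. current estimate */

    /* Arithmetic right shift: on the usual two's-complement compilers
       this back-fills the vacated high bits with the sign bit of diff -
       exactly the step the first ECL version overlooked. */
    int32_t correction = diff >> down_shift;

    return (int16_t)(result + correction);
}
```

(Strictly, right-shifting a negative signed value is implementation-defined in C; every hardware implementation would simply wire the sign bit into the vacated positions.)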
In practice, 15 bits was overkill, and the final version of the ECL-based system settled for a maximum down-shift of 7 bits - with the arithmetic for longer accumulations being handed off to the system processor, which wasn't as fast, but didn't have to do the job often enough for it to matter.
The Update pass could run quite a bit slower than 10MHz, but we would have liked to use a barrel-shifter, which could set up any shift from +8 to -7 (or -15) as we did in the ECL system, rather than a shift-register shifter. Would this have been practical in Xilinx FPGAs back in 1988?
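For anyone picturing the difference: a barrel shifter is just log2(width) stages of multiplexers, one stage per bit of the shift count, so any shift settles combinationally rather than taking one clock tick per bit through a shift register. A toy C model of the staged structure (function name is mine):

```c
#include <stdint.h>

/* Barrel shift: compose fixed shifts of 1, 2, 4 and 8 bits, each
   enabled by one bit of the shift count n. In hardware this is four
   mux stages of combinational logic; any shift of 0..15 completes
   in one pass, versus n clock ticks for a shift register. */
uint16_t barrel_right(uint16_t x, unsigned n)
{
    if (n & 1) x >>= 1;
    if (n & 2) x >>= 2;
    if (n & 4) x >>= 4;
    if (n & 8) x >>= 8;
    return x;
}
```

The FPGA question is really whether the parts of the day had the routing and logic density for those four 16-bit mux stages alongside everything else.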