Combinatorial Division?

First, a little bit of information before my question; I don't know where the best place is for this question. I'm building a 64-bit ALU using standard TTL devices. I made a 4-bit adder with fast carry, combined 4 of those with 16 AND gates (which ends up as a 4x4-bit multiplier with an 8-bit product), and then combined 64 of those groups to provide 15 partial products to a Wallace tree (which I also had to make a model for). A final summing adder takes the two outputs from the Wallace tree and adds them together. This final summing adder is built using 181 and 182 TTL devices, so I can also subtract and perform basic logic operations if necessary.

I basically followed the datasheet from Texas Instruments from 1975. ;) I had to make models of the 74274 and 74275 because they weren't included as standard Altera macros I guess (which is very understandable... ;) )

Anyway, my schematic program produces an EDIF netlist which I'm able to import into Altera's software and compile and simulate my schematic for their FPGA devices. I've been simulating my project with great success. So we are on topic with the whole FPGA thing. :)

Now on to my question. Is there a simple combinatorial design for division? So far the whole schematic is made using "combinatorial logic" (I'm not sure that's the right word). The ALU can perform any function without clock inputs, and it's also faster than a lot of the other methods I've found, for example calculating one partial product at a time for multiplication.

For more info on the multiplication circuit I've described you can look at the datasheet below. On pages 7-398 through 7-400 is a schematic for a 16x16-bit multiplier. Mine is pretty much the same, except mine is 4x larger. This is the first time I've used D size schematic layout. :) For more information:

formatting link

Any hints for what I should be looking for, or links, would be great. Searching for combinatorial division I found what looked to be some good hits, but the website was in CGI-error mode. :(

Reply to
logjam

Not that I've ever heard of. If there were, everyone would be using it.

All of the conventional division algorithms (hardware or software) are sequential, producing one or more bits of result per cycle. For one bit per cycle, a simple shift-and-subtract method is easy. Beyond that, typically a small ROM is involved.
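
For illustration, here is that shift-and-subtract method written out as plain C; each loop pass stands in for one clock cycle of a sequential divider, and the names and 32-bit widths are only for the sketch:

#include <stdint.h>

/* One-bit-per-cycle restoring (shift-and-subtract) division sketch.
 * Assumes divisor != 0; each iteration models one clock cycle. */
uint32_t restoring_divide(uint32_t dividend, uint32_t divisor, uint32_t *remainder)
{
    uint64_t rem = 0;       /* partial remainder, one bit wider than the divisor */
    uint32_t quotient = 0;

    for (int i = 31; i >= 0; i--) {
        rem = (rem << 1) | ((dividend >> i) & 1);  /* shift in the next dividend bit */
        if (rem >= divisor) {                      /* trial subtraction fits         */
            rem -= divisor;
            quotient |= 1u << i;                   /* record a 1 quotient bit        */
        }                                          /* otherwise "restore" (keep rem) */
    }
    if (remainder)
        *remainder = (uint32_t)rem;
    return quotient;
}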

Some high-speed dividers work by instead multiplying by the reciprocal, presumably because a high-speed reciprocal unit is either easier to build or faster than a high-speed divider. But it still only gets you a few bits of your result every cycle.

There are a lot of books on computer arithmetic, and a lot of published papers.

Reply to
Eric Smith

Sure there is: one can do a division by cascading stages together without using registers or a clock. However, it (a) is really, really slow in terms of propagation delay, and (b) uses a lot of hardware (2 to 3 times as much as multiplication). Since division is rarely used, most designers seem to go with a clocked divider.

Why not use the built-in multiply operator of most HDLs? One can usually code something like a = b * c, and it will generate an optimal design for the given architecture.

snipped-for-privacy@birdcomputer.ca

Reply to
Robert Finch

I want the ability to build the whole computer using TTL logic, but also to put it in an FPGA. I'm learning VHDL as I go. Since the code is generated from my TTL schematic, I can test the giant circuit before I produce a PCB and solder hundreds of chips.

Just a thought, but wouldn't the delay using cascaded stages without a clock be just as long as if you used a clock? Instead of using the same stage over and over again, it's just duplicated. I think what I will do is use the 64-bit ALU that supports subtraction and addition, throw in two shift registers, and add a state machine to control the timing.

Reply to
logjam

The thing is, though, that this method will use an exorbitant amount of hardware. It will also result in a long path and will be very slow due to propagation delay. On top of that it will depend on your logic family; I believe the 74F series is the fastest? Anyway, the TTL chips will either be fairly slow or consume power like crazy. Given the number of chips you will need, I wouldn't be surprised if the current draw of your divider alone hit the amp range. A clocked division (using the shift-and-subtract method) is fairly simple to implement in an FSM. All you need is a clock, some registers, some combinatorial logic to determine the next state, and the D FFs to hold the current state. That will be easier for you to solder and will consume far less power too! A remainder in a few clock cycles isn't too much of a compromise, I think.
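
To make that concrete, here is a rough register-transfer model of such an FSM divider in C. One call to the step function stands in for one clock cycle; all names and the 32-bit width are made up purely for illustration:

#include <stdint.h>
#include <stdbool.h>

enum div_state { DIV_IDLE, DIV_RUN, DIV_DONE };

struct divider {
    enum div_state state;   /* current-state flip-flops               */
    uint32_t rem, quot;     /* remainder and quotient shift registers */
    uint32_t divisor;
    int count;              /* bit counter                            */
};

/* Caller loads quot with the dividend and sets state = DIV_IDLE, then
 * "clocks" the divider by calling this once per cycle (33 calls total). */
static void divider_step(struct divider *d)
{
    switch (d->state) {
    case DIV_IDLE:                       /* load cycle */
        d->rem   = 0;
        d->count = 32;
        d->state = DIV_RUN;
        break;
    case DIV_RUN: {
        uint64_t r = ((uint64_t)d->rem << 1) | (d->quot >> 31); /* shift rem:quot left   */
        bool ok = (r >= d->divisor);                            /* subtract, test borrow */
        d->rem  = (uint32_t)(ok ? r - d->divisor : r);          /* restore on borrow     */
        d->quot = (d->quot << 1) | (ok ? 1u : 0u);              /* quotient bit into LSB */
        if (--d->count == 0)
            d->state = DIV_DONE;
        break;
    }
    case DIV_DONE:                       /* quot = quotient, rem = remainder */
        break;
    }
}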

Reply to
Isaac Bosompem

The whole ALU with multiplier will take around 2.5 amps. :)

Reply to
logjam

WOW, ~13W ALU!!!

Why not use an FSM based divider?

Reply to
Isaac Bosompem

Logjam, it is hard for a reader of this newsgroup to understand why you are doing what you are doing. Why a combinatorial divider, when division is a rare operation and sequential circuits are more efficient? And why insist on a 30-year-old technology? If you had picked a 50-year-old technology, you would use germanium transistors, diodes, resistors, and capacitors, and you would really learn the very details of circuit design. (I did; it was fun while there was nothing better available!) Or a 20-year-old technology, using AMD bit-slice (2900) chips? Or a 10-year-old original FPGA technology (XC3000), where all logic is implemented in LUTs and flip-flops?

If you absolutely want to make life tough for yourself, what is so special about 1975-vintage TTL?

Reply to
Peter Alfke

I want to do all sorts of things. After this I want to build an 8bit computer using transistors. I saw one on the internet, but it didn't look like it ran any decent software.

I have no idea if I will ever actually build it. Right now I'm learning a lot about FPGAs and binary math, so it's worth it just for that. :)

I want a combinational multiplication circuit because it's a ton faster than a sequential one, as far as I've been able to figure out. I don't mind if the schematic contains tons of parts and takes up 1.5 square feet of board. ;) But I'm not going to do division with that many parts.

I guess the reason I came up with '75 is because that's the year of the Altair and other computers becoming "popular"? It will be fun to see what kind of computer I can design out of commonly available parts, what it would have cost, and how fast it is. So far my initial calculations are 4-4.5 million 32x32-bit multiplications per second. That is pretty fast, comparable to a 50MHz 486.

For a while I wanted to build a computer out of relays, but I'm not that brave yet. That's a LOT of time. :)

I've just completed the soldering on a 19,008 LED display. Talk about current, the thing draws 130A! So, yes...I am crazy. :)

formatting link

formatting link

Reply to
logjam

On a sunny day (23 Feb 2006 19:27:04 -0800) it happened "Peter Alfke" wrote in :

Yes Peter, way to go, use the best of the old and new technology:

formatting link

Reply to
Jan Panteltje

Peter Alfke wrote: [snip]

What, no vacuum tubes? I miss the sound of a mag-drum spinning up - now that's a *real* system :-)

Reply to
David R Brooks

Have you considered tinker toys? IIRC the Boston Computer Museum has a computer built out of tinker toys that is hardwired (hardsticked?) for playing tic tac toe.

I don't see any drive electronics. Do they all just come on at the same time, like a giant green lamp? If they were wired in series, you could power them from a lightning rod.

Reply to
Jeff Cunningham

Out of interest, I saw an old LED-matrix-based device stores use to advertise things, took a peek inside under the LED matrix, and lo and behold, a ton of 74LS164s (serial-in, parallel-out). Unfortunately the store was still using the device, so I couldn't open it up for more investigation. :(

Reply to
Isaac Bosompem

The display is organized as 11x1728. When I had the boards made I thought I could handle 33 rows per refresh, but after doing duty-cycle tests on the LEDs I switched to 11. 1728 bits are loaded into 216 addressable flip-flops, then the data is transferred into a second set of storage registers which drives the display. There are only 11 columns to refresh. I organized the display this way because 8x11 is a very nice font. The first thing I want to do is program a pong game for the Altair. After all, the Altair and other computers of its time were known for blinking lights; I figure this is the next evolution of blinking lights. :) At 90 mA per LED and a maximum of 1728 LEDs on at any given time, well, you do the math. :)

Reply to
logjam

The most interesting thing I've read yet is the simple fact that the first relay computers could have been built as early as 1890. Maybe we'd be running around playing StarGate, or worse...Battlestar Galactica... :)

Reply to
logjam

Way to go :) Excellent way to gain grounding experience, as nobody will pay you to build one today!! The oldie moldies here are likely to enjoy gabbing about the past too.

Ignore Peter, and those who are completely clueless about what drives hobbyists. They are the ones who are clueless, working only on projects that yield 8-digit dollar signs.

I kept one transistor computer, actually the only small one I had worked with ... the others were much too large to pack around, especially the NCR 315 which was five rack bays plus the CRAM unit and a console. It's a rare desktop PDP-8 lab machine with the smoked plexiglass covers. If you wanted to build a transistor machine I would suggest an original PDP-8 design as it's well documented and there is lots of PDP-8 software. I would do it based on the original module design, as the boards are pretty inexpensive if done as panels, and I would use a pcb backplane instead of wirewrapping it as the original was done. I wish I had kept one drum and tube computer too ... especially a single rack one like a Bendix G15. Anybody have a G15 they want to send to a good home?

There are a number of people now doing retro designs both in TTL with microprocessors, and in FPGA's for similar reasons. It's fun, good learning experience, and yields an excellent grounding in low level design and assembly processes. There are a number of folks doing this for both old home computers and early video games.

Not crazy .... DRIVEN! ... something lacking in many current college graduates these days who went into engineering for the paycheck, and not for the love of technology. If you are halfway local to Colorado, look me up. Otherwise email; there are a few of us who still love technology for the sake of technology, rather than just a paycheck.

My home project has been taking a few thousand older FPGA's to build a home super computer into a desktop sized box (with water cooling, and a lot of memory, fibre channel disks, etc) .... :) It takes a lot of current too .... a few thousand amps for even a "small" super computer.

Reply to
fpga_toys

The answer is that there is no combinatorial divider that makes sense. Division is done by successive approximation. The simplest algorithm for division is a one-bit-per-cycle operation. There is also a two-bit-per-cycle algorithm that's pretty efficient; that's what I used in the mini-computers that I designed in the 70s and early 80s. If you have high-speed multipliers, which modern FPGAs have in abundance, then the highest-performance algorithm is a Newton-Raphson convergence algorithm. There are similar algorithms for square root. The clearest explanation of how to do a convergence divide is in the Cray-1 manual; if you do a Google search you should be able to find a copy on the web.
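
For what it's worth, the heart of the convergence method is just the iteration x' = x(2 - d*x), which roughly doubles the number of correct bits every pass. Here is a small C sketch; the linear seed and the iteration count are my own illustrative choices, not anything taken from the Cray manual, and it assumes d > 0:

#include <math.h>

/* Divide by multiplying with a Newton-Raphson reciprocal approximation. */
double nr_divide(double a, double d)
{
    int e;
    double m = frexp(d, &e);                 /* normalize: d = m * 2^e, m in [0.5, 1)   */
    double x = 48.0/17.0 - 32.0/17.0 * m;    /* seed for 1/m; hardware uses a small ROM */

    for (int i = 0; i < 4; i++)              /* each pass doubles the accurate bits     */
        x = x * (2.0 - m * x);               /* Newton step: x' = x(2 - m*x)            */

    return a * ldexp(x, -e);                 /* undo the normalization, then multiply   */
}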

Reply to
Josh Rosen

Actually I was more impressed by the Bryant drums in the CDC (and I believe the Burroughs too) that were in Dr. Dicky's CalPoly collection, continuing to spin long after cutting power to the rack. It's been some 30+ years, but I'm still impressed with their size and the long rows of heads.

I did keep a "little" drum, about a half cubic foot, from a military computer that got scrapped at UCLA. And I also kept a 31-inch Data Products disk platter and an electromagnetic linear stepper actuator with little air bellows for head loading from an XDS940 that got scrapped from Tymshare in the 70's. It's been on the wall of my office for 25 years, and I love taking the "tape viewer" out and showing people the bits on the surface, which are easily visible by eye and a small glass.

The travesty is that some clueless folks trashed Dr. Dicky's historical lab, which was a wonderful collection of WORKING machines from the late 50's and early 60's. I spent several months getting a badly damaged NCR 315 working for him; that was probably the most valuable experience of my life. Another student/faculty member had dropped a screwdriver on the power rails with the system live, shorting the negative and positive power rails, and took out nearly every germanium diode and transistor in the machine, which took a couple weeks of soldering to repair enough of the boards. Bringing the machine back up from that horrible accident was a grounding in low-level computer/machine design that became an experience of a lifetime. The sequencer for that design was a large array of magnetic cores along the top of the bays, where the cores were pulsed in sequence, and secondary loops from each core controlled all the major FFs and data paths for the processor design. Microcoding by wires and pulse transformers.
Reply to
fpga_toys

In my defense (I am thin-skinned), the original posting did not tell the whole story. If I had known that this is a hobbyist with a historical interest, my answer would have been far more friendly. I was very much involved in the TTL era (at Fairchild, 1969 to '72), involved in the launching of many MSI circuits, and I wrote most of the "Fairchild TTL Applications Handbook" in 1973. So I am not "completely clueless" as -toys insinuates. Peter Alfke, from home.

Reply to
Peter Alfke

I'm not sure that longhand division is "successive approximation", as it's a clear and precise algorithm when applied by computers, and it does yield a combinatorial divider that is only slightly more complex than a similar multiplier design derived from longhand multiplication.

The design for such a divider is identical to a similar multiplier, but with subtraction and a mux at each stage to keep the original value if a borrow occurs. The "solution" is in the mux selectors for the quotient, and the value at the end of the subtractors is the remainder.
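
As a rough C model of that stage structure (restoring long division): unrolling the loop below gives the combinatorial array, one row of subtractors plus a 2:1 mux per quotient bit. The names and 32-bit width are only illustrative:

#include <stdint.h>

/* Array divider sketch: no clock, one subtract-and-mux stage per bit.
 * Assumes divisor != 0. */
void array_divide(uint32_t dividend, uint32_t divisor,
                  uint32_t *quotient, uint32_t *remainder)
{
    uint64_t part = dividend;   /* running partial remainder */
    uint32_t q = 0;

    for (int stage = 31; stage >= 0; stage--) {
        uint64_t shifted = (uint64_t)divisor << stage;
        int borrow = (part < shifted);       /* borrow out of this row of subtractors */
        if (!borrow)
            part -= shifted;                 /* 2:1 mux: keep old value on borrow     */
        q |= (uint32_t)(!borrow) << stage;   /* the mux select is the quotient bit    */
    }
    *quotient  = q;
    *remainder = (uint32_t)part;
}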

This was fairly fast, and easy to implement as a bit serial/parallel design I did a while back for a distributed arithmetic problem.

I suspect, but didn't work it out, that there is a Booth-recoding analog for such a divider design. You can use the top bits of the remainder to select the shift amount, which cuts the cycles in half for a serial design, so I didn't worry about it that hard.

Reply to
fpga_toys
