What is the problem with this?

These days people mostly use binary (digital) processors, and so do I. They are great, but I'm bored with them.

I'm interested in designing the circuitry without the concept of digital; I mean there are only 1's and no 0's.

The ALU, the main part of a CPU, is responsible for all arithmetic calculations.

If I want to design the ALU from scratch without the concept of binary, using only a single signal instead, what difficulties would one have to face? For example, instead of 5 + 5 = 10, can I go with this technique:

11111 + 111111 = 1111111111
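A minimal sketch of what that scheme amounts to, assuming a number n is simply a string of n '1' characters (the representation and the function names here are my own illustration, not something specified above):

    def to_unary(n):
        # Represent a non-negative integer as a string of n '1' characters.
        return "1" * n

    def unary_add(a, b):
        # Addition in this scheme is just concatenation of the marks.
        return a + b

    print(unary_add(to_unary(5), to_unary(5)))  # '1111111111' (ten marks)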

Thanks.

Reply to
junee

Interesting idea, but it is in fact a fallacy. Not having a '1' can be represented as having a '0'. What you are doing is replacing the '0' with a blank space, but the concept of '0' is still there.

Reply to
paas

Where did I say I'll use '0's?

Let's see another example:

6/2 = 3; instead of that, 111111 / 111 = 11.

Got it?

Thanks. What about other people? Any ideas from you?

Reply to
junee

Where did I say 'I'll use 0's'?

Let's see another example of an arithmetic calculation.

6/2 = 3; instead of that, 111111 / 11 = 111, like this.
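For what it's worth, a minimal sketch of that division in code, assuming the same string-of-ones representation as in the first post (my own illustration):

    def unary_div(dividend, divisor):
        # Repeatedly remove the divisor's marks; each removal adds one mark to the quotient.
        quotient = ""
        while len(dividend) >= len(divisor) > 0:
            dividend = dividend[len(divisor):]
            quotient += "1"
        return quotient

    print(unary_div("111111", "11"))  # '111'  (6 / 2 = 3)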

Got it?

Thanks.

Reply to
junee

First, getting somebody to make you a prototype. ;>)

Sure, but five and six are eleven, not ten. ;>)

How are you going to represent fractions?

Large numbers will cause you trouble too. "Place" systems (binary, octal, decimal, etc.) allow large numbers to be represented by fewer digits, but you would apparently be forced to represent large numbers directly. That is, representing the value "one million" requires seven digits in decimal and 20 digits in binary, but in your "placeless" system it requires one million digits.
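A quick way to check those digit counts (a minimal Python sketch; the unary "digit" count is just the value itself):

    n = 1_000_000
    print(len(str(n)))     # 7 decimal digits
    print(n.bit_length())  # 20 binary digits
    print(n)               # 1,000,000 marks in a placeless (unary) system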

How are you going to store the numbers? Binary circuits use registers with "places" corresponding to the values of the digits; the first digit is "ones", the next "twos", the next "fours" and so on, with a one or a zero in each "place" to indicate whether that place's value is included. The values of two registers are compared according to some rule and the result stored in another register.

Your system, to do the same range of numerical calculations as a binary system, will require on the order of 50,000 times more memory (at values around one million), since each register will have to be that much larger in order to store the same value as the binary register can.
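To put a number on the blow-up at a given value (my own sketch; the factor is simply the value divided by the number of bits binary needs for it):

    def blowup_factor(value):
        # Marks needed in unary (the value itself) divided by bits needed in binary.
        return value / value.bit_length()

    print(blowup_factor(1_000_000))  # 50000.0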

You're welcome, if you consider the foregoing "helpful".

Mark L. Fergerson

Reply to
Alien8752

--
We stopped using stones to count with a long time ago, ;)

JF
Reply to
John Fields

Oh, thanks for the clarifications.

Where am I making a mistake?

Do you mean with the representation of large numbers?

Or something else?

Please clarify.

Thanks

Reply to
junee

It would be more efficient to use a binary number to represent the number of '1's in the unary number. For example, 11111 could be represented by 0101 = 0x05, and 1111111111 by 1010 = 0x0A. That way you'd only need a relatively small number of bits (e.g. 79 bits to represent, say, Avogadro's number) rather than the somewhat impractical 6.02E23 which would otherwise be required.
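A quick check of that arithmetic (a small sketch; Avogadro's number rounded as usual):

    import math

    # Ten unary marks collapse to a 4-bit binary count.
    print(len("1111111111"), hex(len("1111111111")))  # 10 0xa

    # Bits needed for a counter that can reach Avogadro's number.
    print(math.ceil(math.log2(6.02214076e23)))        # 79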

Reply to
Spehro Pefhany

Binary stones.

Reply to
John Doe

That means I don't have any other choices;

I have to follow what people are following now.

Do you people know of any other numbering systems?

Thanks

Reply to
junee

I knew that studying calculus was a waste of time!

PN2222A

Reply to
PN2222A

Some early computers used BCD, with a cluster of 4 bits representing 0 through 9, and big numbers stored and manipulated in decimal. The bit codings were usually 8-4-2-1, but sometimes 4-2-2-1. There were even floating-point decimal machines, made with tubes, which boggles my tiny mind.

IBM also, for some strange reason, used "star code" in parts of some machines, like the 1401, where each decimal digit 0..9 was represented by two bits set out of five. There are, I think, exactly 10 such codes.

Many modern CPUs support packed BCD, at various levels. I think there is a real or proposed IEEE decimal math standard.
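For the curious, a small sketch of packed BCD and of checking two-out-of-five code words (my own illustration; it does not reproduce the 1401's exact encoding):

    def to_packed_bcd(n):
        # Pack each decimal digit of n into its own 4-bit nibble.
        packed = 0
        for shift, digit in enumerate(reversed(str(n))):
            packed |= int(digit) << (4 * shift)
        return packed

    def is_two_out_of_five(code):
        # A valid two-out-of-five word has exactly two of its five bits set.
        return bin(code & 0b11111).count("1") == 2

    print(hex(to_packed_bcd(1234)))                       # 0x1234
    print(sum(is_two_out_of_five(c) for c in range(32)))  # 10 valid code words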

Some things to google!

John

Reply to
John Larkin

BTW, I have heard many times of the idea of an asynchronous CPU. The results of operations are not synchronized to a clock, but propagate onward at their natural speed. Synchronization is done by delay matching at the critical points. Ideally, that should work faster than clocked logic; perhaps the variance of the delays kills the idea.

VLV

Reply to
Vladimir Vassilevsky

Intel CPUs use some "chunks" of asynchronous logic for, e.g., instruction decoders, but what I've heard the presenters of papers on this topic stress is that their goal is usually power reduction much more so than speed.

It seems that there should be a textbook of asynchronous logic design out there by now. Besides going over the usual discussion of how you avoid race conditions with your min terms/max terms, it would also discuss the various clever schemes people have come up with for handshaking between multiple asynchronous modules, and perhaps various historical results, like the Hennessy & Patterson book does. (When I took a class using it in college years ago, the professor was pretty darned good, so typically the "meat" of H&P was just review anyway, and it's not like the math was hard, but I always looked forward to their end-of-chapter "real life examples" discussions.)

---Joel

Reply to
Joel Koltner

Skyduck? Is that you?

Reply to
krw

Naaaah! He doesn't have enough toes ;-)

...Jim Thompson

--
| James E.Thompson, P.E.                           |    mens     |
| Analog Innovations, Inc.                         |     et      |
| Analog/Mixed-Signal ASIC's and Discrete Systems   |    manus    |
| Phoenix, Arizona  85048    Skype: Contacts Only  |             |
| Voice:(480)460-2350  Fax: Available upon request |  Brass Rat  |
| E-mail Icon at http://www.analog-innovations.com |    1962     |
             
 I love to cook with wine     Sometimes I even put it in the food
Reply to
Jim Thompson

  1. For communication issues, how would you clock your data out? Most communication with the outside world happens by timing or transition, meaning the duration of time that a signal is at a particular level, or a transition from 0 to 1 or 1 to 0. Even internally, how would you shift your data of all 1's through the ALU?
  2. By only using Logic 1 bits, you've dramatically increased the number of bits needed to represent numbers. Your math stated 11111 + 11111 = 1111111111. Compare that to the common binary 0101 + 0101 = 1010. Your answer takes more than twice the number of bits to represent the same result that binary puts in 4 bits.
  3. By your math example you are using 10 bits to represent 10 values, whereas those same 10 bits in binary would give 1024 values (zero inclusive). Imagine the problem of handling a simple multiplication like 10 * 10 = 100. Your example would need 100 bits, whereas binary only needs 7 bits to represent 100 = 1100100.
  4. What would be the data borders? I mean, in binary we have bit (1), nibble (4), byte (8), word (16), Dword (32) and 64-bit (umm... 64). How could you tell when one data "denomination" starts and stops? How could you sync a constant stream of 1's? What would be the representation of On/Off, True/False, Yes/No?
  5. Even with the size of hard drives today, how could you store any sizable amount of (usable) data using only 1's? (Refer back to your original math example, and see the sketch after this list.) Consider the number of 1's you'd have to store to represent 100 or 1,000,000: you'd use nearly a megabit of disk space just to store the number 1,000,000, which in binary is 11110100001001000000 = 20 bits = 1 word + 1 nibble = 2 bytes + 1 nibble. How would the program itself be stored using all 1's? And how much disk space, time and memory would it take to read, write, decode and process a data stream of all 1's... a DC signal?
  6. Even given the fast processors of today, how could you expect to process any real amount of (usable) data given point #5? Forget using Fourier transforms or even representing Pi to more than 2 decimal places, both for speed and for the data space used. Your system would collapse onto itself just from all the reading and writing it would have to do, even if it could communicate with memory or disk subsystems using only ones.
  7. An all-Logic-1 system would be a pure DC computer: every state always on, no modulation, no level shifts, nothing to phase compare... just a current sink. Totally impractical and improbable. It's like having a light switch with both positions being 'On': it causes high electric bills and loss of valuable sleep, i.e. a waste of money and a waste of time.
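A rough sketch of the storage comparison in points 3 and 5 (my own numbers, assuming one stored bit per unary mark):

    def unary_bits(value):
        # One stored bit per mark, so the cost equals the value itself.
        return value

    def binary_bits(value):
        return value.bit_length()

    for v in (100, 1_000_000):
        print(v, unary_bits(v), binary_bits(v))
    # 100       -> 100 bits unary vs 7 bits binary
    # 1_000_000 -> about a megabit unary vs 20 bits binary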


Reply to
Anonymous

I thought somebody some day would re-discover Analogue!! A-Men brother!

Hardy

Reply to
HardySpicer

This is a form of code and it is very inefficient indeed! Don't go there! The beauty of any positional number system is its weighted digits: larger numbers don't need huge numbers of digits. You could try base 4, perhaps, if you could figure out the electronics.
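As a sketch of what weighted digits buy you, here is a base-4 conversion (my own example; the electronics is left as the hard part):

    def to_base4(n):
        # Each digit position carries a weight of 4**k, so large values stay short.
        if n == 0:
            return "0"
        digits = []
        while n:
            digits.append(str(n % 4))
            n //= 4
        return "".join(reversed(digits))

    print(to_base4(1_000_000))       # '3310021000'
    print(len(to_base4(1_000_000)))  # 10 digits instead of 1,000,000 marks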

Hardy

Reply to
HardySpicer

We just use them on Obama's supporters now! :)


Reply to
Jamie
