How to develop a random number generation device

You define yourself by the ideas you refuse to consider. So I suppose you'll still be running Windows 20 years from now.

John

Reply to
John Larkin

I think that hardware engineers get a better grounding in logic design (although I haven't looked at modern CS syllabuses, so I may be out of date).

But it is mostly a cultural thing. Software houses view minimum time to market and first-mover advantage to gain maximum market share as more important than correct functionality. And it seems they are right. Just look at Microsoft Windows vs IBM's OS/2: a triumph of superb marketing over technical excellence!

And I have bought my fair share of hardware that made it onto the market bugs and all, too. My new fax machine caught fire. Early V.90 modems only half worked, etc.

You are treating the symptoms and not the disease. Strongly typed languages already exist that would make most of the classical errors of C/C++ programmers go away. Better tools would help in software development, but until the true cost of delivering faulty software is driven home the suits will always go for the quick buck.

Regards, Martin Brown

Reply to
Martin Brown

Halting problems are intrinsically hard; nothing much you can do about that. So hard, in fact, that the hailstone numbers of the Collatz conjecture are only expected to terminate eventually in the repeating pattern 4 - 2 - 1 - 4 ... ad infinitum.

However, no proof exists, and a brute-force search out to around 2^58 has failed to find any starting number that doesn't.
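
For anyone who hasn't met them, here is a minimal C sketch of the hailstone iteration (the starting value 27 is just an example); the point is that nobody can prove the while loop below terminates for every input, which is exactly what a compiler or static checker would have to do:

    #include <stdint.h>
    #include <stdio.h>

    /* Hailstone (Collatz) iteration: halve when even, 3n+1 when odd.
       Termination is conjectured, not proven; note also that 3*n + 1
       can overflow for very large n, so this is purely illustrative. */
    static uint64_t hailstone_steps(uint64_t n)
    {
        uint64_t steps = 0;
        while (n != 1) {
            n = (n & 1) ? 3 * n + 1 : n / 2;
            steps++;
        }
        return steps;
    }

    int main(void)
    {
        printf("27 takes %llu steps\n", (unsigned long long)hailstone_steps(27));
        return 0;
    }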

Depends what you are working on. If a failure can have mission critical implications then people pay a lot more attention. If the screen refresh gets garbled and a few icons go missing nobody really cares.

Better languages already exist, but almost no-one uses them.

Already exists: Niklaus Wirth's minimalist language Modula-2 more or less fits the bill. A small, simple language with very tightly defined module interfaces, opaque types, and low-level generic I/O primitives. It never really caught on. Logitech (now of mouse fame) sold commercial versions of the ETH Zurich M2 compiler for PCs in the mid-80s.

formatting link
formatting link

The links above have some of the history. Ada tried to be everything to all men and became bloated as a result.

No. But I was once a fan of Nassi-Shneiderman diagrams, which encapsulate program logic in a visual form. Sadly the graphical tools of the day were not really up to it. Sceptics called them nasty spiderman diagrams.

Regards, Martin Brown

Reply to
Martin Brown

Hardware can be spaghetti too, and can be buggy and nasty, if one does asynchronous design. But in proper synchronous design, controlled by state machines, immensely complex stuff just works. It's sort of ironic that in a big logic design, 100K gates and maybe 100 state machines, everything happens all at once, every clock, across the entire chip, and it works. Whereas with software, there's only one PC (program counter), only one thing happens, at a single location, at a time, and usually nobody can predict the actual paths, or write truly reliable code.

No, I am making the true observation that complex digital logic designs are usually bug-free, simple software systems have a chance of being so, and complex software systems never are.

John

Reply to
John Larkin

There are a few points here. If you take a "typical" embedded card with an FPGA and a large program, you'll find the software part has orders of magnitude more lines of programmer-written code than the FPGA. In an FPGA, the space is often taken with pre-written code (such as an embedded processor or other high-level macros), and much of the remaining space is taken by multiple copies of components. Although getting each line of the FPGA code right is harder than getting each line of the C/C++ right, there are fewer lines in total. And for various reasons (not all of which are understood), studies show that the rate of bugs in programs is roughly proportional to the number of lines, almost independent of the language and of the type of programming. Weird, but apparently true. That's part of the reason for using higher-level languages like Python (or MyHDL for FPGA design) rather than C++ - not only do programmers typically code faster, they make fewer mistakes.

Of course, an FPGA project typically involves a lot more comprehensive testing than a typical C++ project, and is typically better planned (with less feature creep), both of which are critical to getting low bug rates.

However, you have to remember that hardware (and FPGA) and software do different jobs. Sometimes there are jobs that can be implemented well in either, but that's seldom the case. And when there is, it is generally much faster to develop a software solution. What is often missing in the software side is a commitment of time and resources to proper development and testing that would mean the development took longer, but gave a more reliable result (and thus often saves money in the long term). With FPGA design, if you don't make such a commitment, your project will never work at all - thus it is more likely that a released product is nearly bug-free.

There are certainly benefits in putting some of an OS in hardware - but the hardware can never be as flexible as software. If you want an example of a device with a hardware OS, have a look at

formatting link
(it's 68k based, so you'll like it). I've seen other cpus with OS hardware - typically it is to make task switching more predictable so that hardware devices like timers and UARTs can be simulated in software.

No, a lack of commitment to proper design strategy and testing is why software developers typically start in the middle of a project and never properly finish it. The ease of making revisions and sending out updates is part of why such a commitment is never made - managers believe it is cheaper to ship prototype software and let users do the testing.

The number of bugs is roughly proportional to the lines of code. It's the debugging and testing (or lack thereof) that is often the problem, combined with structural failures due to lack of design.

Reply to
David Brown

It didn't take me any time at all to see that this has no bounds checking; what happens if someone passes X=1.42857 to it?

This little example-oid was clearly written by one of those lame programmer-wannabees who keeps sniveling "All Software Has Bugs And There's Nothing You Can Do About It!!!" as an excuse for his incompetence/laziness.

Cheers! Rich

Reply to
Rich Grise

I run Windows (on desktops) and Linux (on a desktop, a laptop, and a bunch of servers, and on a fairly high-reliability automation system I am working on), and I'd use something else if I needed an OS in my embedded systems. If something better came along, I'd use that - whatever is the right tool for the job.

The relevant saying is "keep an open mind, but not so open that your brains fall out". I'm happy to accept that doing things in hardware is often more reliable than doing things in software (I work with small embedded systems - I know when reliability is important, and I know about achieving it in practical systems). But what I am not willing to accept is claims that you alone understand the way to make all computers reliable, using a hardware design that is obviously (to me, anyway) impractical, and you offer no justification beyond repeating claims that "hardware is always more reliable than software", and therefore you can practically guarantee that the future of computing will be dominated by single task per core processors.

I believe I have been open minded - I've tried to point out the problems with your ideas, and why I think it is impractical to design such chips, and why they would be impractical for general purpose computing even if they were made. I've repeatedly asked for justification for your claims, and received none of relevance. I am more than willing to discuss these ideas more if you can justify them - but until then, I'll continue to view massively multi-core chips as useful for some specialised tasks but inappropriate for general purpose (and desktop in particular) computing.

I seem to remember previous discussions reaching similar conclusions - you had a pretty way-out theory, leading to an interesting discussion but ending with me giving up in frustration, and you calling me closed-minded. These sorts of ideas are good for making people think, but scientific minds are naturally sceptical until given solid evidence and justification.

Best regards,

David

Reply to
David Brown

Thanks. I was thinking hardware.

--
  Keith
Reply to
krw

As if you'd know your ass from a hole in the ground, Dimbulb.

The Cell processor uses a rather simple PPC processor and attached processors tuned specifically for FP performance. It's not a general purpose processor. The M$ X-Box 360 uses what is essentially three of the cores to eke out its performance.

--
  Keith
Reply to
krw

If they're "somewhere" else, they have to be un/re/loaded. That takes substantial time. You're going to have to figure out which registers to un/re/load at that point. Remember, if you want to switch virtual CPUs at any time, you're going to have to save/re/load not only all architected registers but also the renamed registers, unless you plan on quiescing/flushing the execution unit between virtual CPU switches.

More busses => more register file ports, which is worse than adding registers to the file.

A lot of things change when transistors become less expensive than the wires between them. ;-)

Pipelines lose quickly because you have to subtract (clock_jitter + setup/hold) * pipe_stages from throughput. The P-IV is a good example of this. ...about the only thing it's a good example of, other than how *not* to architect a processor.

Like many problems, start with a lookup table.

--
  Keith
Reply to
krw

In a statically-typed language, that isn't an option (unless you're seriously suggesting that X would be declared as a floating-point variable).

No, it's a simple example of something which can't easily be proven to terminate. Most static analysis tools don't even try to address non-termination.

Pointing out that some bugs can't be eliminated by static analysis isn't the same thing as suggesting that they can't be caught at all.

Having said that, most of the bugs which occur in the wild are of a kind which could easily be caught using better tools. More powerful type systems (e.g. those typically found in functional languages) would go a long way, as would design-by-contract (as in Eiffel).
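
As a rough illustration of the design-by-contract idea, using plain C asserts in place of what Eiffel builds into the language (the function and its contract here are invented for the example):

    #include <assert.h>
    #include <limits.h>

    /* Poor man's design-by-contract: pre- and postconditions checked at
       run time with plain asserts.  Eiffel makes these part of a routine's
       published interface; here they are only a convention. */
    static int average(int a, int b)
    {
        /* precondition: a + b must not overflow */
        assert((b <= 0 || a <= INT_MAX - b) &&
               (b >= 0 || a >= INT_MIN - b));

        int result = (a + b) / 2;

        /* postcondition: the result lies between the two inputs */
        assert((result >= a && result <= b) ||
               (result >= b && result <= a));
        return result;
    }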

Reply to
Nobody

X is an integer. Most other readers understood what I meant by the comment.

No, it is an example showing that you can't check your code by running it through a compiler or static checker, no matter how clever that checker may be, unless you are willing to have the compiler take weeks to compile. Problems of this sort require either that a new method be found or that all cases be explored.

Reply to
MooseFET

4G of RAM * 8 bits is a lot more bits than 100K gates (about 3*10^10 bits versus 10^5 gates). You need to keep your sizes equal if you want to make a fair comparison.
Reply to
MooseFET

Yes, it may take a clock cycle to do the register swapping. Reducing the number of registers on the bus allows those clock cycles to be at a higher frequency, so I think the advantage will outweigh the disadvantage. BTW: I'm assuming several CPUs and lots of sets of registers are on one chip.

I was thinking in terms of a not very pipelined CPU so that the switch over could happen in a few cycles. The registers currently being written would have to stay in place until the write finished. This is part of why I'm assuming a fairly simple CPU.

I don't see how you come to that conclusion.

Yes and when a multiply doesn't draw an amp.

Yes, pipelines don't solve everything, but for some operations, like 1/sqrt(), they can make a lot of sense. The process can be broken into three steps, or four if you twist things about a bit:

Step 1: Take the input number and look in a table to get an initial estimate.

Step 2:
ShouldBeOne = Y * Y * X
Y = Y * 0.5 * (3 - ShouldBeOne) * (1 - K1*(ShouldBeOne-1)^2)

Step 3: Repeat several times without the 2nd-order part.
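
A rough C sketch of those steps, in floating point rather than a real pipelined fixed-point implementation; the table size, seed values and pass count are illustrative choices, the input is assumed pre-normalised into [1, 2), and the 2nd-order K1 term is omitted:

    #include <math.h>

    #define TABLE_BITS 6
    static float seed_table[1 << TABLE_BITS];

    /* Fill the seed table once at start-up: one rough 1/sqrt estimate
       per slice of the mantissa range [1, 2). */
    static void init_seed_table(void)
    {
        for (int i = 0; i < (1 << TABLE_BITS); i++) {
            float m = 1.0f + (float)i / (1 << TABLE_BITS);
            seed_table[i] = 1.0f / sqrtf(m);
        }
    }

    /* Step 1: table lookup for the first estimate.
       Steps 2/3: Newton-Raphson, Y = Y * 0.5 * (3 - Y*Y*X);
       each pass roughly doubles the number of good bits. */
    static float rsqrt(float x)          /* assumes 1.0f <= x < 2.0f */
    {
        int idx = (int)((x - 1.0f) * (1 << TABLE_BITS));
        float y = seed_table[idx];

        for (int pass = 0; pass < 3; pass++) {
            float should_be_one = y * y * x;
            y = y * 0.5f * (3.0f - should_be_one);
        }
        return y;
    }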

My 32 bit -> 16 bit integer sqrt() for the 8051 doesn't use a look up table and yet is fairly quick about it. It uses two observations:

1 - The sum of the first N odd numbers is N^2.
2 - If you multiply X by 4, sqrt(X) doubles, and both are just shifts.
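
I don't know the actual 8051 routine, but a C sketch built on just those two observations comes out as the familiar shift-and-subtract loop:

    #include <stdint.h>

    /* 32-bit -> 16-bit integer square root using only shifts, adds and
       compares.  'root + bit' is the next "odd number" being subtracted;
       shifting 'bit' down by two each pass is the "multiply X by 4 and
       sqrt(X) doubles" observation run in reverse. */
    uint16_t isqrt32(uint32_t x)
    {
        uint32_t root = 0;
        uint32_t bit  = 1UL << 30;      /* highest power of 4 that fits */

        while (bit > x)                 /* find the largest 4^k <= x */
            bit >>= 2;

        while (bit != 0) {
            if (x >= root + bit) {
                x   -= root + bit;
                root = (root >> 1) + bit;
            } else {
                root >>= 1;
            }
            bit >>= 2;
        }
        return (uint16_t)root;          /* sqrt of a 32-bit value fits in 16 bits */
    }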
Reply to
MooseFET

MooseFET snipped-for-privacy@rahul.net posted to sci.electronics.design:

May I introduce you to a concept called cyclomatic complexity? The cyclomatic complexity of hundreds of interacting state machines is on the order of 10^5 to 10^6. A memory array of regular blocks of storage accessed by a regular decoder has a cyclomatic complexity on the order of 10 to 10^2. In the memory there is much self-similarity across several orders of magnitude in size.
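
For reference, the usual McCabe definition is taken over the control-flow graph: with E edges, N nodes and P connected components, the complexity is V(G) = E - N + 2P, which for a single routine (P = 1) works out to the number of binary decisions plus one.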

Reply to
JosephKK

I have made no such claims.

using a hardware design that is obviously (to me, anyway)

Can't help what's obvious to you

and you offer no justification beyond repeating claims that

Isn't it?

and therefore you can

I can't guarantee it. My ideas are necessarily simplistic, and would get more complex in a real system. Like, for example, my multicore chip would probably have a core+GPU or three optimized for graphics, and maybe some crypto or compression/decompression gadgets. There's no point sacrificing performance to intellectual purity.

But the trend towards multiple cores, running multiple threads each, is a steamroller. So far, it's been along the Microsoft "big OS" model, but when we get to scores of processors running hundreds of threads, wouldn't a different OS design start to make sense? The IBM Cell is certainly another direction.

Sorry, I missed that part. Why is it, or more significantly, why *will it* be impractical to design a chip that will contain, or act like it contains, a couple hundred CPU cores, all interfaced to a central cache?

Why? Because Windows, and other "big" OS's like Linux, don't support it?

It's generally accepted that a microkernel-based OS will be more reliable than a macrokernel system, because of its simplicity, but the microkernel needs too many context switches to be efficient.

formatting link

So, let's get rid of the context switches by running each process in its own real or virtual (i.e., multithreaded) CPU. Then nobody can crash the kernel. A little hardware protection for DMA operations makes even device drivers safe.

Deja vu, I guess.

"Scientific minds" are often remarkably ready to attack new ideas, rather than playing with, or contributing to them. I take a lot of business away from people like that.

And I'm no dreamer: I build stuff that works, and people buy it.

John

Reply to
John Larkin

So *that's* why Windows is so reliable! It's a single state machine that traverses a simple linear array of self-similar memory.

Thanks.

John

Reply to
John Larkin

The problem is that complex software systems are orders of magnitude more complex than the hardware upon which they run. And the pain in checking all the interactions between subsystems scales with N!

(10N)! >> N! for all N >= 1

It should not be a surprise that complexity (especially when combined with bad planning and requirements creep) can kill projects stone dead. Or have them released on an unsuspecting world in a state of total disarray.

There are design methods and languages to support them that could deliver more reliable software by making it harder to write buggy code. The trouble is that too few programmers bother to learn how to do it.

Ship it and be damned is the business paradigm today (and then get paid again to put it right).

Regards, Martin Brown

Reply to
Martin Brown

So what exactly is the definition? It seems to me that just because the memory is a repeated array in physical space, it needn't be in logical space.

Reply to
MooseFET
