How to develop a random number generation device

Nobody snipped-for-privacy@nowhere.com posted to sci.electronics.design:

I went and checked the Wikipedia definition; it is basically correct. The links to the extended explanatory material seem to be good as well. Your explanation does not match.

Use your own salt.

Reply to
JosephKK

Another of Dimbulb's socks escapes from his mommy's hamper.

--
  Keith
Reply to
krw

The OS is a necessary, but insufficient, part of the solution. The API is certainly part of the solution. Compilers too. Saying that the "OS can't" do something lets it completely off the hook. Windows, or more accurately M$, *is* the problem.

--
  Keith
Reply to
krw

Yes, the OS is part of the problem/solution, but it needs hardware help. Actually, hardware/software combinations have existed at least since the late '70s. One I'm personally familiar with is the Motorola MC6809 (what a sweet chip!) running Microware's OS-9.

The 6809 had a software interrupt that could be programmed (as could all the other interrupts) to switch memory maps. A non-privileged user running under OS-9 had no access at all to the system space; the user could do any stupid thing imaginable and affect only himself. To get to system resources he had to load a register with a code and issue a SWI.
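Just to illustrate the shape of that mechanism, here is a hypothetical C sketch (invented request codes and function names, not the real OS-9 API): user code traps in with a request code, the hardware switches to the system map, and a dispatcher in system space decides what, if anything, to do. User code never touches kernel data directly.

/* Hypothetical sketch of a SWI-style system-call dispatcher, in C for
 * readability.  Request codes and function names are invented.  The point is
 * the shape: the trap handler runs in the system memory map. */
#include <stdint.h>
#include <stdio.h>

enum { SYS_READ = 0x01, SYS_WRITE = 0x02, SYS_EXIT = 0x06 };  /* illustrative codes */

typedef int (*syscall_fn)(void *args);

static int sys_read(void *args)  { (void)args; /* kernel-side read  */ return 0; }
static int sys_write(void *args) { (void)args; /* kernel-side write */ return 0; }
static int sys_exit(void *args)  { (void)args; /* terminate caller  */ return 0; }

static const syscall_fn dispatch_table[256] = {
    [SYS_READ]  = sys_read,
    [SYS_WRITE] = sys_write,
    [SYS_EXIT]  = sys_exit,
};

/* Entered via the software interrupt: by the time we get here the hardware
 * has switched to the system map, so the user's "stupid thing" has only been
 * able to hurt the user. */
static int swi_handler(uint8_t code, void *user_args)
{
    syscall_fn fn = dispatch_table[code];
    if (!fn)
        return -1;               /* unknown request: an error code, not a crash */
    return fn(user_args);
}

int main(void)
{
    /* A real user program would reach the handler only through the SWI trap. */
    printf("SYS_WRITE -> %d\n", swi_handler(SYS_WRITE, NULL));
    printf("bad code  -> %d\n", swi_handler(0x7F, NULL));
    return 0;
}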

I believe a few other microprocessors had similar features (didn't the 68K?) -- I'd be very surprised if they didn't have corresponding OS's.

And, notwithstanding the empty-headed MS worshiper who keeps calling more knowledgeable people idiots, Microsoft still doesn't make use of even what Intel provides.

--
John Larkin wrote:
> ...Microsoft's approach to multicore is incompatible with this
> architecture. In a few years we'll have, say, 1024 processors on a
> chip, and something new will be required to manage them. It will be a
> thousand times simpler and more reliable than Windows.

But, John, we already have it.  Linux is running right now on hundreds of processors -- I don't remember offhand how many cores per chip, but it's one of the later PowerPC processors.  I think it's at Livermore.

...Yes, here it is.  The whole list at Livermore, with operating systems and hardware summaries.

http://www.llnl.gov/computing/tutorials/lc_resources/

John perry
Reply to
John E. Perry


I'm not sure about this one - it is sometimes a problem with Windows that it is so dependent on filename extensions rather than actually looking at what is in the file (compare with *nix systems, which traditionally do not use filename extensions in this way). It is precisely because the dependence on filename extensions is so ingrained in the Windows system and in users' understanding that it is so easy to get people to run malware with names like "jokes.txt.exe". The default setting of "hide file extensions for known types" is one of MS's greatest gifts to malware writers.
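For comparison, here is a rough C sketch of the *nix-style alternative (the helper name is invented, and only two well-known magic numbers are checked): look at the first few bytes of the file instead of the name, the way the Unix `file` utility does. A "jokes.txt.exe" shows up as an executable no matter what the extension claims.

/* Sketch: decide what a file is by looking inside it instead of trusting
 * the name. */
#include <stdio.h>
#include <string.h>

static const char *sniff(const char *path)
{
    unsigned char magic[4] = {0};
    FILE *f = fopen(path, "rb");
    if (!f)
        return "unreadable";
    size_t n = fread(magic, 1, sizeof magic, f);
    fclose(f);

    if (n >= 2 && magic[0] == 'M' && magic[1] == 'Z')
        return "MS-DOS/Windows executable";          /* PE files begin with an MZ stub */
    if (n >= 4 && memcmp(magic, "\x7f" "ELF", 4) == 0)
        return "ELF executable";
    return "no obvious executable signature";
}

int main(int argc, char **argv)
{
    /* "jokes.txt.exe" is reported as an executable whatever its name says. */
    for (int i = 1; i < argc; i++)
        printf("%s: %s\n", argv[i], sniff(argv[i]));
    return 0;
}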

It's also noticeable that the early versions of NT, up through NT 3.51 (the 3.x numbering itself a brilliant marketing ploy), were more solid than any version since - because the GUI and non-essential device drivers (like the graphics drivers) were kept out of kernel mode. But MS couldn't make such a system work fast enough for good graphics, so with NT 4.0 the graphics system was moved into the kernel, like in Win95, with a corresponding drop in stability.

Reply to
David Brown

This is essential for protection against other types of malware or attacks, such as a trojan. If a system has proper separation of access levels for programs, then a rogue program (either intentionally rogue, as a trojan, or accidentally, as a normal program suffering a buffer overflow or just a good old-fashioned bug) is limited in what it can do. Thus on a *nix system, if Apache is compromised, it cannot be used to corrupt the rest of the system: it runs as user "nobody" (or something similar - details vary) and cannot write to many areas of the file system. On Windows, user access control is so badly done that most people run as "administrator", so a pathological process has full control.
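The pattern described above looks roughly like this in C (a sketch only; the "nobody" account is illustrative, and the process is assumed to start as root for the privileged setup step): do the privileged work first, then drop to an unprivileged uid/gid for good before handling untrusted input.

/* Sketch of the privilege-separation pattern: after the drop, a compromise
 * can only do what the unprivileged account can do. */
#include <sys/types.h>
#include <pwd.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void drop_privileges(const char *account)
{
    struct passwd *pw = getpwnam(account);
    if (!pw) {
        fprintf(stderr, "no such user: %s\n", account);
        exit(1);
    }
    /* Order matters: change groups while we still have the right to do so,
     * then give up the uid. */
    if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
        perror("failed to drop privileges");
        exit(1);
    }
    if (setuid(0) == 0) {        /* paranoia: the drop must be irreversible */
        fprintf(stderr, "privilege drop did not stick\n");
        exit(1);
    }
}

int main(void)
{
    /* ... bind the privileged port, open log files, etc., while still root ... */
    drop_privileges("nobody");
    /* From here on, a buffer overflow, bug, or trojan in this process can
     * only do what "nobody" is allowed to do. */
    printf("now running as uid %d\n", (int)getuid());
    return 0;
}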

It should do that too - although protection against bad processes is more important (Windows should learn to walk before trying to run). A key point of protection against executing data and stack segments is to help stop rogue user-level processes from destroying that user's data.
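A small demonstration of what no-execute data and stack pages buy you, assuming a Linux box with NX support (the exact behaviour is hardware- and OS-dependent, so treat this as a sketch): map a page read/write but not executable, pretend an attacker has written code into it, and watch the jump into it fault instead of run.

/* Sketch: a data page without PROT_EXEC cannot be executed on an NX system. */
#define _DEFAULT_SOURCE
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static void blocked(int sig)
{
    printf("execution of a data page was blocked (signal %d)\n", sig);
    exit(0);
}

int main(void)
{
    signal(SIGSEGV, blocked);

    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    /* Writable and readable, but no PROT_EXEC - like a stack or heap page. */
    unsigned char *data = mmap(NULL, page, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (data == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    memset(data, 0xC3, page);               /* "injected" bytes; content doesn't matter */

    void (*fn)(void) = (void (*)(void))data;
    fn();                                   /* the instruction fetch itself faults */

    printf("NX was not enforced here: the data page executed\n");
    return 1;
}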

Reply to
David Brown

On Sep 15, 10:59 am, John Larkin wrote: [...]

I'm not sure that I would call controlling file systems part of the kernel's job. I would step that out one layer: it really would be a task that is given its time and access privileges by the kernel. By splitting the two, you would make it easier to design and debug both. Once you had one file-system server going, you could run a second one, still being debugged, controlling different media.

This was common practice in the 1970's, and even
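As a rough illustration of that split (hypothetical message layout and names, not taken from any real OS): the file system becomes an ordinary server task that receives requests over a message channel, and a second, experimental server could run beside it against different media.

/* Sketch: a user-level file-system server loop driven by messages. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

enum { FS_OPEN = 1, FS_READ = 2, FS_CLOSE = 3 };

struct fs_request {                /* what a client task would send */
    uint32_t op;
    uint32_t handle;
    char     path[64];
};

struct fs_reply {                  /* what the server task sends back */
    int32_t  status;
    uint32_t handle;
};

/* One server instance per medium; a second, experimental server could run
 * beside this one, pointed at different media. */
static void fs_server(int request_fd, int reply_fd)
{
    struct fs_request req;
    while (read(request_fd, &req, sizeof req) == (ssize_t)sizeof req) {
        struct fs_reply rep = { .status = 0, .handle = req.handle };
        switch (req.op) {
        case FS_OPEN:  rep.handle = 42; break;     /* look up req.path on this medium */
        case FS_READ:  /* ... fetch blocks ... */  break;
        case FS_CLOSE: /* ... release state ... */ break;
        default:       rep.status = -1;            break;
        }
        if (write(reply_fd, &rep, sizeof rep) != (ssize_t)sizeof rep)
            break;
    }
}

int main(void)
{
    int req_pipe[2], rep_pipe[2];
    if (pipe(req_pipe) || pipe(rep_pipe))
        return 1;

    /* Pretend to be a client for one round trip. */
    struct fs_request req = { .op = FS_OPEN };
    strncpy(req.path, "/media0/readme.txt", sizeof req.path - 1);
    if (write(req_pipe[1], &req, sizeof req) != (ssize_t)sizeof req)
        return 1;
    close(req_pipe[1]);                      /* server's read() then stops */

    fs_server(req_pipe[0], rep_pipe[1]);

    struct fs_reply rep;
    if (read(rep_pipe[0], &rep, sizeof rep) == (ssize_t)sizeof rep)
        printf("open -> status %d, handle %u\n", rep.status, rep.handle);
    return 0;
}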

Reply to
MooseFET

Yes, we could call that a pseudo-Harvard because it is running on the same bus and perhaps in the same chips.

Even the Z80 could be made to work as a pseudo-Harvard machine. You could tell the difference between an instruction fetch and a data read, and because of this, just a little extra hardware let you connect more total memory than the 64K limit.
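A little C model of what that glue logic does, assuming the usual Z80 behaviour of asserting /M1 during an opcode fetch (treat the details as a sketch): route fetches to one bank and data accesses to another, so the same 16-bit address reaches more than 64K of total storage.

/* Sketch: external decode that banks memory on the /M1 (opcode fetch) line. */
#include <stdint.h>
#include <stdio.h>

static uint8_t code_bank[0x10000];   /* selected when /M1 is active           */
static uint8_t data_bank[0x10000];   /* selected for ordinary reads and writes */

/* What the glue logic does on every read cycle. */
static uint8_t bus_read(uint16_t addr, int m1_active)
{
    return m1_active ? code_bank[addr] : data_bank[addr];
}

int main(void)
{
    code_bank[0x0100] = 0x3E;   /* an opcode lives here...                   */
    data_bank[0x0100] = 0x55;   /* ...and unrelated data at the same address */

    printf("fetch @0100 -> %02X\n", bus_read(0x0100, 1));
    printf("read  @0100 -> %02X\n", bus_read(0x0100, 0));
    return 0;
}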

Reply to
MooseFET

I think Bill Gates's dream was of a world where everybody's friendly, there's no aggression or hostility, sort of like a Disneyland of computing, where everybody shares all of their data with everyone, anyone can execute anything on anybody's computer anywhere - sort of like the Garden of Eden with interconnected computers.

Reality seems to have not turned out that way - there really are bad people out there! =:-O

Thanks, Rich

Reply to
Rich Grise

On Sep 15, 11:09 am, John Larkin wrote: [....]

I think that the number of virtual cores will grow faster than the number of real cores. With extra register banks and a bit of clever design, a single ALU can look like two slightly slower ones.

I expect to see multicore machines with fewer actual floating-point ALUs than integer ALUs.

Reply to
MooseFET

That's what I meant; the file system, and user GUIs and such, are just more jobs, not part of the kernel at all. VMS worked that way: file systems were loadable tasks, not part of the OS proper. RSTS had multiple "runtime systems", essentially API sets, which were what the user tasks saw; some emulated other OSes.

Drivers are an intermediate case. They can be dynamically loadable, but must have hardware access and, directly or via DMA, can access all of memory.

John

Reply to
John Larkin

Except that he, and Ballmer, were the most vicious and predatory SOBs in computer industry history.

John

Reply to
John Larkin

The answer is "Yes, with proper hardware support".

Why is it that for three days now, you've been resisting accepting the right answer?

Windowholic? ;-)

Thanks, Rich

Reply to
Rich Grise

Yup. Low-horsepower tasks can just be a thread on a multithread core, and many little tasks don't need a dedicated floating-point unit.

My point/fantasy is that OS design should change radically if many, many real or virtual CPUs are available. One CPU would be the manager, and every task, process, or driver could have its own, totally confined and protected, CPU, and there would be no context switching ever, and few interrupts in fact.
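You can already approximate a piece of that today: pin a task to a core of its own so it never migrates and is never switched away from that core. A Linux-specific sketch (pthread_setaffinity_np and friends; the CPU number is arbitrary, and truly reserving the core for this task alone would need extra configuration such as cpusets, which is outside this sketch):

/* Sketch: confine a worker thread to one CPU for its whole life. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    long cpu = (long)arg;
    /* ... the task's whole life happens here, confined to its own core ... */
    printf("worker confined to CPU %ld (currently on %d)\n", cpu, sched_getcpu());
    return NULL;
}

int main(void)
{
    long cpu = 1;                 /* the core dedicated to this task */
    pthread_t t;
    pthread_attr_t attr;
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(cpu, &set);

    pthread_attr_init(&attr);
    pthread_attr_setaffinity_np(&attr, sizeof set, &set);

    if (pthread_create(&t, &attr, worker, (void *)cpu) != 0) {
        perror("pthread_create");
        return 1;
    }
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}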

John

Reply to
John Larkin

Sounds sort of like Sun's Niagara chips, which have (IIRC) 8 cores, each with 4 threads, but only a few floating-point units. For things like web serving, it's ideal.

That's not going to work for Linux, anyway - there is a utility thread spawned per CPU at the moment (work is underway to avoid this, because it is a bit of a pain when you have thousands of CPUs in one box).

However, there is no point in having a CPU (or even a virtual CPU) dedicated to each task. Many sorts of tasks spend a lot of time sleeping while waiting for other events, and a CPU in this state is a waste of resources. Multiple CPUs are a good thing, and faster context switching would be a good thing too (multiple virtual CPUs per real CPU is part of this), but there is little to be gained by going overboard. There comes a point when the die space used for all these extra CPUs would be better spent on cache or other sorts of buffers.

Reply to
David Brown

Patch Tuesday is Microsoft's practice of accumulating a bunch of patches and releasing them on the 2nd Tuesday of each month.


I certainly don't update that often. I like to let the patches mellow for a month or so, because Microsoft's patches are so stupid they often break more than they fix.

"The second problem affected large deployments of Windows, such as can be found at large companies. Such large deployments found it increasingly difficult to make sure all systems across the company were all up to date. The problem was made worse by the fact that, occasionally, a patch issued by Microsoft would break existing functionality, and would have to be uninstalled."

Damn, you actually *don't* know much about this stuff.

John

Reply to
John Larkin

Only if you think of a CPU as a valuable resource. As silicon shrinks, a CPU becomes a minor bit of real estate. It makes sense to use it when there's something to do, and put it to sleep when there's not. Lots of power gets saved by not doing context switches.

My point is that large numbers of CPU cores *will* become common and cheap, and we need a new type of OS to take advantage of this new reality. Done right, it could be simple and astoundingly secure and reliable.

I'd be happy to waste a little silicon if I could have an OS that doesn't crash and that doesn't go to sleep for seconds at a time for no obvious reason.

John

Reply to
John Larkin

As long as those computers have paid their Gates tax, it's his dream world. He's done everything in his power to make his dream come true, too.

Windows is close to his Disneyland.

Particularly Gates.

--
  Keith
Reply to
krw

Not register banks, just a couple of bits in the rename register files.

I would think that would be more of a mess than the small amount of extra hardware for an FPU for each CPU. Asymmetries can get messy fast.

--
  Keith
Reply to
krw

But the hardware is there. It's software that sucks eggs.

The same is true of any modern processor. Privileged ops can't be executed from user space. However, this causes some performance problems, so holes are drilled in the firewall (and Windows leaks out).

PowerPC has a very distinct protected mode. Some of the later ones also have a hypervisor mode to further assist in virtualization. The /360 was completely self-virtualizable (and this has been done several levels deep).
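A quick user-space demonstration of that firewall, assuming x86-64 Linux and GCC inline assembly (sketch only): try a privileged instruction from user mode and the hardware traps it, which the kernel hands back to the process as a signal.

/* Sketch: HLT is privileged, so running it in user mode raises a
 * general-protection fault, delivered to the process as SIGSEGV. */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

static void trapped(int sig)
{
    printf("privileged instruction trapped in user space (signal %d)\n", sig);
    exit(0);
}

int main(void)
{
    signal(SIGSEGV, trapped);
    __asm__ volatile ("hlt");   /* only ring 0 may halt the processor */
    printf("unreachable: hlt ran in user mode\n");
    return 1;
}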

Why should they?

It? Which "it"?

--
  Keith
Reply to
krw
