
If you keep some virtual addressing, you can make the code in each core not need to know where its address space actually lives in the grand scheme of things. Instead of an address map per task, you'd have one per core. This would only need to apply to the portion of memory that is shared. The private memory would not be virtual.

When you do a fork(), a big bunch of stuff needs to fly from one core's private space to the new one.

Reply to
MooseFET

On a sunny day (Fri, 11 Jun 2010 07:01:41 -0700) it happened John Larkin wrote in :

Are you referring to 'virtual memory'?

But then you are back to the stone age of MSDOS. It is virtual memory that makes 'unlimited' (say, 'limited by disk space only') computing possible. That is the reason I moved from MSDOS (or DRDOS rather) to Linux!

Reply to
Jan Panteltje

Virtual is the prime cause of code bloat. And it lets people ignore elegant design of data structures in favor of mindless thrashing.

When IBM announced virtual for the S/360, they predicted paging ratios of 200:1. A survey done a few years later found a typical value of 1.2.

John

Reply to
John Larkin

Windows will always crash, and will always be slow, no matter how much technology is available. PDP-11 and VAX timeshare systems ran, with a good fraction of hostile users, for months between power fails. We have a Linux server that's been up for something like 9 months now.

If you think of multicore as a way to get speed through parallelism, it will always be tough. If you are willing to waste flipflops to make a system brutally reliable, multicore is the way to go.

I don't want more speed. I want reliability.

John

Reply to
John Larkin

Aye! And stability!

Sick and tired of upgrading, Jeroen Belleman

Reply to
Jeroen Belleman

In a virtual memory system, the disk space (exe files and page file) _is_ the actual memory.

Think of the main RAM as just one level of cache between the disk and the internal CPU caches.

Reply to
Paul Keinanen

On a sunny day (Fri, 11 Jun 2010 17:29:58 +0200) it happened Jeroen Belleman wrote in :

Just run Linux. I am still running the first grml on the server. The only reason to upgrade packages is, for example, a new flash player or browser that 'closes security holes'. Not that anyone ever made it into this system that I know about. I have upgraded the kernel a few times because of new hardware and drivers. But if you use the system 'as is' then there is no reason to ever change software. And Linux uses all those things J.L. is so against, while he confirms his Linux PC has an uptime of 9 months. So I fail to see his arguments.

Reply to
Jan Panteltje

Silicon realities are going to give us chips with lots of cores, maybe hundreds or thousands. We might reconsider OS design when cores are cheap and plentiful.

It will certainly change embedded systems when we have lots of cores. No RTOS as currently defined.

John

Reply to
John Larkin

But I do! Obligatory upgrades are the rule of the house here. Every upgrade is for security reasons, so we are told. Every login dialog here has a phrase saying "Logging in means you agree with the computing rules", which spell this out.

I'm a heretic. I contest. But I'm only me.

Granted, these rules have been drawn up by users of Micro$oft's products, for whom this is the stark truth.

Jeroen Belleman

Reply to
Jeroen Belleman

On a sunny day (Fri, 11 Jun 2010 12:25:45 -0700) it happened John Larkin wrote in :

OK :-) I guess we can argue for years about this. Let's just wait and see. I wait for the 300 GHz single core :-) x86 compatible, so I do not have to change anything :-)

Reply to
Jan Panteltje

It has been a while since I have seen a cogent thought expressed on the subject.

Reply to
JosephKK

This would result in fork() being used only by fairly lightweight manager processes.

Reply to
JosephKK

So you re-invented parts of merge sort, almost cool.

Reply to
JosephKK

[...]

For some tasks, we need both. A big part of gaining speed through multiple cores will be in picking what tasks can be spread out over many. Another issue is to make sure that code that lives on the far side of a cache knows when to do a bus locked operation and when it is safe not to.

If you are doing a non-blocking table of pointers to a bunch of structures, you can malloc a chunk of ram for the structure and assume that nobody else will write into it while you make the structure. When you go to add the structure to the shared list of pointers, however, you had better use a "compare and swap" for that update to make sure that nobody else does the write to the same cell.
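
As a minimal sketch of that publish-with-compare-and-swap idea (my illustration, not code from the thread, using C11 atomics; the struct node, table[] and publish_node names are made up):

#include <stdatomic.h>
#include <stdlib.h>

#define TABLE_SIZE 256

struct node {
    int key;
    int value;
};

/* Shared table of pointers; empty slots hold NULL. */
static _Atomic(struct node *) table[TABLE_SIZE];

/* Build the node privately, then publish it with CAS.  Returns the slot
 * index on success, or -1 if the table is full. */
int publish_node(int key, int value)
{
    struct node *n = malloc(sizeof *n);
    if (!n)
        return -1;

    /* Private phase: nobody else can see this memory yet, so plain writes
     * are fine. */
    n->key = key;
    n->value = value;

    /* Publication phase: claim an empty slot.  The CAS fails if another
     * core wrote the same cell first, in which case we try the next one. */
    for (int i = 0; i < TABLE_SIZE; i++) {
        struct node *expected = NULL;
        if (atomic_compare_exchange_strong(&table[i], &expected, n))
            return i;
    }
    free(n);   /* table full; nothing was published */
    return -1;
}

The expensive bus-locked operation happens only once per structure, at publication; everything before that is ordinary private writes.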

I can also imagine a serious increase in speed in the running of Spice. Since most subcircuits only have a few ports, a core could do the work of all the internal nodes and then interact with the others to exchange data.

Reply to
MooseFET

There is also clone(), which needs the two tasks to continue to share the same space. A fork() doesn't get the same data space, and code space should never, ever be written at this level, so I think it works out for the best.
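
A minimal sketch of that difference on Linux (my illustration; counter and child_fn are made-up names): the fork()ed child increments its own copy-on-write copy of the data, while the clone(CLONE_VM) child increments the shared one.

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static int counter = 0;

/* Runs in the clone()d task; with CLONE_VM this touches the parent's memory. */
static int child_fn(void *arg)
{
    (void)arg;
    counter++;
    return 0;
}

int main(void)
{
    /* fork(): the child gets its own (copy-on-write) data space, so its
     * increment never shows up in the parent. */
    pid_t pid = fork();
    if (pid == 0) {
        counter++;
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("after fork():  counter = %d\n", counter);   /* prints 0 */

    /* clone(CLONE_VM): the new task keeps sharing the same address space,
     * so the parent sees the increment. */
    size_t stack_size = 64 * 1024;
    char *stack = malloc(stack_size);
    pid = clone(child_fn, stack + stack_size, CLONE_VM | SIGCHLD, NULL);
    waitpid(pid, NULL, 0);
    printf("after clone(): counter = %d\n", counter);   /* prints 1 */

    free(stack);
    return 0;
}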

Reply to
MooseFET

No, it wasn't the merge sort. It used the method as part of how it worked. The main thing it did was do a low amount of disk I/O while keeping the number of file handles low and not making more than 4 files. The latter requirement was because the directory system got very slow as the number of files created grew.
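
A hypothetical sketch of working under that kind of constraint (my illustration, not the original program): a balanced two-way external merge sort over a binary file of ints that never creates more than 4 scratch files, with each run prefixed by its length so the merge passes know where runs end.

#include <stdio.h>
#include <stdlib.h>

#define CHUNK 4096            /* ints sorted in memory per initial run */

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Merge a run of na ints from a with a run of nb ints from b into out. */
static void merge_runs(FILE *a, long na, FILE *b, long nb, FILE *out)
{
    long total = na + nb;
    int va, vb;
    fwrite(&total, sizeof total, 1, out);
    int have_a = na > 0 && fread(&va, sizeof va, 1, a) == 1;
    int have_b = nb > 0 && fread(&vb, sizeof vb, 1, b) == 1;
    while (have_a || have_b) {
        if (have_a && (!have_b || va <= vb)) {
            fwrite(&va, sizeof va, 1, out);
            have_a = --na > 0 && fread(&va, sizeof va, 1, a) == 1;
        } else {
            fwrite(&vb, sizeof vb, 1, out);
            have_b = --nb > 0 && fread(&vb, sizeof vb, 1, b) == 1;
        }
    }
}

int main(int argc, char **argv)
{
    if (argc != 3) { fprintf(stderr, "usage: %s in.bin out.bin\n", argv[0]); return 1; }
    const char *name[4] = { "run0.tmp", "run1.tmp", "run2.tmp", "run3.tmp" };
    FILE *f[4];
    for (int i = 0; i < 4; i++) f[i] = fopen(name[i], "w+b");

    /* Distribution pass: sort CHUNK-sized pieces in memory and deal the
     * resulting runs alternately onto scratch files 0 and 1. */
    FILE *in = fopen(argv[1], "rb");
    if (!in) { perror(argv[1]); return 1; }
    int *buf = malloc(CHUNK * sizeof *buf);
    long runs = 0;
    size_t n;
    while ((n = fread(buf, sizeof *buf, CHUNK, in)) > 0) {
        qsort(buf, n, sizeof *buf, cmp_int);
        long len = (long)n;
        fwrite(&len, sizeof len, 1, f[runs % 2]);
        fwrite(buf, sizeof *buf, n, f[runs % 2]);
        runs++;
    }
    fclose(in);
    free(buf);

    /* Merge passes: pair up runs from files (src, src+1) and deal the merged
     * runs onto the other pair, until a single run is left. */
    int src = 0;
    while (runs > 1) {
        int dst = 2 - src;
        rewind(f[src]); rewind(f[src + 1]);
        f[dst]     = freopen(name[dst],     "w+b", f[dst]);      /* truncate */
        f[dst + 1] = freopen(name[dst + 1], "w+b", f[dst + 1]);
        long out_runs = 0;
        for (;;) {
            long na = 0, nb = 0;
            int got_a = fread(&na, sizeof na, 1, f[src]) == 1;
            int got_b = fread(&nb, sizeof nb, 1, f[src + 1]) == 1;
            if (!got_a && !got_b) break;
            merge_runs(f[src], na, f[src + 1], nb, f[dst + out_runs % 2]);
            out_runs++;
        }
        runs = out_runs;
        src = dst;
    }

    /* Copy the single remaining run (minus its length header) to the output. */
    FILE *out = fopen(argv[2], "wb");
    rewind(f[src]);
    long len = 0;
    if (runs == 1 && fread(&len, sizeof len, 1, f[src]) == 1) {
        int v;
        while (len-- > 0 && fread(&v, sizeof v, 1, f[src]) == 1)
            fwrite(&v, sizeof v, 1, out);
    }
    fclose(out);
    for (int i = 0; i < 4; i++) { fclose(f[i]); remove(name[i]); }
    return 0;
}

At most two files are read and two written in any pass, and the number of passes grows only as log2 of the number of initial runs, which keeps both the handle count and the disk traffic low.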

Reply to
MooseFET

Replace "file handle" by "1/2 inch magnetic tape drive" and the description is quite similar to the early days of data processing. Typically magnetic tape drives were the only large mass storage devices and the core memory was so small that you could only afford one record buffer (typically the 80 column punched card image) for each tape drive. Algorithms were fine tuned based on the number of tape drives you had and how many operators were available to load new

2400 feet tape reals to the drives :-).
Reply to
Paul Keinanen

I have a tape drive that dates from that era. 1200 BPI used to be considered a lot of data. The sort program is from well after that era. In some ways, the reel-to-reel tape related programs had to be a lot simpler and smaller. With the disk one, the cost of a "rewind" was modest, which meant you would consider doing it. On the tape one, the logic was mostly about which tape to take the next record from.

Way back in the tape drive days, seismic systems wrote their data onto tape. They servoed the speed of the drive to match the ADC output, so that only a very little buffering was needed. For marine surveys, the length of time a ship could stay at sea was set by the number of tapes the ship could carry; a crew of perhaps 3 people would spend all day and night changing the tapes. The machine would cycle between 2 or 3 drives.

When they got back to land, the tapes would be taken to the processing center where the giant computers lived.

Reply to
MooseFET

I have never used 1200 BPI tapes, but at least the 800 BPI systems were extremely picky about keeping the read/write heads perpendicular to the tape motion.

With multiple writing drives, you really had to keep the heads aligned the same way on all drives, unless you wanted to realign the read head after each tape :-)

At 1600 BPI, each tape channel was individually clocked, so there was not so much need to keep the R/W head aligned.

Reply to
Paul Keinanen

There were lots of odd sorts of drives in the early days:

There were also the drives called "incremental" drives where the tape was moved by a stepper. Each bit group was recorded on command.

There are "analog" drives that where basically audio tape recorders. Some of these were FM where the signal was modulated onto a carrier. Some used other modulations.

There were tapes with strange numbers of tracks like 13 and 25.

Reply to
MooseFET
