How long does it take to fill up an array prior to sorting?

Most sorting algorithms I've noticed seem to have an interface somewhat like this:

void someAlgorithm(elemType[] elements);

So to use this algorithm an application needs to fill the {elements} array, call {someAlgorithm()}, and then read out the (sorted) elements of {elements}. For an {elements} array that contains n {elemType}s, how long does it take to fill it? Is the most efficient way to do that a loop like this:

elemType[] elems = new elemType[n];
int ix;
for (ix = 0; ix < n; ix++)
{
    elems[ix] = produceElem();
}
someAlgorithm(elems);

and then is the most efficient way to read the results out of the array something like this:

for (ix = 0; ix < n; ix++)
{
    consumeElem(elems[ix]);
}

This would amount to, for each element, incrementing {ix}, comparing it to {n}, and then actually sending the produced {elemType} to the right position in the array (for filling the array), and then, for each element, incrementing {ix}, comparing it to {n}, and then reading the {elemType} in the corresponding position in the array (for emptying the array). So n + n + n = 3n instructions for each stage, and each instruction requires at least one clock cycle to fetch the instruction, at least one clock cycle to interpret the instruction, and at least one clock cycle to execute the instruction. So that's at least 9n clock cycles to fill the array and at least 9n clock cycles to empty it after calling {someAlgorithm()}.

I've heard that some scientists have found ways to quickly move large blocks of data from one place on disk to another. Does that speed this process up, make it less than the 9n clock cycles I've postulated above?

I've got my eye on the sorting of large amounts of data that's actually done in industry. Do organizations that regularly sort large amounts of data typically take 9n clock cycles to fill up the array, or is there a quicker way to do it that's in general use?

Reply to
Kevin Simonson

Can I assume, due to your ".edu" address, that you are a student?

You are mixing up a large number of things here, and making a range of completely unwarranted and wrong assumptions. Your descriptions of the problems you are considering are so vague as to be pointless - you are asking how long is a piece of string. And you are posting to a group that is almost totally and utterly unrelated to the question.

So the obvious response here would be to recommend you go to your classes and listen, rather than sleep at the back, that you read the books and do the work - then you'll learn. But it is also possible that you are working hard and are simply coming up with questions and ideas that are way ahead of what you have learned. I'd still recommend learning to walk before worrying about how to run. However, I can give you a couple of hints.

Timings in processors cannot be counted in cycles like this unless you are talking about very simple and limited processors, or very specialised ones. There are endless complications of superscalar scheduling, memory delays, and other issues that mean the throughput can easily be an order of magnitude slower (or sometimes faster) than such a simple cycle count would suggest.

Timings of loop handling are generally irrelevant (by orders of magnitude) compared to the time to create, copy or handle the elements.

Filling or handling an array of data scales as O(n) in the size of the data. Sorting, for the most part, scales as O(n log n). For big arrays, that means the sorting is the cost, not reading or writing the array. (In practice, memory speeds, cache usage, disk/IO access, etc., are hugely important - but they too affect the sorting part more than the copying part.)
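A minimal sketch of that comparison in Java (the array size, the use of long elements, and java.util.Arrays.sort are my illustrative assumptions, not anything from the post above): time the fill loop and the sort separately and see which one dominates.

import java.util.Arrays;
import java.util.Random;

public class FillVersusSort {
    public static void main(String[] args) {
        int n = 10_000_000;               // illustrative size
        long[] elems = new long[n];
        Random rng = new Random(42);

        // Fill the array: a single O(n) pass.
        long t0 = System.nanoTime();
        for (int ix = 0; ix < n; ix++) {
            elems[ix] = rng.nextLong();
        }
        long t1 = System.nanoTime();

        // Sort it: O(n log n) comparisons plus all the memory traffic.
        Arrays.sort(elems);
        long t2 = System.nanoTime();

        System.out.printf("fill: %d ms, sort: %d ms%n",
                (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
    }
}

(This is a crude measurement - no warm-up or repetition - but it is still far more informative than counting hypothetical clock cycles.)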

Reply to
David Brown

David Brown: "Can I assume, due to your '.edu' address, that you are a student?"

Close. I've been admitted into the Senior Citizen Audit Program at Utah Valley University. I turn 62 in September, so I will start classes this Fall. I could just ask my questions in class, but my curiosity is killing me, and it would be nice if I could get it resolved as soon as possible.

My background is actually in Computer Science; I got a BS degree in 1987 and an MS degree in 1994, both at the University of Washington. At this late date my interests are turning to Electrical Engineering; hence my attempt to take classes at UVU.

"And you are posting to a group that is almost totally and utterly unrelate d to the question."

Can you tell me which group I should be posting this to?

"Filling or handling an array of data scales as O(n) in the size of the dat a. Sorting, for the most part, scales as O(n log n)."

That's true for single-threaded sorters. It turns out that massively parallel quicksort can get it down to O((log n)^2). But quicksort still requires the array to be totally filled before the algorithm starts, doesn't it? So it seems the amount of time it takes to enter the elements into the array becomes relevant.
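As a minimal sketch of that fill-then-sort pattern (this uses the JDK's java.util.Arrays.parallelSort, a fork/join parallel merge sort rather than a parallel quicksort, and the array size is an illustrative assumption of mine):

import java.util.Arrays;
import java.util.concurrent.ThreadLocalRandom;

public class ParallelSortDemo {
    public static void main(String[] args) {
        int n = 50_000_000;                  // illustrative size
        int[] elems = new int[n];

        // The array still has to be completely filled before sorting starts.
        for (int ix = 0; ix < n; ix++) {
            elems[ix] = ThreadLocalRandom.current().nextInt();
        }

        // Sorts in parallel across the available cores.
        Arrays.parallelSort(elems);
    }
}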

Reply to
Kevin Simonson

You are never too old to be a student! What sort of courses are you doing that raise questions like these?

If you have a background in computer science, then surely you know already that counting cycles like that is completely meaningless in most situations?

Have you any experience with Usenet? You are using google's interface - that's okay for searching archives, but an extremely poor interface for posting or reading regularly. Amongst other things, it fails to follow or encourage Usenet standards for posting, quoting and attributing. And these days it suffers more from google's idiotic "throw out the baby with the bathwater" attitude to censorship - they let people use their own interface to post spam, conspiracy theories, abuse, etc., and then they block the entire group because it has been spreading such posts. (I'm not anti-google at all - merely anti-stupid.)

If you want to use Usenet, get an account at a proper news server (news.eternal-september.org is easy and free), and a proper newsreader (Thunderbird is fine if you don't have any other preferences).

comp.arch.fpga is about programmable logic. Yes, some people here might make cpu designs, but it is not about software algorithms.

comp.lang.c++ is about C++ (I'm guessing that's what you are using here, but the snippets are not long enough to be sure). But that's more about the language than algorithms.

comp.programming is a general programming group. It is dominated by a couple of unpleasant characters who make huge numbers of nonsense posts - so you need to filter them out. The others in the group don't start threads very often, but there are plenty who are ready to contribute to interesting threads started by others. It is worth a try.

But it might well be that there are other places better suited to answering your questions these days. I like the Usenet format, and think it is far more efficient than web-based forums. But the fact is that there are orders of magnitude more people using things like Stack Exchange than there are on Usenet.

There are a wide variety of sorting algorithms, with different types of parallelism and different trade-offs, limitations and strengths. And this is combined with an even wider variety of misconceptions, misunderstandings, and a failure to distinguish theoretical calculations from practical reality.

A parallel sorting algorithm run on N processors cannot achieve greater than an N-fold speed-up over the serial version. Figures like O((log n)^2) are upper bounds when there is no limit to the number of processors. /Sometimes/ you have a sorting task where you can use massive arrays of simple elements with the aim of minimal latency or maximum throughput when sorting large numbers of small arrays - but that is going to be very rare.

In reality, sorting comes down to the speed of the data I/O - the memory, and the caches. Algorithms that are cache-friendly beat those that are not, even if they look a little slower in the theoretical calculations. And they are /always/ slower than O(n), even when you have data suitable for a bucket sort, at least when you have sizes big enough for the efficiency to be important.
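As a sketch of the bucket-sort special case mentioned above (a counting sort over a small key range; the range and the demo values are my own illustrative assumptions): even this nominally O(n) sort makes several full passes over the data, so it still costs more than the single pass that fills the array.

public class CountingSortSketch {
    // Sorts values known to lie in [0, range): one pass to build the
    // histogram, then a pass over the counts to write the output in order.
    static int[] countingSort(int[] elems, int range) {
        int[] counts = new int[range];
        for (int v : elems) {
            counts[v]++;
        }
        int[] sorted = new int[elems.length];
        int out = 0;
        for (int key = 0; key < range; key++) {
            for (int k = 0; k < counts[key]; k++) {
                sorted[out++] = key;
            }
        }
        return sorted;
    }

    public static void main(String[] args) {
        int[] demo = {3, 1, 4, 1, 5, 9, 2, 6};
        System.out.println(java.util.Arrays.toString(countingSort(demo, 10)));
    }
}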

Reply to
David Brown

Kevin Simonson: "So n+n+n = 3n instructions for each stage, and each instruction requires at least one clock cycle to fetch the instruction, at least one clock cycle to interpret the instruction, and at least one clock cycle to execute the instruction. So that's at least 9n clock cycles to fill the array and at least 9n clock cycles to empty it after calling {someAlgorithm()}."

A couple of things: your breakdown of the code timing is not accurate for any particular CPU or computer. These things vary a great deal with many types of optimization in software and hardware.

But more importantly, why do you care about this particular part of the process? Sorting algorithms are typically dependent on data size by some large factor such as O(x^2), although it's been a long time since I've looked and some might be better at O(x log(x)) or similar. Still, for any large data set that is going to dominate the operation time. Then there is the issue of the data being stored on mass storage for loading the memory array, which will again swamp the sorting time unless the data size is very large.
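For a rough sense of scale (my own back-of-the-envelope figures, not Rick's): with n = 100 million elements, log2(n) is about 27, so an O(n log n) comparison sort does roughly 27n comparisons, each of which costs several instructions plus the memory traffic, while filling and emptying the array are a single pass of about n operations each. The array handling is a small fraction of the total, and the gap only widens as n grows.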

This reminds me that many of these sorting algorithms were written before hard drives were invented and they were working from magnetic tape!

Kevin Simonson: "I've heard that some scientists have found ways to quickly move large blocks of data from one place on disk to another. Does that speed this process up, make it less than the 9n clock cycles I've postulated above?"

Disk and "quick" do not go together. The way to quickly move data on disk is to not move it but change the pointer to it.

Kevin Simonson: "Do organizations that regularly sort large amounts of data typically take 9n clock cycles to fill up the array, or is there a quicker way to do it that's in general use?"

For large amounts of data no one cares about 9n clock cycles since it will take many, many more to do the sort. At least that's what I recall. Like I said, it's been a long time.

BTW, this post was made via Google Groups even if it did make my fingers bleed.

--

Rick C. 

Reply to
gnuarm.deletethisbit
