The bullshit web

All true.

What C/C++ weenies often don't appreciate is how much of a pessimising language C/C++ is - because the compiler can't /statically/ prove that optimisations are possible.
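
To make that concrete, here's a minimal C sketch (an illustration of the point, not something from the Dynamo work). In plain C the compiler must assume the two pointers may alias, so it cannot prove the obvious hoisting optimisation is safe:

void scale(int *a, const int *b, int n)
{
    for (int i = 0; i < n; i++)
        a[i] += *b;     /* *b must be re-read every pass: a[i] might alias b */
}

/* With C99 'restrict' the programmer asserts there is no aliasing,
   and the compiler is free to hoist the load of *b out of the loop: */
void scale_restrict(int *restrict a, const int *restrict b, int n)
{
    for (int i = 0; i < n; i++)
        a[i] += *b;     /* provably loop-invariant now, so it can be hoisted */
}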

As an example of how bad C/C++ is, consider the unexpected findings of the Dynamo compiler from HP Labs.

1. Take optimised C running on a PA-RISC machine M: performance X.
2. Now /emulate/ M running on M, running that C code: performance significantly worse than X, of course.
3. Instrument the executing C code using the same techniques as in Java runtimes.
4. Optimise the binaries based on that new runtime knowledge.
5. Now /emulate/ M running the newly optimised binaries: performance is similar to X, give or take.

In other words, C/C++ pessimisation is equivalent to interpreting/emulating the hardware in software.

Alternatively, Java-like runtime optimisations can turn -O2 code into -O4 code - without the nasal demons that often inhabit that territory!

Reply to
Tom Gardner

Win 10 runs like a dog on a "budget" laptop ($500-800) that's otherwise fairly well specced (i5 or i7 mobile processor, 12 GB of RAM), aside from the cost-cutting 5400 RPM 1TB drive they often put in 'em. Whatever they're doing with the aging NTFS file system in that OS, spinning rust cripples it badly.

Linux distros like Ubuntu and Mint, and Win 8 don't seem to have the same struggles.

On my desktop, which also runs 10, I've strapped four 500 GB 7200 RPM drives into RAID 0, and that works pretty well. I don't really need redundancy there: my work files are small enough that a 64 GB memory stick and the cloud work fine as the extra two copies for a three-copy backup.

Reply to
bitrex

I might have a dog of a laptop, but it became a sleek greyhound after I fitted an SSD.

I wouldn't now touch anything running Windows using spinning media.

--
Mike Perkins 
Video Solutions Ltd 
Reply to
Mike Perkins

Don't forget all their fake news as well.

formatting link

--
This message may be freely reproduced without limit or charge only via  
the Usenet protocol. Reproduction in whole or part through other  
Reply to
Cursitor Doom

10 is a big pig of an OS built on a 25-year-old, antiquated file system; it's kind of remarkable it works as well as it does.

Xubuntu is very snappy even on a 5400 RPM drive with a small solid-state cache. I use that OS 90% of the time on my work laptop, so I've been putting off fitting an SSD in it. Good-quality 1TB SSDs still aren't particularly cheap.

Reply to
bitrex

Tricky on a laptop, but for a desktop I use an SSD for the OS and a large spinning drive for the data.

480GB SSDs are now affordable. We're almost there!
--
Mike Perkins 
Video Solutions Ltd 
Reply to
Mike Perkins

My work laptop has a 500GB SSD and a 1TB rotating drive. This one has a 500GB SSD and 200GB micro SD cards. Micro SD is a cheap way to go for things like documentation and stuff that's not used much. I even put 200GB SDs in our phones and tablets so I don't have to worry about running out of space for Netflix. ;-)
Reply to
krw

Well, for Windoze 10, if you don't like the default NTFS, there's always ReFS (resilient file system): "How to use Resilient File System (ReFS) on Windows 10"

This week, I'm running Linux Mint 19 with Mate desktop, mostly on old Core2Duo machines that were last licensed for XP, and where I didn't want to pay for a later Windoze OS.

After a few problems with bottom-of-the-line SSD drives, I've been using Samsung 850 EVO 120/250/500 GB drives. Zero outright failures and very few bad sectors over about 20 drives resold. Lately, I've switched to Samsung 860 EVO 250/500/1000 GB drives. About the same performance, but allegedly longer drive life. We shall see.

This might help with the price watching: Samsung 860 EVO 500GB, 1TB, and 2TB prices on Amazon.

The big spikes are there for an odd reason. When vendors go on vacation, run out of stock, or trash their credit, they don't want to make any sales. Instead of removing their listing, they simply jack up the price so high that no sane buyer would pay it. When their situation returns to normal and they have inventory, they drop the price back to less astronomical levels. That works fine for them, but it makes a mess for price-tracking web sites.

--
Jeff Liebermann     jeffl@cruzio.com 
150 Felker St #D    http://www.LearnByDestroying.com 
Reply to
Jeff Liebermann

I agree. Once you've run Win 7 and above on an SSD, you won't want to go back to spinning memory. My guess is 3x to 5x faster on most machines.

Where you run into problems is the SATA interface to the drive. All SSD drives with a SATA interface can do SATA III speeds (6 Gbits/sec), but few laptops can do the same. Most decent laptops will run at SATA II (3 Gbits/sec); older laptops are stuck at SATA I (1.5 Gbits/sec).

For example, one of the better old laptops is the IBM ThinkPad T61p. I have a customer who wanted to extend the life of an office full of these laptops by installing SSD drives. Performance was unimpressive because of the slow SATA I interface speed. Fortunately, I was later able to dramatically improve the speed to SATA II with a BIOS transplant. So, be careful with laptop SATA speeds and maximum memory when buying old used laptops.
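
(To put numbers on those interface speeds: SATA uses 8b/10b encoding, i.e. 10 line bits per data byte, so the payload ceiling is the line rate divided by ten. A quick sketch of the arithmetic, added for illustration:)

#include <stdio.h>

int main(void)
{
    /* SATA line rates in Gbit/s; 8b/10b coding means 10 line bits
       per data byte, so max payload MB/s = Gbit/s * 100 */
    const char  *gen[]  = { "SATA I", "SATA II", "SATA III" };
    const double rate[] = { 1.5, 3.0, 6.0 };

    for (int i = 0; i < 3; i++)
        printf("%-9s %.1f Gbit/s -> %3.0f MB/s max payload\n",
               gen[i], rate[i], rate[i] * 100.0);
    return 0;
}

(That prints 150, 300 and 600 MB/s. A good SATA SSD can sustain roughly 500 MB/s, so on a SATA I link it's throttled to under a third of its capability.)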

--
Jeff Liebermann     jeffl@cruzio.com 
150 Felker St #D    http://www.LearnByDestroying.com 
Reply to
Jeff Liebermann

John Larkin is also upset that they do warts-and-all reports on Donald Trump, rather than reporting his lies as if they were fact.

John doesn't like processing complicated content, especially when it doesn't line up with what he wants to hear.

--
Bill Sloman, Sydney
Reply to
bill.sloman

Indeed. If the compiler must assume you meant what you wrote -- that the output must follow the semantics as given, not just the overall functionality -- then it has no choice but to leave in all the instructions and framing to allow that generality.

Heh, interesting way to put it. :)

You can also do something similar with link-time optimization, which I think is something like, optimise the final binary, in and of itself, functionally, without regard to semantics. In other words, don't expect to be able to link its pieces into other binaries, as a library.

At least, that's what it looks like on AVR (GCC 8.1.0), given my limited perspective from what little programming I do. Without link-time optimisation, the output (even with -O4) looks really inefficient, making obviously redundant register writes, using pointers where immediates would be more efficient, and the like. Or with -Os it's still inefficient, having saved little in size (the run time and file size probably only vary +/- 10 or 20% between -O2, -O3, -O4 and -Os). I'm not sure, but I think link-time optimisation plays with the ABI as well.
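
(A tiny two-file sketch of what link-time optimisation buys - an illustration, not taken from the AVR output discussed above. Compiled separately, the compiler cannot inline add() into main(); with -flto the optimiser sees both translation units at link time:)

/* lib.c */
int add(int a, int b)
{
    return a + b;
}

/* main.c */
int add(int a, int b);          /* normally declared in a shared header */

int main(void)
{
    return add(2, 3);           /* with -flto this can fold to 'return 5' */
}

/* without LTO:  avr-gcc -Os -c lib.c main.c && avr-gcc lib.o main.o
   with LTO:     avr-gcc -Os -flto lib.c main.c                      */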

Tim

--
Seven Transistor Labs, LLC 
Electrical Engineering Consultation and Design 
Reply to
Tim Williams

And I have my doubts about the majority of developers' ability to get all the declaration "decorations" and compiler flags right. Doubly so if they are using several binary libraries.

The key point in the Java HotSpot optimisation is that it instruments what the code/data is *actually doing* on that machine, and optimises the shit out of the hot paths. And continually monitors that and readjusts.

The code runs more slowly at the beginning, and significantly faster (I've measured 3x) after a while. Very useful, particularly for server-side code.
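
(The closest offline analogue in C land is GCC's profile-guided optimisation - a one-shot snapshot rather than HotSpot's continuous re-measurement and readjustment. A sketch, with a hypothetical file name:)

/* hot.c - anything with data-dependent branches benefits */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    long n = (argc > 1) ? atol(argv[1]) : 100000000L;
    long acc = 0;
    for (long i = 0; i < n; i++)
        acc += (i % 3 == 0) ? i : -i;   /* the branch the profile will count */
    printf("%ld\n", acc);
    return 0;
}

/* gcc -O2 -fprofile-generate hot.c -o hot
   ./hot                                  # run a representative workload
   gcc -O2 -fprofile-use hot.c -o hot     # rebuild using the recorded counts */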

Reply to
Tom Gardner

Tom Gardner opined:

The more I read from you and Tim, the more I get the impression you are no programmer.

Tim: THIS is a supercomputer:

formatting link

Tom: Tim, Tom, Tim, Tom - I see a rhythm. Anyway, I had to look up 'pessimisation'. This discussion says it all, and hopefully answers your, what's it, 'endeavor'???

formatting link

Sigh. Show us some code you wrote.

Good C code is very hard to optimise further in a way that is worth the time. Bad code in any language is better rewritten by somebody who understands the hardware and where the bottlenecks are.

Better to do coding than to have these silly discussions. And publish it!

Reply to
<698839253X6D445TD

Was just testing whether you were paying attention ;-) But DID you google for a web browser written in Java, and DID you test the speed?

Cannot decode that sentence, try again.

Cannot decode that sentence, try again.

Old saying: if you have nothing to say, do not say it.

Show us some code you wrote. Oh, I know, it is top secret.

Reply to
<698839253X6D445TD

This discussion reminds me of "The Story of Mel, a Real Programmer", an amusing account of someone who refused to adopt these new-fangled assemblers and compilers.

Jeroen Belleman

Reply to
Jeroen Belleman

Jeroen Belleman wrote:

1) Right. I used binary with DIP switches to program EPROMs; you very quickly come to care about efficient code. And I still write loads of embedded asm. And much of it is open source, you can look at it, improve it??? HAHAHA.

2) Looking for the God particle, yet another monster accelerator is being constructed. It does create jobs, and creates an illusion of scientific progress. Industry benefits. The only sane useful thing that ever came out of CERN was html. ITER will never break even, same story. ISS going nowhere, same story.

3) My stuff works, and given what runs here on a non-super-computah, that is really something. And I still have spare processor cycles.

So what shall I write now to get load factor 100?

Am I serious?

You do it, and publish it.

Reply to
<698839253X6D445TD

I propose that a big fast computer in a data centre be used to render the webpage into pixels, and then deliver that to my device as a .png image that gets loaded into my web browser. Then, if I click on something, the location of my click gets sent back to the server, which can then render a .png image of whatever page that link went to.

That way, most of the memory wastage, network bandwidth consumption, CPU power wastage, security vulnerabilities, and much of the snooping/tracking and other hassles get pushed onto the server. Very few webpages these days are well enough designed that they load less data over the network than would be present in a .png bitmap of the resulting web browser window.
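
(On the wire it could be as simple as this - a rough sketch, with all the struct names and fields invented for illustration, not an existing protocol:)

#include <stdint.h>

/* client -> server: the only thing the thin client ever sends */
struct click_event {
    uint32_t page_id;       /* which rendered page was clicked */
    uint16_t x, y;          /* pixel coordinates of the click */
};

/* server -> client: the freshly rendered page, as a PNG */
struct rendered_page {
    uint32_t page_id;       /* id to quote in the next click_event */
    uint32_t png_len;       /* number of PNG bytes that follow */
    /* ...followed by png_len bytes of image data */
};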

I wonder how much people would pay for the privilege of using such a web browser on a big server - in my case it would be more than zero. Of course if this ever took off, then eventually the website designers would get so lazy and incompetent that nothing less than a big server could possibly run the web browser that renders their pages.

Reply to
Chris Jones

formatting link

Reply to
Lasse Langwadt Christensen

That idea will work; it's already implemented. But they want to put cookies on your browser, know what site you came from, where you are, who you are, how much you can spend - basically make a model of you - and then sell the model to advertisers, foreign agents, anybody who pays. Like Facebook does.

Reply to
<698839253X6D445TD

Sure.

THIS is a supercomputer, too:

formatting link

People walk around with more memory than it had and comparable computing power, and significantly more is available in desktop GPUs.

How is either one different, other than scale, from what people have in their pockets? Unusual topologies like data flow computers notwithstanding. In any case, they're usually built with commodity CPUs, so when scaled down, they really aren't much different.

Anyway, the challenge of programming those is largely a matter of logistics: getting the data where it needs to be, when it needs to be there. CPU cycles are still basically free.

If logistics is your benchmark for programming ability, no, I haven't programmed one of those. I don't particularly care to, either, it's not my interest. If it is of yours, good for you.

You said you've contributed to open projects, which ones?

It must not be very "good" if it's hard to optimise...?

Consider: if your benchmark is portability, then C code that is very tightly coupled to one platform (and therefore hard to optimize further through manual or automatic means) probably stinks on another.

Optimization is all about knowing your target.

If you aren't writing for any particular goal, just make it general, simple, easy to read. If you're optimizing for execution time, do that. Or code size, or whatever.

You haven't said what benchmarks you're interested in, so I can't comment any more specifically than that.

On the upside, those high-level languages with real-time profiling don't much care which platform they run on, because they can't make any such assumption -- it's rather more like "f*ck it, we're doing it live!"

Tim

--
Seven Transistor Labs, LLC 
Electrical Engineering Consultation and Design 
Reply to
Tim Williams
