Spectre / Meltdown

Let me see ... I can write an executable program that I can put into a webpage, and when a user visits the webpage my program will run natively on their machine with all the rights and privileges of the current user ... how could that go wrong?

It was complete insanity on Microsoft's part from the word "go", and should never have been released. It was, however, quite typical of the way that MS thought only about enabling technologies and never about moderating or policing those technologies.

If it's not safe it's not brilliant. Quite the opposite.

.. and MS didn't "beat" Netscape, not really. Firefox is still with us and it's Chrome that's growing most in market share.

--
Cheers, 
 Daniel.
Reply to
Daniel James

... and force-feeding the most dangerous intrusions :(

--
W J G
Reply to
Folderol

"Daniel James" wrote

| Let me see ... I can write an executable program that I can put into a
| webpage, and when a user visits the webpage my program will run
| natively on their machine with all the rights and privileges of the
| current user ... how could that go wrong?

So you disable all javascript, I presume? Or do you just log on as a lackey user who has no access to the Internet?

| > For years it was a brilliant design. It still is. It's just not
| > safe.
|
| If it's not safe it's not brilliant. Quite the opposite.

No, not if you understand COM. It allows for script and other non-compiled code to call compiled libraries, which are registered on the system. It works very well. Before .Net, Java, Flash, etc there was COM providing relatively easy and safe wrapper components.

I use it a lot in HTAs and sometimes in compiled software. It's a clever design and allows for standardized function calls. No functions with 10 parameters where 4 are callbacks. Mostly it's a simple dispatch object model.
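
To give a flavor of it, here's a minimal sketch (in C#, just for illustration; my own example, not anything from MS docs) of the same late-bound dispatch lookup a script engine does. It assumes Windows with the standard Scripting.FileSystemObject component registered:

    // Late-bound COM dispatch: look the component up by its ProgID in
    // the registry, the way a script engine would, then call methods on
    // its dispatch interface. Windows only.
    using System;

    class ComDispatchDemo
    {
        static void Main()
        {
            // Returns null if the ProgID isn't registered on this system.
            Type t = Type.GetTypeFromProgID("Scripting.FileSystemObject");
            dynamic fso = Activator.CreateInstance(t);

            // Plain dispatch calls on the object model; no headers,
            // no 10-parameter functions with callbacks.
            Console.WriteLine(fso.GetTempName());
            Console.WriteLine(fso.FolderExists(@"C:\Windows"));
        }
    }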

The only problem was that the Internet became unsafe. When they came out with ActiveX in webpages that security issue was not foreseen. But COM/ActiveX is still a brilliant, flexible design today, as long as it's used offline.

| .. and MS didn't "beat" Netscape, not really. Firefox is still with us
| and it's Chrome that's growing most in market share.

I'm going to give you the benefit of the doubt and assume you've been hanging around the barbecue, drinking, for most of this Labor Day Saturday.

MS built IE in with Active Desktop in '98. For a number of years, Macs were all but gone, IE had over 90% share, Netscape was maintained by AOL for a while, if I remember correctly, then was released as an OSS code base. It was a while before Firefox got off the ground. At that time there was no Chrome. You may not remember it, but around 2000 there was pretty much just IE on Windows.

Today, IE/Edge are pretty much kaput, even though MS is trying hard to force Chrome/Edge. But Netscape is long, long gone.

ActiveX, 2 scripting options, and catering to corporate sysadmins, as well as building IE into Windows, made Netscape an impossible proposition. Actually, though, I switched to Netscape in 2000 and never went back. IE5 was moving like molasses. I couldn't figure out why. That was always the one big problem with IE: Tying it to system libraries made it unstable and unpredictable.

Reply to
Mayayana

Yep.

Indeed because the first thing the OS group did with the Netscape source code was replace the rendering engine, then they replaced the JavaScript subsystem by which time there was pretty much nothing left of Netscape's code.

If you were on Windows. Otherwise you wound up hitting the problem that web sites were starting to get built with no checking other than being viewed with IE on Windows and tweaked until the advertising manager was happy.

Indeed, however Firefox and derivatives are still alive and kicking.

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:\>WIN                                     | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

That surprises me; my memory from circa 2003 was that C# numerical stuff ran at about the same speed as C++. My memory is poor on exact details, but I would probably have categorized 80% as "the same speed". I would probably have tested using some type of numerical integration routine.
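
Something along these lines is what I have in mind; a minimal sketch (composite trapezoid rule, timed with Stopwatch), not the actual 2003 test:

    // Sketch of a numerical integration micro-benchmark in C#:
    // composite trapezoid rule for sin(x) over [0, pi] (exact answer 2),
    // timed with Stopwatch.
    using System;
    using System.Diagnostics;

    class IntegrationBench
    {
        static double Trapezoid(Func<double, double> f, double a, double b, int n)
        {
            double h = (b - a) / n;
            double sum = 0.5 * (f(a) + f(b));
            for (int i = 1; i < n; i++)
                sum += f(a + i * h);
            return sum * h;
        }

        static void Main()
        {
            var sw = Stopwatch.StartNew();
            double result = Trapezoid(Math.Sin, 0.0, Math.PI, 50000000);
            sw.Stop();
            Console.WriteLine($"{result:F6} in {sw.ElapsedMilliseconds} ms");
        }
    }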

I don't know how you tested, but I do remember there was a performance-penalty gotcha when switching between managed and unmanaged code.

I will try to do a test of .NET Core in the near future, particularly on the rPi.

Reply to
Pancho

Just managed code.

We did have a Mono project that had a huge problem with garbage collection (with either of the GCs) when mixing managed and unmanaged code, as unmanaged data got stuck in the heap, causing it to fragment and continually grow. It ended up being slower and more prone to falling over than the Python prototype. RK will remember that one.

---druck

Reply to
druck

It's really hard to concoct a fair test. JIT-compiled code will always suffer a small speed penalty for the time it takes to do the JIT compilation, but may be faster than compiled C code thereafter because the JIT compiler is able to make some whole-program optimizations and is able to target the actual processor in use rather than generating generic x86 or AMD64 (or whatever) code.

The longer a piece of JIT-compiled code is run, the less significant the cost of compilation becomes.

Native code compilation systems are getting smarter -- it is possible to perform some whole-program optimization at the link stage, for example. The best we can say is that JITted code will always have some penalty for the JIT stage, but may also gain some advantage from being able to perform more specific optimizations. I think you would have to be running something very time-critical for the difference to matter.
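
A crude way to see the warm-up effect, as a minimal sketch rather than a proper benchmark (BenchmarkDotNet or similar would do this rigorously), is to time the same routine cold and then warm:

    // The first call pays the JIT cost; the second runs the already
    // compiled native code, so only the steady-state cost remains.
    using System;
    using System.Diagnostics;

    class JitWarmup
    {
        static double Sum(int n)
        {
            double s = 0.0;
            for (int i = 1; i <= n; i++)
                s += 1.0 / i;
            return s;
        }

        static void Main()
        {
            var sw = Stopwatch.StartNew();
            Sum(10000000);
            Console.WriteLine($"cold (includes JIT): {sw.ElapsedMilliseconds} ms");

            sw.Restart();
            Sum(10000000);
            Console.WriteLine($"warm:                {sw.ElapsedMilliseconds} ms");
        }
    }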

That will always be the case, as long as there needs to be a switch between the managed and unmanaged runtime environments.
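
For illustration, here's a minimal sketch of one such transition via P/Invoke. GetTickCount64 is a real kernel32 export; the point is that every call like this crosses the managed/unmanaged boundary:

    // Each P/Invoke call marshals its arguments and switches from the
    // managed runtime to native code and back; that round trip is the
    // per-call cost under discussion. Windows only.
    using System;
    using System.Runtime.InteropServices;

    class InteropDemo
    {
        [DllImport("kernel32.dll")]
        static extern ulong GetTickCount64();

        static void Main()
        {
            // Chatty interop (one boundary crossing per call) is where the
            // penalty adds up; batching work per call amortizes it.
            Console.WriteLine($"Uptime: {GetTickCount64()} ms");
        }
    }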

--
Cheers, 
 Daniel.
Reply to
Daniel James

Javascript and ActiveX are completely different propositions. Javascript programs are executed within an environment defined by the browser, and are sandboxed so that they do not have access to the native OS (bugs notwithstanding).

ActiveX is a native-code program running without any isolation from the underlying OS, and it can in principle do *anything*.

So, no, I don't disable all javascript, though I do use browser plug-ins to block most of it (and, yes, I never run anything online as root -- that's just crazy).

Relatively easy, but not safe. The problem isn't how well it works, it's how little one can control it.

COM leaves too big a footprint on the system, for my taste. It's too tied in to the registry, and its use makes it hard to develop truly portable applications.

We don't have Labor Day, here.

I don't remember clearly ... but in 2000 I was probably using the Mozilla suite on Windows. I was already dabbling with Linux then, but I was using Windows as my main OS. I have an image (from 2005) of the Windows 2000 PC I was using then set up as a VM on this machine, and I can see that Seamonkey is the default browser on that. That's a few years after your arbitrary choice of 2000, and Mozilla Suite had been renamed in the meantime.

ActiveX had very little to do with it! MSIE won the battle partly because of MS Office's integration with IE, which was important in the corporate environment and did depend on ActiveX, but mostly because it was supplied with Windows on every home PC, and most users looked no further.

Anyway, MS may have won the battle, but it lost the war.

--
Cheers, 
 Daniel.
Reply to
Daniel James

I'd prefer not to l-)

--
https://www.greenend.org.uk/rjk/
Reply to
Richard Kettlewell

"bugs notwithstanding" hides rather a lot of detail - isolation turns out to be harder in practice than previously envisaged, with Spectre in particular being a good example of the challenges.

Secondly the legitimate API surface available to JavaScript keeps on growing. It still can't do everything, but the gap keeps shrinking.

Finally as more functionality moves to web applications, the less the distinction matters. For example the fact that JavaScript can't access your local files isn't very relevant if the stuff you care about is in Google Docs.

--
https://www.greenend.org.uk/rjk/
Reply to
Richard Kettlewell

"Pancho" wrote

| That surprises me, my memory from circa 2003 was that C# numerical stuff
| ran at about the same speed as C++.

Why would you only care about that, though? What if you have a C# program that's just doing something like FTP? It takes seconds to get off the ground, loads hundreds of MB of slop, then has to call into the framework to access the system DLLs. (It can use native code, but that's another issue.) All that to make a couple of calls to a server and download a file, which will be done through a bulky series of wrappers.

That's a real world scenario. If you're writing software that does something like sharpening routines on large images then the math speed will be critical. But otherwise it's like saying your Ford Focus rolls downhill as fast as a Ferrari. Sure. But how long did it take to get up the hill?

Reply to
Mayayana

"Daniel James" wrote

| It's really hard to concoct a fair test. JIT-compiled code will always
| suffer a small speed penalty for the time it takes to do the JIT
| compilation, but may be faster than compiled C code thereafter because
| the JIT compiler is able to make some whole-program optimizations and
| is able to target the actual processor in use rather than generating
| generic x86 or AMD64 (or whatever) code.

That's optimistic. But it still doesn't account for the fact that in most scenarios it's also hobbled by wrappers. If you want the convenience, RAD, and safety of .Net then you're not using direct system calls. Every object reference is going to cost you. You're calling to the framework, which then has to call the system file. .Net is not Win32 API. It's an additional layer in most cases.

Reply to
Mayayana

"Daniel James" wrote

| Javascript and ActiveX are completely different propositions.

Not in the browser. Both are executable code. Both are risk factors. Nearly every online attack requires javascript. Many take advantage of networking vulnerabilities, like remote desktop, DCOM, etc. But all of those require javascript. The problem is not the tool. It's the fact that you're running executable code in the browser.

ActiveX is also sandboxed. It's only marked safe for scripting if it does no system access. But things happen. For instance, one time I think there were forged certs from Microsoft that were letting malicious ActiveX load. It's the same with javascript. For example, jquery has had problems in the past. Is the current version completely safe? Sure, probably. :)

People thinking like you are exactly why we have such a problem. You'd like to think the problem was all caused by something that we've eliminated. So now we can shop, bank, and do other things online with abandon, so long as we have the latest browser. It doesn't work that way. Most online attacks are now 0-days and typically bypass user restrictions. Your tax dollars fund the NSA to find the very best possible hacks. :) In addition, there are increasingly clever attacks server-side, stealing your personal info from that site you shopped at.

| ActiveX is a native-code program running without any isolation from the
| underlying OS, and it can in principle do *anything*.

No. See above. There are restriction settings and certificates. I'm not saying ActiveX was safe. I'm just saying it's as safe and as idiotic as allowing javascript in the browser. The only difference is that you can't buy your plane tickets or hard disks online without javascript, so you choose to take the ostrich approach and believe that it's "sandboxed".

| I never run anything online as root -- that's just crazy).

Good. Then you're safe from all those attacks that don't get around user restrictions. The ones that no longer exist.

| > Before .Net, Java, Flash, etc there was COM
| > providing relatively easy and safe wrapper components.
|
| Relatively easy, but not safe. The problem isn't how well it works,
| it's how little one can control it.

I was referring to the functionality. Java and .Net serve to separate the programmer from direct system access, for convenience and safety. COM is similar. None of them are safe for online use, but COM is integral to Windows and provides easy object-model wrappers for many things. I've written all sorts of utilities in IE, as HTAs. An HTA is just a webpage with no security that can only run locally. Using COM functionality and an IE GUI it's amazing what one can do. I've even written an image editor and scanner interface. Whatever MS provides can be accessed.

| > But COM/ActiveX is still a brilliant, flexible design today, as
| > long as it's used offline.
|
| COM leaves too big a footprint on the system, for my taste. It's too
| tied in to the registry, and its use makes it hard to develop truly
| portable applications.

It's not tied in except to look up the typelib and DLL location when it loads. Yes, it's not portable. Nothing really is. And if you mostly just use one system, that doesn't matter. I don't prefer COM for compiled code if I can use system calls. Just as I wouldn't use Java or .Net. COM is not nearly so bloated, but it's still a wrapper that will slow things down and create dependencies. Nevertheless, in certain scenarios, such as locally run scripted utilities, it's wonderful.

| > I'm going to give you the benefit of the doubt and
| > assume you've been hanging around the barbecue, drinking,
| > for most of this Labor Day Saturday.
|
| We don't have Labor Day, here.

Ah. I forget. Ireland? Labor Day here is a holiday when Americans grill beef and "hot dogs", get drunk, then crash our boats.

| I don't remember clearly ... but in 2000 I was probably using the
| Mozilla suite on Windows. I was already dabbling with Linux then, but I
| was using Windows as my main OS. I have an image (from 2005) of the
| Windows 2000 PC I was using then set up as a VM on this machine, and I
| can see that Seamonkey is the default browser on that. That's a few
| years after your arbitrary choice of 2000, and Mozilla Suite had been
| renamed in the meantime.

I think you must be mistaken. According to wikipedia, the first Seamonkey release was 2006. I also first tried Linux around 99/2000. Red Hat 4. Mandrake 4. Also BeOS. Interesting stuff. But then I got them all set up and realized there was no software. And BeOS was only black and white display. That was enough of that. I tried Linux a couple of times again. Still no software. Still impossible to use without console windows. Still no easy-to-use firewall that could block outgoing. Someday, maybe. Those don't seem like unreasonable demands to me.

| Anyway, MS may have won the battle, but it lost the war.

That's what we Yanks refer to as "sour grapes". :)

Reply to
Mayayana

"Richard Kettlewell" wrote

| Secondly the legitimate API surface available to JavaScript keeps on
| growing. It still can't do everything, but the gap keeps shrinking.
|
| Finally as more functionality moves to web applications, the less the
| distinction matters. For example the fact that JavaScript can't access
| your local files isn't very relevant if the stuff you care about is in
| Google Docs.

Yes. And now there's WebAssembly. It's clear that Google and others hope to turn everyone's computer into a kiosk interface to access web services. To a great extent, cellphones are already that.

Reply to
Mayayana

OK, that could explain the different experience. I wouldn't have been doing intensive memory allocation. For numerical analysis stuff I tend to allocate memory in blocks at the start and avoid small heap objects. You can use a pool if small heap objects are absolutely necessary.
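
By a pool I mean something like this minimal sketch, using ArrayPool<T> from System.Buffers (built into .NET Core; a NuGet package on .NET Framework):

    // Rent blocks from a shared pool instead of allocating a fresh
    // array on each pass, so the GC never sees thousands of short-lived
    // objects.
    using System;
    using System.Buffers;

    class PoolDemo
    {
        static void Main()
        {
            ArrayPool<double> pool = ArrayPool<double>.Shared;

            for (int iter = 0; iter < 1000; iter++)
            {
                double[] buf = pool.Rent(4096);  // may return a larger array
                try
                {
                    for (int i = 0; i < 4096; i++)
                        buf[i] = Math.Sin(i);
                }
                finally
                {
                    pool.Return(buf);  // hand the block back, no GC pressure
                }
            }
            Console.WriteLine("done");
        }
    }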

Reply to
Pancho

It is what I was paid to do.

You pick a language to suit the task. First you test to see if a language is suitable for a type of task, which is why I was benchmarking C#.

I use bash scripts as well as C#.

Reply to
Pancho

Oh if only more people did this instead of using the hammer they know and making it serve as a micrometer.

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:\>WIN                                     | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

Wise words.

If you want quick development time and speed of execution isn't crucial, then Python is ideal, as it has pre-existing modules for just about everything you want to do and is very easy to string things together.

If you need a bit more performance, but still want to benefit from a large number of built-in collection types and develop fairly clean, understandable code, C# is a good choice.

If performance is vital, then you have to put up with the clunkiness of the STL and use C++, although C++11 onwards has made a few things easier. Just ignore the wackier stuff (that goes for C# after V4 too).

---druck

Reply to
druck

It's coming full circle, back to the model of the '60s and '70s where users used terminals to access centralized systems.

--
/~\  Charlie Gibbs                  |  Microsoft is a dictatorship. 
\ /        |  Apple is a cult. 
 X   I'm really at ac.dekanfrus     |  Linux is anarchy. 
/ \  if you read it the right way.  |  Pick your poison.
Reply to
Charlie Gibbs

On 8 Sep 2020 21:10:38 GMT, Charlie Gibbs declaimed the following:

It's the third coming... The second was the era of the X-server terminal which handled display functions for graphical client programs running on the mainframe(s).

--
	Wulfraed                 Dennis Lee Bieber         AF6VN 
	wlfraed@ix.netcom.com    http://wlfraed.microdiversity.freeddns.org/
Reply to
Dennis Lee Bieber
