Most of the lurid tech headlines are not really news; they're more like clickbait. Most of the following are good examples.
Big ISPs are worried that tracking their customers' online activities and violating their privacy might impact their ability to sell that information to advertisers and governments.
Yet another report on tracking users' online activities. Including Russia in this revelation provides the necessary culprit. If you dig deeper, many of the security holes, bugs, and exploits are theoretical and useful only in a very limited number of circumstances.
Consumer expert states the obvious. Of course, there's a government-funded research program proposed to investigate the possibility of aversion therapy and a non-surgical cure.
Nostalgia, a sure sign that the road ahead does not look very good or profitable.
The battle of the benchmarks. Usually, these announcements are based on contrived performance tests that highlight the performance features of whichever product paid for the benchmarking. Unequal test conditions are also possible.
The current growth market in computing is gaming. The hardware is expensive, the horsepower required is huge, the storage and bandwidth requirements are gigantic, and the realism (i.e. 4K) is gorgeous. Microsoft doesn't want to be left out of this growth market.
"MICROSOFT FLIGHT SIMULATOR - PREVIEW" (33:00)
Old news. Apple stopped allowing MAC address changes in 2018. However, there were multiple work-arounds. I don't follow Apple details, but I read somewhere that Apple may have reinstated the "feature" in current macOS releases.
Sigh. This is a headline? I guess the video game site needed to manufacture something of importance.
--
Jeff Liebermann jeffl@cruzio.com
150 Felker St #D http://www.LearnByDestroying.com
Santa Cruz CA 95060 http://802.11junk.com
Skype: JeffLiebermann AE6KS 831-336-2558
For less lurid and, arguably, more important headlines, may I point people to comp.risks and its archive:
formatting link
That is low volume and high signal-to-noise ratio, since it has been curated by Peter G. Neumann at SRI (
formatting link
) since *1985*, thus:
ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks) Peter G. Neumann, moderator, chmn ACM Committee on Computers and Public Policy
There's about one issue per week, and many of the contributors have a *very* solid theoretical and practical background.
Somewhat guilty as charged. I know very little about TPC benchmarks; I also managed to miss the TPC in the headline. At first glance, the TPC benchmark comparison seems valid for a database comparison, but I'm beginning to have my doubts and suspicions because of the 8-year difference in hardware and testing. See below.
Here's the Slashdot article: I have not read the reader comments, yet.
Test results: Oracle set the record in 2011. Presumably, there's been some progress in computing architecture, hardware, and software in the last 8 years. Also in cost. 6.25 CNY (Chinese yuan) per tpmC works out to about $0.87 USD/tpmC, or about 13% cheaper to buy than Oracle running on a SPARC SuperCluster.
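For what it's worth, the arithmetic roughly checks out. A quick sketch; the exchange rate is my assumption, and the $1.01/tpmC Oracle figure is my recollection of the published 2011 SPARC SuperCluster result, so check tpc.org before quoting it:

```python
# Price/performance comparison sketch. The 6.25 CNY/tpmC figure is from the
# article; the exchange rate and the Oracle price/tpmC are assumptions.
cny_per_tpmc = 6.25
cny_per_usd = 7.2                 # assumed late-2019 exchange rate
usd_per_tpmc = cny_per_tpmc / cny_per_usd
print(f"${usd_per_tpmc:.2f} USD/tpmC")      # ~$0.87

oracle_usd_per_tpmc = 1.01        # my recollection of the 2011 result
savings = 1 - usd_per_tpmc / oracle_usd_per_tpmc
print(f"~{savings:.0%} cheaper per tpmC")   # ~13-14%
```

Close enough to the "about 13% cheaper" claim, within rounding and exchange-rate wobble.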
However, note the comment near the top of the page: Results displayed with a grey background are Historical Results, which might not be up to date with regards to pricing and/or availability of HW or SW.
It's interesting that only 5 systems from 2 manufacturers are considered current for TPC-C tests, and everything else is marked as "Historical Results". Sorted by "System Availability":
Is TPC the current gold standard for benchmarking databases, or is there something more current and up to date?
--
Jeff Liebermann jeffl@cruzio.com
150 Felker St #D http://www.LearnByDestroying.com
Santa Cruz CA 95060 http://802.11junk.com
Skype: JeffLiebermann AE6KS 831-336-2558
It's still the gold standard, but - precisely because it has been so successful - it isn't often a key criterion for selection. The use of a widely-accepted standard meant that vendors either conformed to the general ball-park of what is possible on given hardware, or they went out of business.
[... or they're MySQL, which was written and is used by people who don't know better or care for other things (the original authors didn't know what a "transaction" is).]
Anyhow, TPC-C is much better than the earlier tightly-focussed TPC-B benchmark, which basically did a bunch of credit/debit transactions.
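For flavour, a TPC-B-style debit/credit transaction is roughly this shape. A sketch against SQLite with an illustrative schema, not the official spec:

```python
import sqlite3

# Sketch of a TPC-B-style debit/credit transaction (illustrative table and
# column names): adjust an account balance, the teller and branch totals,
# and append a history row, all in one atomic transaction.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE accounts (aid INTEGER PRIMARY KEY, bid INTEGER, balance INTEGER);
CREATE TABLE tellers  (tid INTEGER PRIMARY KEY, bid INTEGER, balance INTEGER);
CREATE TABLE branches (bid INTEGER PRIMARY KEY, balance INTEGER);
CREATE TABLE history  (tid INTEGER, bid INTEGER, aid INTEGER, delta INTEGER);
INSERT INTO branches VALUES (1, 0);
INSERT INTO tellers  VALUES (1, 1, 0);
INSERT INTO accounts VALUES (1, 1, 1000);
""")

def debit_credit(aid, tid, bid, delta):
    with con:  # commits on success, rolls back on exception
        con.execute("UPDATE accounts SET balance = balance + ? WHERE aid = ?", (delta, aid))
        con.execute("UPDATE tellers  SET balance = balance + ? WHERE tid = ?", (delta, tid))
        con.execute("UPDATE branches SET balance = balance + ? WHERE bid = ?", (delta, bid))
        con.execute("INSERT INTO history VALUES (?, ?, ?, ?)", (tid, bid, aid, delta))
        return con.execute("SELECT balance FROM accounts WHERE aid = ?", (aid,)).fetchone()[0]

print(debit_credit(1, 1, 1, -100))  # 900
```

The whole point of the benchmark was hammering that one transaction shape as fast as possible, which is exactly what made it so narrow.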
A similar thing happened when Ken McDonell's MUSBUS group(*) benchmarked the top relational database contenders in the 1980s. They instrumented the Unix kernel so they could see every I/O, lock, context switch, etc. that the DBMS performed on a given load. The results varied widely: one system would use 2000 locks for a load that another did with 4; one used a hundred I/Os in random order for what could be done in ten. The group fed the results back to the vendors, and within 2 years all the surviving top five products had implemented cost-based optimisers to replace their heuristic ones, and were within about 15% of each other on all metrics. And that's even before Gray and Reuter's excellent how-to book was published in the early 1990s - everything since has used that material.
[Except MySQL's authors didn't know anything about this, and though some kind folk contributed a proper transactional storage manager, to this day they don't have a decent optimiser. So much easier to just start writing a million lines of code instead of, you know, read a book].