ransomware

Getting ransomed sounds awful to me. Maybe somebody can answer these questions:

If we have a bunch of servers on an internal network, presumably they could all be encrypted by bad guys.

If an infected PC does not have write access to a server, can it encrypt files on it? I think not.

If a PC is on the network but is not configured to share its files, can its files be encrypted from the infected PC? Again, I think not. If not, we could have one PC that does frequent backups by reading the files on the servers, but is itself immune to being ransomed.

--
John Larkin         Highland Technology, Inc 
picosecond timing   precision measurement  
Reply to
John Larkin

Assuming that the ransomware isn't good enough to break into the write-protected PCs, yes, that'll probably work. If they're smart they'll install the ransomware, have it encrypt your files while transparently decrypting them "under the hood" for a month or so (so you don't notice anything wrong), and then hit you with the ransom.

So something that both backs up your network often _and_ checks (somehow) that the files haven't been encrypted would probably help.
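As a minimal sketch of that "check" part (Python; the paths and threshold are made up for illustration): encrypted data is statistically near-random, so a backup pass could flag known-compressible files whose byte entropy is suspiciously high.

# Crude heuristic: encrypted files have near-uniform byte statistics,
# so very high Shannon entropy on a known-compressible file is suspicious.
# Threshold and scan root below are illustrative, not tuned values.
import math
import os

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; 8.0 means perfectly uniform (random-looking)."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def looks_encrypted(path: str, threshold: float = 7.9) -> bool:
    # Sample the first 64 KiB -- enough to judge, cheap to read.
    with open(path, "rb") as f:
        return shannon_entropy(f.read(65536)) > threshold

def scan(root: str) -> list[str]:
    suspicious = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            # Already-compressed formats are high-entropy by nature; skip them.
            if name.lower().endswith((".zip", ".jpg", ".png", ".mp4", ".gz", ".7z")):
                continue
            p = os.path.join(dirpath, name)
            try:
                if looks_encrypted(p):
                    suspicious.append(p)
            except OSError:
                pass
    return suspicious

if __name__ == "__main__":
    hits = scan("/mnt/backup_staging")   # hypothetical staging mount
    print(f"{len(hits)} suspicious files")

It's only a tripwire: compressed formats are high-entropy by nature (hence the skip list), and ransomware that preserves familiar file headers would slip past it.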

Googling "ransomware counter-measures" will probably get more profound, cost-effective, and just plain effective ideas than anything we could cook up between us.

--
Tim Wescott 
Wescott Design Services 
Reply to
Tim Wescott

You speak of "write access" as if the only thing a "server" can do is export file/printer shares.

A server can provide many services -- without exporting files/printers. Malware can target those services to exploit bugs in them and gain a beachhead on the server. Once there, it can do whatever the leveraged exploit allows.

There is also the potential attack vector of files "carried" onto the machine and "examined" while there. E.g., copy an infected *PDF* onto the machine and "read" it (e.g., using a legitimate copy of Adobe) *on* the machine; the PDF deploys its payload and has worked around any defenses you may have had in place. I.e., even if the machine is NOT connected to the network, you've hand-carried the payload just as the network would have.

See above. Do you know which ports are open on all of your PC's? (servers and non-servers, alike) Do any of them ever have files "introduced" to them by other means (e.g., sneakernet, email attachments, etc.) and "examined" thereon?

To "read" the files, the servers need to export those shares, and the "client" has to be able to see them and the machine (NetBIOS services). Any means by which they can communicate is a POTENTIAL attack vector. And, the more exposed to the outside world they are, the more vulnerable they are to (e.g.) zero-day exploits -- things your AV hasn't yet figured out how to protect against.

I have two archival layers: my live repository and my archive.

The archive is my "vault". All it does is know how to read and write files. No "applications" installed on it -- so, no bugs in those applications that can be exploited by malicious payloads. It exports no services -- so you can't talk to it remotely, at your leisure, and indefinitely!

It's "cold" 99+% of the time -- I fire it up when I need to copy something off of it (or add something new), then shut it down. Think of it as checking out (copying) things from a library; you take them out for the duration that you will need them. If you are only *referencing* them, then you can discard them when done (no need to "update" the archive with something that hasn't been CHANGED!).

E.g., when creating a multimedia presentation, I may "check out" various LARGE audio and video "libraries" (hundreds of gigabytes that I may not want cluttering up a workstation when not needed) to assemble my presentation; then DISCARD those copies when the presentation is complete. Sort of like feeding optical media into a machine for the duration that you'll need it.

I store hashes of all of the files in a database. This allows the contents of the archive to be routinely checked to ensure they are intact. I.e., a process "walks" through the mounted filesystem (individual volumes can be offline unless needed), computes the current hash of each file, and compares it against the stored (location, filename, hash) record to reassure me that the file's contents are accessible and unaltered. If a discrepancy is found, that database lets me find an alternate copy of the file so it can be restored.

[What good is an archive if you don't KNOW that it is intact?]
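A rough sketch of that integrity walk, assuming the hash database is something like SQLite with a (path, sha256) table -- the schema and names here are illustrative, not the actual setup described above:

# Integrity walk: recompute each file's hash and compare to the stored one.
# Assumed schema: files(path TEXT PRIMARY KEY, sha256 TEXT)
import hashlib
import sqlite3

def file_sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def verify_archive(db_path: str) -> list[str]:
    """Return paths whose current hash no longer matches the stored hash."""
    bad = []
    con = sqlite3.connect(db_path)
    for path, stored in con.execute("SELECT path, sha256 FROM files"):
        try:
            if file_sha256(path) != stored:
                bad.append(path)        # altered (or corrupted) file
        except OSError:
            bad.append(path)            # missing/unreadable counts as a failure
    con.close()
    return bad

if __name__ == "__main__":
    for p in verify_archive("archive_index.db"):
        print("MISMATCH:", p)

Run against a read-only mount it touches nothing; a non-empty result is the cue to go pull an alternate copy.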

My repository is the live version of the archive. Things that I am working on reside here and see frequent updates. But, again, it just "reads and writes files", no applications (other than the application that implements the repository's functionality itself). This minimizes the attack surface.

And, access to its contents is through the repository application. I can't just access any old "file" like a regular exported file share. Instead, I have to ask the repository application for the file, present credentials indicating my authority to access/update it, and wait for it to deliver the file to me.

The "precious" portions of the archive are stored redundantly -- and on different types of media. So, I can rebuild it (painfully) after a total catastrophic failure (e.g., the house burns down).

The only place "windows" comes into play is on some of the workstations. And, none of this stuff ever talks to the outside world (it's physically impossible to do so).

It's a bit inconvenient. But, fits my workstyle quite well (there aren't multiple copies of me trying to share files with each other!)

Reply to
Don Y

Your assumptions are probably true, *if* none of the PCs involved have exploitable vulnerabilities.

Unfortunately, a lot of malware-spreading has occurred when one infected PC on a LAN is able to access vulnerable services on another PC on the LAN and "break in", bypassing the normal permissions checks.

It used to be said that if you had a Windows PC, "right out of the box" with the factory-installed version of Windows on it, and connected it to a live Internet feed so it could upgrade itself from Microsoft's servers, there was a good chance that it would be "cracked" over the Internet before it had time to download and install the security fixes which closed the vulnerabilities it was shipped with. Doing a "safe" upgrade of a new machine really required hooking it up behind a "one client only, outbound connections only" firewall, to keep it condomed off from attackers until it had time to install fixes for the most recent of vulnerabilities.

So, in principle, an infected client system on your LAN might use a "zero day" attack to compromise one of your servers and encrypt files thereon, even if the server's sharing permissions were set to deny write access by clients.

The same is true for other operating systems, of course... Linux and *BSD and MacOS have all been vulnerable to one or another form of attack at various times. Windows is just the worst-off.
Reply to
Dave Platt

It has to get into the server first, but then it had to get into your network somehow too. It helps if the emergency backup device SSHes into the server to pull files, rather than the other way round. I use Qubes OS for that job.
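A sketch of that pull arrangement, using rsync over SSH so the backup box initiates every connection and the server holds no credentials for the backup box (hostnames, key path, and directories are all placeholders):

# Pull-style backup: the backup machine reaches into the server,
# never the other way around. Names below are placeholders.
import subprocess

def pull_backup(remote: str = "backup@fileserver:/srv/data/",
                dest: str = "/backups/fileserver/") -> None:
    subprocess.run(
        ["rsync", "-a", "--delete",
         "-e", "ssh -i /root/.ssh/backup_key",  # dedicated key for this job
         remote, dest],
        check=True,
    )

if __name__ == "__main__":
    pull_backup()

Restricting the server-side key (forced command, read-only access) limits the damage even if the backup box's key ever leaks.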

Again, it has to be hacked first. You want the smallest possible attack surface--that is, not samba or NFS or.... I keep just about everything under version control, and push commits to a few locations for offsite backup. Other locations magically download commits to Qubes VMs.

Assuming it doesn't have exploitable bugs. Having different types of emergency backup system, with absolutely minimal attack surfaces, and frequent backups of my actual work onto DVD is my approach.

Version control software is cool for this, because it notices when files change.
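One cheap way to exploit that: treat an unusually large batch of modifications in a normally quiet repo as a red flag before pushing anywhere. A sketch -- the repo path and threshold are mine, not anything Phil described:

# Tripwire: an unexpectedly large pile of modified files in a repo that
# should be quiet is worth a look. Threshold is arbitrary.
import subprocess

def modified_count(repo: str) -> int:
    out = subprocess.run(
        ["git", "-C", repo, "status", "--porcelain"],
        capture_output=True, text=True, check=True,
    ).stdout
    return sum(1 for line in out.splitlines() if line)

if __name__ == "__main__":
    repo = "/home/me/work"              # placeholder path
    n = modified_count(repo)
    if n > 50:
        print(f"WARNING: {n} files changed in {repo} -- investigate before pushing")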

Cheers

Phil Hobbs (In the middle of building another of those swoopy Supermicro servers, for home.)

--
Dr Philip C D Hobbs 
Principal Consultant 
Reply to
Phil Hobbs

The only ransomware I've seen encrypts the desktop. It will follow links as well, so if there are server links present and the server allows write access then the writable shares are encrypted.

Backup, rotate, backup, archive.

And the ransom is not cheap: a couple of hundred bitcoin (> ~$100k).

Cheers

Reply to
Martin Riddle

Short answer: best practice.

Ransomware usually encrypts the directory structure. That is almost instantaneous.

Ransomware searches for ALL connected devices.

Ransomware waits as long as it wants before doing its dirty work.

Virus scanners may not be able to detect the signature of newly created ransomware. If I were a nasty black hat I would do just that: release a new ransomware variant, with a new signature, every so often.

Terabyte USB3 drives are cheap and fast.

Backup software is cheap.

Make multiple backups, one per week for several months.

It may be that the backups contain the ransomware, as yet unactivated.

Backup, then air-gap those USB backups. I pull the plug on the back of the USB drive. Currently using 5T USB3 Seagates (about $130 each at the local Costco).
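A retention sketch for the "one per week for several months" scheme above, assuming backup sets land in folders named YYYY-MM-DD on the air-gapped drive (folder layout and count are illustrative):

# Keep the newest 16 weekly backup sets; prune the rest.
# Assumes one dated folder per backup set on the backup drive.
import os
import shutil

def prune(backup_root: str, keep: int = 16) -> None:
    sets = sorted(
        d for d in os.listdir(backup_root)
        if os.path.isdir(os.path.join(backup_root, d))
    )                                   # ISO dates sort chronologically as text
    for old in sets[:-keep]:            # everything but the newest `keep`
        shutil.rmtree(os.path.join(backup_root, old))
        print("pruned", old)

if __name__ == "__main__":
    prune("/media/usb_backup")          # placeholder mount point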

NAS is too slow.

I do all of that! (but on a smaller scale since I am not a great target like you)

You might consider Macrium Reflect for work and Macrium Reflect Free for home. That is what I use to make "Images" (not clones). I am not associated with these products, just a user.

Clones include hidden stuff not in the directory structure, as well as bad sectors. Think about it: I could write my nasty code to the "unused" disk area, or even fake a bad sector to hold it. Images only include directory contents and leave bad sectors behind. Incremental backup might be acceptable, but I only do full backups; I think incremental backup might be vulnerable to incrementally backing up the ransomware.

Reply to
OG

I could mumble something about "use virtual machines" but really, find a pro and do what they say.

--
Les Cargill
Reply to
Les Cargill

I've only once had to "fix" a "ransomed" computer -- and it wasn't truly "held hostage" (just an altered browser start page to make it appear to be).

If all the software did was scramble the directory structure, it would be relatively easy to reclaim much of a "scrambled drive". (An "undelete" sort of tool could do it coupled with some file/OS heuristics).

Encryption takes resources -- chief among them, time (though space is also important; if the data grows during the encryption process, there is the risk that it will consume available free space -- and, therefore, become noticeable -- before completing its task).

An infection can't reliably predict which files a user will see an interest in, soon. To be effective (not detectable until after the fact), any scheme has to be able to operate, undetected, while doing its dirty work.

One approach could be to see when the machine *seems* to be idle (e.g., left running overnight, unattended). But, predicting the future based on observations of the present is a skill worth far more "in general" than a malware producer could be expected to recover by way of ransom! :>

So, instead, it must know how to operate alongside the normally operating machine -- without noticeably increasing the system load (which would make the system appear sluggish and possibly attract the attention of the user prematurely).

As such, it would have to be able to encrypt files/structures AND decrypt them "on the fly" -- in the event the user opted to access one, by chance, before the entire job had been completed. (The malware could wait until everything was encrypted and then remove the "convenience decrypter" in one fell swoop -- rendering the drive "immediately" encrypted... apparently)

This would suggest an encryption scheme that takes relatively little effort to decrypt (in the amount of work required; that says nothing DIRECTLY about the strength of the encryption).

It could also target the media itself; though if that were the case, one would imagine you'd see drive manufacturers explicitly mentioned in the cases reported.

I think there was a NAS manufacturer whose products were targeted.

By that thinking, you NEVER know if you're safe!

Which is why scanners also look for patterns of behavior: why is this process accessing this resource so often? And, in such a "random" pattern?

And how many terabyte drives do you buy to backup your terabyte drive? :>

Any time you are actively *using* the drives, they are vulnerable. Does the machine that you attach them to (EVERY machine that you attach them to) have ANY applications that do anything other than "copy bytes to/from the drives"? Then each of those applications is a potential exploit target.

For what? Malware sitting on machine A can just as easily encrypt files on a NAS "at its leisure". Of course, if the encryption is done "in-band", then you run the risk that some other user might notice the encrypted file if their client does NOT have the same malware coordinated with the first.

Since many NAS's run FOSS that has been repackaged as a black box (most often with the manufacturer being 98.33342% clueless as to what all the software *in* that repackaged FOSS bundle actually does, its vulnerabilities, etc.), a network appliance is just as vulnerable as any other machine. Esp as most have mechanisms to allow their firmware/software to be dynamically updated. AND, are also proud to announce what they are! ("OK, I have stumbled on a Foomatic Thingamatron 3000, here. What's the proper exploit to compromise it...?")

Depends on the imaging software. Many BIOS's count on "unused" portions of disks to hold extensions to their bootstraps, etc. Leave them behind and a virgin disk would never boot -- after your "image" is installed.

On terabyte drives? :>

Reply to
Don Y

You need to read up on ransomware.

They can encrypt a directory in less than a second. Files do not need to be encrypted to screw up the drive access.

You are assuming they are using some complex algo to do it. Not necessary; just XORing the directory will stymie most users.
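That's easy to see: XOR with a repeating key is its own inverse, so the same trivial operation both scrambles and unscrambles -- weak as cryptography, but plenty to lock out the average user. A toy illustration (data and key made up):

# XOR with a repeating key is self-inverse: applying it twice restores
# the original. Weak as cryptography, but enough to make a directory
# unreadable to anyone without the key.
from itertools import cycle

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

original = b"directory entry: report.doc @ sector 4711"
key = b"\x5a\xc3"                       # arbitrary two-byte key

scrambled = xor_bytes(original, key)    # unreadable without the key
restored = xor_bytes(scrambled, key)    # same call undoes it
assert restored == original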

A NAS is slow compared to a dedicated USB3 drive for backing up. That is also taking into account all the other traffic that might be on the network.

The backups are compressed, so a terabyte of data doesn't take a terabyte on the backup drive.


Reply to
OG
