The router here directs a few ports to the honeypot, and the honeypot has no services running on those ports. Mostly an exercise in curiosity: 21, 22, 23, 110, 1433, 3306, 8086, 10000, and 12345.

What are you using to scan your sshd log? Just curious. I've tried fail2ban and sshguard, and ended up composing a homebrew that taps into syslog-ng, which gives a way to direct selected messages to the "homebrew log watcher." The end result is similar: an iptables ban is inserted.
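For anyone curious what that looks like, here is a minimal sketch of the idea; the script path, the log format matched, and the ban-on-first-failure behavior are all assumptions for illustration, not the poster's actual setup:

```shell
# Minimal sketch of a homebrew log watcher fed by syslog-ng.
# syslog-ng side, roughly:
#   destination d_watcher { program("/usr/local/bin/sshwatch.sh"); };
#   filter f_sshd { program("sshd"); };
#   log { source(s_src); filter(f_sshd); destination(d_watcher); };

# Pull the offending IP out of a failed-login line on stdin.
extract_ip() {
    sed -n 's/.*Failed password .* from \([0-9.]*\) port.*/\1/p'
}

# Insert a ban (the real watcher runs as root).
ban() {
    iptables -I INPUT -s "$1" -j DROP
}

# syslog-ng keeps the program() destination running and writes one
# message per line to its stdin, so the watcher is just a read loop.
watch_loop() {
    while read -r line; do
        ip=$(printf '%s\n' "$line" | extract_ip)
        [ -n "$ip" ] && ban "$ip"
    done
}
```

In the real script the last line would invoke watch_loop; a production version would also count failures per IP before banning rather than banning on the first hit.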

The Windows clients access the zfs storage via samba, and that has its own smbusers. Other Linux machines access the zfs storage via nfs.

Can samba access any root owned files? I try to keep samba restricted to one directory, but others make the whole machine accessible. Maybe the malware got in via Windows and samba?
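For reference, restricting samba to one directory is done per-share in smb.conf; something like the following sketch, where the share name, path, and user names are all made up:

```ini
# /etc/samba/smb.conf (illustrative fragment)
[global]
   # Only the listed users may connect at all
   valid users = alice bob

[zfsshare]
   path = /tank/share
   read only = no
   # Don't let symlinks lead out of the share tree
   follow symlinks = no
   wide links = no
```

Whether samba can reach root-owned files then depends on the share path, the connecting user's Unix permissions, and options like wide links; a share rooted at / with a privileged user is effectively the whole machine.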

If your users only belong to their own group they can't do much. Maybe that's why you were web surfing as root? I have fired the browser up as root, but only to access my modem, not the internet.

My apologies for intruding. I am in no way an expert. Listen to Neddy, he is.

No worries.

Not sure how likely a Windows vector for a Linux infection would be. It's not unknown for a vector and infection to address different platforms; Stuxnet is an example. It'd be easier to just hack a Windows machine with canned code.

I have not had a Windows install at home since early 2002, when I dumped dual boot Windows NT and Red Hat for Gentoo.
I've never used Samba, so can't comment on the possibility it was the attack vector.

Regards,
NeddySeagoon

Computer users fall into two groups:
those that do backups
those that have never had a hard drive fail.

I still find it hard to believe adobe-flash was the vector, because at minimum I would also have been affected (though not as root; but regular user or root, encrypted files are encrypted files, and they'd make money off it either way).

I have pretty much the most lax firewall policy on my main "server": it's completely open to the outside world. I do block off a few ports like samba, cups, and portmap/rpcbind, and so far so good. Pretty much everything else (dns, http, sshd, imaps, sendmail, openvpn, etc.) is open to the world.
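In iptables-restore form, that kind of default-open policy with a few explicit blocks looks roughly like this; the port numbers are the standard ones for those services, but the fragment is illustrative, not the actual ruleset:

```
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
# samba (netbios + microsoft-ds)
-A INPUT -p tcp -m multiport --dports 139,445 -j DROP
-A INPUT -p udp -m multiport --dports 137,138 -j DROP
# cups
-A INPUT -p tcp --dport 631 -j DROP
# portmap/rpcbind
-A INPUT -p tcp --dport 111 -j DROP
-A INPUT -p udp --dport 111 -j DROP
COMMIT
```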

Indeed it is quite true that whether you get attacked depends on the site visited, but I can't say I'm anything abnormal in randomly clicking sites, including sites of questionable content. Granted, I don't tend to click on things not in English, which would probably be the only exception, but these ransomers tend to target US citizens and assume English...

I do, however, heed Firefox warnings and update flash fairly quickly. It may make a difference, I don't know.

Intel Core i7 2700K @ 4.1GHz / HD3000 graphics / 8GB DDR3 / 180GB SSD
What am I supposed watching?

If flash was the vector, whether you get attacked or not depends on the website you visit.

My approach so far has been to use a separate limited user account for general websurfing, another for email, another for documents, etc.
Each user requiring network access must be in a network_user_group, with the relevant application started via sg network_user_group; otherwise the firewall blocks access. The websurfing user has only browser ports open; the email user has only imap/smtp/pop3 ports open, so clicking on links in emails fails.
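The enforcement half of that can be done with iptables' owner match, which tests the group of the process that generated the packet (which is what starting the application with `sg` sets). A sketch, with made-up group names and a representative port selection:

```
# Illustrative OUTPUT policy keyed on the owning process's group.
# The application is started e.g. as:  sg websurf -c firefox
*filter
:OUTPUT DROP [0:0]
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# websurfing group: DNS plus browser ports only
-A OUTPUT -m owner --gid-owner websurf -p udp --dport 53 -j ACCEPT
-A OUTPUT -m owner --gid-owner websurf -p tcp -m multiport --dports 80,443 -j ACCEPT
# email group: smtp/imap/pop3 (plain and TLS) only
-A OUTPUT -m owner --gid-owner mailuser -p tcp -m multiport --dports 25,110,143,465,587,993,995 -j ACCEPT
COMMIT
```

With the default OUTPUT policy set to DROP, any process whose group is not whitelisted simply cannot reach the network.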

I'm hoping this strategy will limit damage to a user's home area, and since I move anything I want to keep to somewhere network-access users have either no access or read-only access to, encryption should be limited to things I can do without.

Well, if you do that, you should be able to easily isolate the vector... but what vectors have you run across, and can we learn from them?

Then the other side of the coin: if it's impossible to get infected, was the effort spent worth the time?

Not to sidetrack the thread too much: my "stuck at" ftp problem wasn't that I couldn't conjure iptables rules. It was along the lines of: if the point of locking down outgoing packets is to stifle interlopers from "calling home," that point is lost when all the unprivileged ports are opened to NEW OUTPUT packets in order to service active ftp.

Ahhh, that -m helper --helper ftp is something I haven't seen or used before. It appears to check the conntrack state associated with the packet, and only if that relationship meets the criteria is the packet allowed to pass.
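For reference, the helper match leans on the conntrack ftp helper: the kernel module watches the PORT/PASV exchange on the control channel and marks the resulting data connection RELATED. The textbook arrangement looks roughly like this (illustrative, outgoing-ftp only):

```
# First:  modprobe nf_conntrack_ftp
# Then, instead of opening all unprivileged ports:
*filter
# the ftp control channel
-A OUTPUT -p tcp --dport 21 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
# only data connections the ftp helper has tied to a control channel
-A OUTPUT -m conntrack --ctstate RELATED -m helper --helper ftp -j ACCEPT
COMMIT
```

So random unprivileged-port traffic stays blocked; only packets the helper has associated with an existing ftp session pass.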

Thank you.

Testing showed that this alone doesn't work: the second outgoing ftp packet, the one to a non-privileged port, is not identified as RELATED, even with your suggested iptables entry. There is more than one fix; I took the easy way out.

I just read there exists a "ransomware" encrypter that encrypts your files but the private key that was used either never existed or is deleted... so even if you cough up the money, you still can't decrypt...

So anyone who even thinks about coughing up should at least get proof that they can decrypt or have the private key.

I do wonder if the wave of the future is to simply do privilege separation between every process, and make it hard/annoying for users to pass data between processes (as this is the whole point of privilege separation). Or we really need to find and fix the actual security holes, which I'd rather see...

I think there will be a combination of approaches, software with zero exploits is probably not achievable - at least not using methods we have at our disposal today. Something like qubes might be the approach taken in future ...

Quote:

I just read there exists a "ransomware" encrypter that encrypts your files but the private key that was used either never existed or is deleted... so even if you cough up the money, you still can't decrypt...

So anyone who even thinks about coughing up should at least get proof that they can decrypt or have the private key.

I do wonder if the wave of the future is to simply do privilege separation between every process, and make it hard/annoying for users to pass data between processes (as this is the whole point of privilege separation). Or we really need to find and fix the actual security holes, which I'd rather see...

I believe that's where systemd is headed: containerizing all applications, essentially cheap-virtualizing everything. Theoretically that might do exactly the separation you're looking for. But for the next step, after containerization, I expect to see some sort of ActiveX-like thing added to systemd so that the containers can share data. Potentially add security, then take it away. (again)

.sigs waste space and bandwidth

Things like Android show that even with privilege separation, one can still do lots of damage.

It seems like it's fear that got us to this state...

jonathan183 wrote:

I think there will be a combination of approaches, software with zero exploits is probably not achievable - at least not using methods we have at our disposal today. Something like qubes might be the approach taken in future ...

We need less complex software that can be proven correct... perhaps software-writer liability up to the cost of the purchase price? Then again, if the interaction between two pieces of software causes a security hole, who's to blame? (Blame both!)

Quote:

We need less complex software that can be proven correct... perhaps software writer responsibility up to the cost of the purchase price?

That's already the case here. Count how many things you have installed where the license includes a long all-caps disclaimer "no warranty, not even implied warranty that this will actually do what it says" - and see if you can find a single one that doesn't.

It seems that everyone else spanked you about running anything at all as root, and I'm sure you already covered that yourself. AFAIK there's no way out except a complete wipe and reinstall (delete and recreate the partition table too). Might want to check for BIOS malware while you're at it.

Please understand this is NOT an "I told you so," I've been bitten by malware several times over the last 30 years. The old timers who tell you what you should have done have almost certainly been there before. That's why we've thought it through so thoroughly.

I might point out that the only backups which are safe vs ransomware are the ones which are:

Physically removed from all running hardware except during the times they're used.

Frequent enough to minimize the lost data

Complete enough to replace the data you lost

I might also point out that the best backups:

Either completely restore the device (software and all), which is complicated, or completely restore data and custom settings, leaving the software to a reinstall

Are each complete backups not relying on some prior backup

Are on offline media, moved to one or more physical sites not attached to the computer's site

Are NOT solely an rsync'd copy of current production. (meaning that deleted files and modified files both have an audit trail of their prior contents back through previous backups)

RAID IS NOT A BACKUP! It's an insurance policy against loss of data since the most recent backup in the case of a hardware failure. It does not protect against accidental deletion or software errors.

Personally I keep my backups on removable SATA drives (I have a slot-loaded bay which holds a standard SATA drive at each site) without any compression or archiving. The directories I backup are readily searchable and readable on pretty much any linux box. My backup process is scripted but not automated, because I only attach the drive when I make a backup, and then immediately eject it.
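A process like that might be scripted roughly as follows. This is a sketch under stated assumptions: the mount point, the use of rsync with --link-dest, and the date-stamped snapshot layout are illustrative choices, not the poster's actual script. The hard-link trick gives each snapshot a complete, independently-browsable tree (satisfying "complete backups not relying on some prior backup") while keeping prior versions of modified and deleted files around for the audit trail:

```shell
#!/bin/sh
# Sketch of a scripted, manually-run backup to a removable drive.
BACKUP_ROOT=${BACKUP_ROOT:-/mnt/backup}

# Newest existing snapshot directory (ISO dates sort lexically).
latest_snapshot() {
    ls -1d "$1"/20* 2>/dev/null | tail -n 1
}

make_snapshot() {
    snap="$BACKUP_ROOT/$(date +%Y-%m-%d)"
    prev=$(latest_snapshot "$BACKUP_ROOT")
    if [ -n "$prev" ]; then
        # Full tree every time; unchanged files hard-link against the
        # previous snapshot, so old versions stay cheap to keep.
        rsync -a --link-dest="$prev" "$HOME/" "$snap/"
    else
        rsync -a "$HOME/" "$snap/"
    fi
}
```

Run by hand after attaching and mounting the drive, then unmount and eject immediately, so the backup is never online when ransomware strikes.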

Quote:

We need less complex software that can be proven correct... perhaps software writer responsibility up to the cost of the purchase price?

That's already the case here. Count how many things you have installed where the license includes a long all-caps disclaimer "no warranty, not even implied warranty that this will actually do what it says" - and see if you can find a single one that doesn't.

As long as we didn't pay for it, it is our responsibility if it breaks; so with all GNU software, we get to keep the pieces. I suppose the comment was not a solution but rather a hope that we have a chance to detect and fix it ourselves. Complexity is a big problem, though, and hoping that for-pay software will indemnify our problem is a fat chance (or do we need more laws...).

---

Backups are costly as well if we have to go back many, many revisions. I'm hoping that there won't later be "dormant" or "logic bomb" software that looks innocuous at first, and gets backed up for many revisions. Once again, complexity is the killer.


Are you guys actually going there? Even for-pay software does absolutely nothing to protect the computer from the idiots at the keyboard. The OP ran a web browser as root and went out to the unprotected Internet. That's the same thing as putting your car through the crusher and then expecting the original manufacturer's warranty to cover it. If people start suing the programmers for idiotic shit done by the users, then programmers will stop writing software. Or they'll charge enough to protect themselves from people who do stupid shit.

This is, by the way, exactly the reason health care costs so much in the USA. Having been to other countries where a full-spine MRI costs USD $5 without insurance and you can get half a dozen cavities filled along with a crown and a root canal for USD $250 -- again without insurance -- I can assert that litigation against others for one's own mistakes can have no good long-term outcome.

However, we still don't know for sure what the entry mechanism was. We avoid running stuff as root because we know there may be bugs in it. This is not like driving a car into a crusher; it's like driving a car off road because the car should have been fine off road, but history has shown that the manufacturer sometimes installs questionable springs and shocks. Without scrutinizing the car, we won't know. Some people know it's very likely the off-road suspension was not installed, but have no way of checking (and neither does the manufacturer), so they just drive on paved roads so they never lose the suspension and crash from steering failure. These are the people who don't run as root.

Running code on the native machine from within a VM is completely improper behavior... Running as root should not have been an issue; after all, VMware needs to be run as root, and you'd certainly be up in arms if running a code sequence inside the VM suddenly infected your host with encryption ransomware. It's not like the OP gave permission to run code that encrypts his computer. The buggy code allowed something that should never have been allowed.

I am tired of people charging money for crap software that has bugs, never mind the security bugs. It's always time-to-market, time-to-market. And people think buggy software is somehow "acceptable." No. This is bad practice, and it needs to stop, despite the bean counters. I can't say it's 100% their fault; it's one of those things where someone jumped, it wasn't so bad, and now everyone else needs to follow suit or be left behind.

---

Where did those other countries get the MRI machine? They could not have recouped the cost of the machine at $5 per scan unless there was a government subsidy somewhere that you don't see. In the US someone made an investment; the machines are owned by only a few for-profit companies, so they can charge however much they want. Not an insurance issue, pure greed.

In any case, the USA is clearly a litigious society... unfortunately, the underlying reason is that everyone wants to be equal to everyone else. I'll leave it at that.