
Weblver1 writes "A recent report by web security firm Finjan shows how easily data can be accessed on PCs by malware that circumvents existing defenses. Because the attack code was obfuscated, antivirus software and static Web filters could not identify it as a threat. The report walks through a real-life infection scenario step by step, and tracks what happens to the stolen data. This demonstrates how easy stealing sensitive data has become, especially given the abundance of easy-to-use DIY crimeware toolkits. Finjan's report is available here (PDF, registration required). Shortly after this report, security firm RSA released its findings on a huge cache of stolen 'virtual wallets' in one of the largest discoveries of data stolen from computers compromised by the Sinowal trojan. While the trojan can be traced back to 2006, it has managed to become more productive over time with frequent variants. Given the scale, the ease of use, and hiding techniques that make infections extremely difficult to find, it's no wonder today's crimeware achieves such 'impressive' results."

For users of systems that distinguish between text and binary mode
(you know who you are), add a library call that specifies binary mode
for stdout as the first statement of main(),
or use freopen("ioccc_ray.ppm", "wb", stdout) and do not use redirection.

A freely distributable command-line version of Microsoft Visual C
exhibits an optimizer bug when compiling this entry. Disable /Og for
best results.

The judges were able to figure out how to control position
(in all 3 coordinates), size, and color (to some extent) of the balls.

Selected Author's Comments:

It is possible to write some kinds of programs in C without using reserved
words. For very short and trivial programs, it usually isn't very hard to
write a variant using no reserved words, but with this program I want to
show that non-trivial programs can also be written this way. This IOCCC
entry contains no reserved words (I don't count 'main' as a reserved word,
although the compiler gives it special meaning) and no preprocessor
directives.

The program is a small ray-tracer. The first line of the source code may
be modified if you want the resulting image to be of some other resolution
than the predefined. The 'A' value is an anti-alias factor. Setting it to
1 disables the anti-aliasing feature (this makes the output look bad), but
setting it too high makes the trace take a lot more time to complete.

The ppm image can then be viewed using an image viewer of your own choice.
(Running the ray-tracer may take several minutes, even on fast machines,
so be patient.)

I am very much aware that I'm breaking the guidelines. For example,
the word 'int' is a reserved word and therefore all variable
declarations are implicit. There will no doubt be _lots_ of warnings,
no matter which compiler is used. Still, the source code should be
word-length-independent and endianness-independent.

Another reason for writing code without using reserved words is that many
text editors will make all reserved words turn BOLD when printed on
paper. Since I care for the global environment, we shouldn't waste any
more laser toner, or ink, than necessary. Everyone should write C code
with no reserved words, and our world will be a better place.

Once it has the potential to run on your system, you're basically already screwed. Antivirus companies help a little by catching the known worms and viruses that have been around for a while, but in return usually slow the system down as well. As always, the only thing you can do is keep your software updated, don't run programs or code you don't trust, don't let people on your system that you don't trust to keep the system clean, and hope for the best.

Why bother with anti-virus for the system itself? (Note: anti-virus is acceptable for mail servers or file servers.)

Instead, why not focus on identifying the known good code... and quarantining anything else?

Maybe there aren't an infinite number of ways to obfuscate code (eventually your obfuscation would exceed the capacity of the local hard drive) but there are FAR more ways to obfuscate code so it bypasses the anti-virus scanners than there are bits of known good code.

I should be able to boot from some form of rescue CD with a HUGE list of filenames, checksums, etc... and what application they are associated with... and validate every single file on a workstation. And then quarantine everything else so it can be manually verified.
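
The validation pass described above can be sketched in a few lines. This is a minimal illustration, assuming the rescue environment carries a manifest mapping absolute paths to SHA-256 checksums (the manifest format is hypothetical, not from any particular product):

```python
import hashlib
import os

def sha256_of(path, bufsize=1 << 20):
    """Hash a file in chunks so large binaries don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def validate(root, manifest):
    """Compare every file under root against a {path: sha256} whitelist.

    Returns (verified, quarantine): files that matched the manifest,
    and everything else, which is held for manual review."""
    verified, quarantine = [], []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if manifest.get(path) == sha256_of(path):
                verified.append(path)
            else:
                quarantine.append(path)  # unknown or modified file
    return verified, quarantine
```

A real rescue CD would also have to mount the workstation's disk read-only and cope with a manifest far too large to hold in memory, but the core loop is no more complicated than this.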

There, even if you get infected, the disinfection is simple AND effective.

The user will ALWAYS be the weakest link. As the article I linked to stated, if education could work, it would have worked by now.

Instead, focus on building systems that MINIMIZE the vulnerability and that make it EASY to RECOVER when it is cracked.

Quarantining code is folly.

That's your opinion. I can show that it does work.

Active and varied defenses and re-writes and restores to RO media help.

Huh? How about some specifics? Because that isn't making sense to me.

I scrape so much crap from friends' and relatives' machines that I've got BartsCD built for most of them. I just re-write the registry after active scans, and re-write the kernel, vmm, and browser crap.

How do you "re-write the registry"?

Instead, imagine an anti-virus system that refuses to allow code to be installed in the system directories (or registered) unless it matches the checksums, names, etc. on a list of known good apps. Then it just becomes an issue of keeping that list updated with the latest patches and upgrades.

Instead of downloading the daily list of suspected BAD patterns, you'd be downloading a list of known good patterns. And that would only need to be updated prior to something being installed on the system.

For a business looking to manage thousands of PCs... all with the same basic apps and patch levels and such... this would be so much easier than trying to maintain the current anti-virus system (engine upgrades, signature upgrades). Nothing would be installed that was not pre-approved by their IT department.

It could still work on Linux. Suppose you had a program that checks the md5 of every executable file and library on the system against the distro's repository, then creates a list of the remaining files to be confirmed manually. People writing software could simply manually mark their own software, or non-packaged software as needed.
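
On a Debian-style system the raw material for this already exists: each installed package ships an md5sums file under /var/lib/dpkg/info. A rough sketch of a checker for that format follows; the two-column "digest  relative/path" layout is the only assumption, and the root parameter exists so the check can run against a mounted disk rather than the live system:

```python
import hashlib
import os

def parse_md5sums(text):
    """Parse dpkg-style 'md5digest  relative/path' lines into {path: md5}."""
    manifest = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        digest, rel = line.split(None, 1)
        manifest[rel.strip()] = digest
    return manifest

def verify(manifest, root="/"):
    """Yield (path, status) for each entry: 'ok', 'missing', or 'modified'."""
    for rel, expected in manifest.items():
        real = os.path.join(root, rel)
        if not os.path.isfile(real):
            yield rel, "missing"
            continue
        with open(real, "rb") as f:
            actual = hashlib.md5(f.read()).hexdigest()
        yield rel, "ok" if actual == expected else "modified"
```

Anything on disk that appears in no package's manifest would then fall through to the manual-confirmation list the parent describes.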

People writing software could simply manually mark their own software, or non-packaged software as needed.

So how would malware not mark itself in the same way?

The "mark" would need to be made using something like a public-key signature system. The signature contains the path of the OK'd file, its MDx hash (it doesn't particularly matter which hash you choose), and the public key ID of the person who says it's OK; then sign it with that person's private key. The "OK" mark should be trivial to check then.
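
The shape of such a mark can be sketched as follows. Python's standard library has no public-key signatures, so an HMAC with a shared key stands in for the real sign/verify pair here; an actual deployment would use something like Ed25519 or X.509, and the record fields are only illustrative:

```python
import hashlib
import hmac
import json

def make_mark(path, file_bytes, signer_id, key):
    """Build an approval record: the file's path, its content hash, and
    who vouched for it. The HMAC stands in for a public-key signature."""
    record = {
        "path": path,
        "sha256": hashlib.sha256(file_bytes).hexdigest(),
        "signer": signer_id,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return record

def check_mark(record, file_bytes, key):
    """True only if the signature verifies AND the file still matches."""
    unsigned = {k: v for k, v in record.items() if k != "sig"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(record.get("sig", ""), expected):
        return False
    return record["sha256"] == hashlib.sha256(file_bytes).hexdigest()
```

Note that the check fails both when the file's bytes change and when any field of the record itself is tampered with, which is the property the grandparent is after.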

The "mark" would need to be made using something like a public-key signature system. [...] In addition, since you're talking about someone's within-company ID

The system you describe is very similar to the existing Authenticode system, with the company as the root CA. It would work within a sufficiently large company, which applies something like Windows group policy across a domain. But do you know any way this system could be extended to a home or home office environment? Adware and surveillanceware published by large companies routinely gets signed, and legitimate free software maintained by amateurs remains unsigned because the extra $200 per year to keep the

The system you describe is very similar to the existing Authenticode system, with the company as the root CA.

I'll take your word for it. "Authenticode" rings a (faint) bell.

It would work within a sufficiently large company, which applies something like Windows group policy across a domain.

I'll take your word for it. I remember trying to get my head around the difference between a domain and a subnetwork yonks ago, and failing. It seems to be an incoherent mess - every company implementing different things d

I don't know what you mean by "home" or "home office" environment - at least in some way that differentiates it from any other office environment.

A home environment has no dedicated IT personnel to try new programs in a sandbox to determine that they're not likely to misbehave in production. That's why some Slashdot users have proposed [slashdot.org] installing every application into a separate sandbox, but then that would involve work on Microsoft's part to add support in the system libraries and the Windows user interface for managing sandboxes.

I've got better uses for the money. But then, I don't make my living by writing software. [...] Is $200 more than you're willing to pay as part of the cost of being in that business?

As you started to recognize, not everybody who develops software does it as a business. Some are employed in other field

If you get your pet free software project picked up by a company or a software foundation, signing software for public use is well worth it. Otherwise, $100 to $200 per supported platform per year can make it a really expensive hobby.

You need a different signature key for each platform?? Weird. And probably unhealthy.

This bloody "new" discussion system seems to have lost my previous reply. What I want to know is, what did I do to get it turned on, so I can avoid doing that in future?

Do you know of a cross-platform code authentication system? You seem to have some sort of professional stake in code development.

I would guess that the only way such could work would be to distribute a package of pre-compiled binaries, separated into chunks (libraries) which are processor-specific versus processor-agnostic, together with lin

I am a developer - I run my compiler, it generates an EXE - it gets quarantined...

It simply is not practical in a "real world" situation except on a locked-down, one-task PC.

A firewall, the latest updates, and a user who cannot easily install or run new programs are far more secure (not perfect, but more reliable).

I would like to know how the PC was infected: this is the only interesting bit. What happens after is largely irrelevant; once a PC can be persuaded to run arbitrary code, the payload can be anything.

That will kill Web 2.0 technologies, or anything where content/service providers expect you to run their code on your system. None of the schemes for whitelisting, signed certificates, checksums, etc. can handle the sheer volume of apps that these new services expect you to handle. They work well for manually downloaded and installed applications and packages, but not when every kid with a FaceBook page has a game or other cute widget they want you to download.

It is perfectly possible to run programs that aren't trusted. You just can't allow them to do certain things. This is the main principle of sandboxing, and a good operating system should sandbox every single application completely, unless someone with administrator privileges requests otherwise. And even such requests should only be exceptions to the sandboxing.

I am running every program I don't trust sandboxed with sandboxie [sandboxie.com]. It isn't a perfect solution as it isn't as well integrated into the system as it c

It's possible to write a known good kernel and a matching set of registry hives (the whole thing can be dangerous) along with vmm, hiberfile and so on to DVD. Using BartsCD, one boots XP, does the restoration, and easily moves on.

There's a certain amount of sense in trying to protect groups of users, in business environments, and so on. An individual will be eventually cracked somehow on Windows. It's tougher to do on Linux, and still tougher on MacOS and xBSD and OpenSolaris.

The problem with a whitelist is that in order for it to be effective it can't have too many false positives. Having the whitelist validation program go ape shit over every file that isn't on it isn't all that helpful. I don't really want to have to hit ignore for every file in /home and most of my configuration files.
(To get around this you could just update the whitelist, but this would have to be done every time a file is edited, which is too frequent - so what is the right frequency, etc.)
Also whit

Yes. Trying to verify that a system is uncompromised from a possibly compromised system is idiotic. If a person doesn't understand this then they are not a competent programmer.

I've said for years that most "anti-virus" companies are engaged in fraud, and the CEOs of most "anti-virus" companies should have been in jail for it a long time ago. It shows how low the IT industry has sunk when even quite basic fraud like this is allowed to continue. At the very least there should have been a class-action lawsuit.

The only way to truly verify a system is good is to do it from a known good system. For a standalone PC that means booting off known-good read-only media, usually a CDROM, and using that to verify the checksums of all the critical files on the hard disk. To handle updates, the CDROM needs to have enough smarts to download signed checksums of updates off the net and store them in encrypted form (so malware can't tamper with them) on read-write media, preferably a memory key inserted into the system only when booted off the read-only media.

Part of the reason this has not been done until now is that third parties could not easily read the proprietary, undocumented NTFS file system, that BS OS licensing made it difficult and expensive to have a separate boot, and that M$, incredibly, stopped shipping CDROMs of their OS. Now that NTFS has been reverse engineered, it is possible to create a third-party Linux CDROM that can do all of the above. This is the only practical way to stop the Windows virus pandemic. Ironic that the best way to verify a Windows system may be to use a Linux system.

To anticipate a few questions:

Yes, Joe Sixpack is perfectly capable of inserting a CDROM, pressing the reset key, and following the limited instructions (i.e. get professional help if a virus is found, or recover files off the known good distribution media).

Yes, this approach is perfectly capable of protecting Joe Sixpack's personal files if the CDROM has enough smarts to back up personal files and checksum them every time it is run. Even if it doesn't do this, it's still verifying that the system is uncompromised.

Yes, it's perfectly capable of verifying every executable on the system, including those not initially distributed with the OS.

Yes, both whitelist and blacklist checksumming are possible at the same time. What a concept!

Good system/network administrators already run regular, automatic checksum verification on all the systems they manage, to confirm those systems have not been corrupted, whether by a virus or a hardware error. It works. Those who don't are mediocre administrators at best.

M$ is perfectly capable of creating such a CDROM; however, those "professionals" have chosen not to, allowing the virus/bot pandemic to continue. And they wonder why some people don't like them.

---

Ownership, by definition, is the right to control something. Any ethical (not legal) argument based on "because they own it" is bogus.

The only way to truly verify a system is good is to do it from a known good system. For a standalone PC that means booting off known-good read-only media, usually a CDROM

Here you have a slight problem with implementing your suggestion: the CPU boots off the read-write flash chips on the motherboard, not off the CDROM.

If you update executable files or libraries, you'd have to re-whitelist them. That means you essentially have to turn off the whitelist, update, and then tell the whitelist to re-baseline to the new system. While ideally that would work, it puts a lot of responsibility on the user, which won't work out so well.

For Linux it could be easier, though, since you could fold that into the package manager (apt/yum/whatever); but because software on Windows all updates differently, it would be a nightmare.

Problem being that there is no such thing as known good code. Even if you saw all of the source code and compiled it yourself, there is always the possibility that the compiler or linker/loader introduced a back door (this problem has been known for a long time). The best you could say is that certain code is trusted. On the other hand, there is such a thing as known bad code.

there is always the possibility that the compiler or linker/loader introduced a back door

Problem being that there is no such thing as known good code.

I disagree. You can use gdb to step through the compiled binary and watch what it does, but since it is not yet trusted, even when doing that, do it in a VM; same thing with disassemblers. If I've scoured through all of the assembly and still find nothing, I'd say it's known good code, though I wouldn't say the same for the libraries it calls until they are inspected too. You would want it to be a very special program to justify that kind of work, though.

Don't forget, too, that the toolchain you're using to do your diagnostics can be the source of the hack.

...You can't trust code that you did
not totally create yourself. (Especially code from companies
that employ people like me.) No amount of
source-level verification or scrutiny will protect you
from using untrusted code. In demonstrating the possibility
of this kind of attack, I picked on the C compiler.
I could have picked on any program-handling program
such as an assembler, a loader, or even hardware microcode.

I've always found the concept of 'computer security' fairly strange. It's your computer, you control what runs on it... why are people running code that acts counter to their interests? Why are operating systems designed in such a way that a user can have no idea what a program is going to do?

You serious? "Just don't run code that is malicious" is a ridiculous argument. What if it's a shareware program they get that's been tainted to also install a trojan? What if it's a worm making use of a 0-day vuln, and they don't even have to manually run something? Computers are just more complicated than that, sorry.

In fact you are wrong. Computers aren't as complicated as that. It's easy enough to design a system that makes obscuring the purpose of a piece of code impossible, and then have all programs define a contract with the system as to what resources they need to use on the system. This information is conveyed to the user in a nice way, and now the user will know straight away if a program is going to act maliciously before they run it.

0-day arbitrary code execution vulnerabilities are created due to a small set of thin

and then have all programs define a contract with the system as to what resources they need to use on the system

In other words, you're recommending sandboxing. That is a solved problem on OLPC [laptop.org] and on FreeBSD [wikipedia.org], but as far as I can see, no such software for creating and managing sandboxes comes with home editions of the Windows operating system.

This is only an issue for a complete Turing machine; by limiting what a program can do, you can avoid this problem.

The relevant part of a possibly malicious program, to a user or admin, is how it interacts with the rest of the system, because whatever it's doing is mostly irrelevant until it's outputting something somewhere. This is very easy to notice and impossible to obscure, as all of this interaction goes through calls to system libraries.

and then have all programs define a contract with the system as to what resources they need to use on the system

In other words, you're recommending sandboxing. [...] As far as I can see, no such software for creating and managing sandboxes comes with home editions of the Windows operating system.

The relevant part of a possibly malicious program, to a user or admin, is how it interacts with the rest of the system, because whatever it's doing is mostly irrelevant until it's outputting something somewhere.

And sandboxes are designed to control how a program interacts with the rest of the system.

I was recommending language-based system security (Singularity, Inferno, etc.).

Most languages still can't parse string arguments deeply enough to distinguish open() in the user's home directory from open() elsewhere. That's the responsibility of runtime security such as ACLs or capabilities, and sandboxing is just a finer-grained way to assign capabilities than the traditional user/group model.
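
The runtime check being described here - deciding whether a requested path falls inside a directory the program has been granted - is small enough to sketch. This is a hypothetical helper, not any particular sandbox's API:

```python
import os.path

def allowed(requested_path, granted_roots):
    """Decide whether an open() request falls inside one of the granted
    directories. realpath() resolves symlinks and '..' tricks first."""
    real = os.path.realpath(requested_path)
    for root in granted_roots:
        root = os.path.realpath(root)
        # commonpath equals root exactly when real lies under root
        if os.path.commonpath([real, root]) == root:
            return True
    return False
```

The point of resolving before comparing is that naive string-prefix checks are fooled by paths like /home/user/../../etc/passwd.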

Why even run untrustworthy code?

Because the major vendors of computer hardware for use in a home environment have declined to provide a convenient way to mark code developed by an amateur programmer as trustworthy.

And sandboxes are designed to control how a program interacts with the rest of the system.

Sandboxing is usually about controlling an untrusted program and denying it access to requested resources it's not authorised to access. I'd prefer that a program was trusted and didn't make requests for access to unauthorised resources.

Most languages still can't parse string arguments deeply enough to distinguish open() in the user's home directory from open() elsewhere...

Yeah, so you don't even include open() in the standard lib of the language, so the programmer can't even make the request. Then you create a different syscall that's more restricted. Similar to how the Bitfrost #P_DOCUMENT [laptop.org] section handles it.
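
That idea - never hand the program open() at all, only a pre-restricted substitute - can be sketched like this. To be clear, this illustrates the principle in user-space Python; Bitfrost itself enforces it at the platform level:

```python
import os

def make_restricted_open(sandbox_dir):
    """Return an open() substitute confined to sandbox_dir. The program
    receives only this function and never sees the unrestricted open()."""
    base = os.path.realpath(sandbox_dir)

    def restricted_open(relative_path, mode="r"):
        # Resolve the request against the sandbox and refuse escapes,
        # including '..' components and symlink tricks.
        target = os.path.realpath(os.path.join(base, relative_path))
        if os.path.commonpath([target, base]) != base:
            raise PermissionError("outside sandbox: %r" % relative_path)
        return open(target, mode)

    return restricted_open
```

Of course, in plain Python the program could just import the builtin back; the sketch only shows the interface shape. Making the restriction actually stick requires language or OS support, which is exactly the grandparent's point about Singularity-style systems.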

Why even run untrustworthy code?

Because the major vendors of computer hardware for use in a home environment have declined to provide a convenient way to mark code developed by an amateur programmer as trustworthy.

As always, the only thing you can do is keep your software updated, don't run programs or code you don't trust, don't let people on your system that you don't trust to keep the system clean, and hope for the best.

But when people say that we should have only one distro, and that it's a problem that different distros use different versions of software and insert their own patches...this is why they are wrong wrong wrong.

Except that a lot of distributions are based on only a handful of larger distributions. Any bugs present in the parent distribution can surface in all of the others that are based on it. Debian's OpenSSL flaws are a good example.

The differences between Linux distros are big enough to annoy programmers with better things to do, but small enough that you can still write a virus that works on all of them if you want to. So it's actually the worst of all possible worlds.

I'm not sure if this is still the case, but back in the day using an exe packer (like upx [sourceforge.net]) on a trojan or virus would prevent detection by most anti-virus software, and as an added bonus the payload also becomes much smaller.

Of course this doesn't really apply to web browser hijacks, but you can at least intercept a lot of your outgoing traffic. The problem is that most people just click the OK button willy-nilly because they want to see it go away.

I definitely know what I'm doing and I use my outbound firewall to its fullest extent. Having the ability to proactively determine what software can and can't touch the network, be it establishing a connection or binding to a port, in conjunction with a proper hardware solution provides not only good protection, but also serves as an early warning system when an unknown program attempts to go to an unknown site for an unknown reason.

Granted, outbound firewalls are not perfect. If a whitelisted application is compromised, then this firewall doesn't provide much protection. This is why outbound firewalls should be but one of several items in your security toolbox.

However, to wave your hand and claim they are only for people who don't know what they are doing shows a level of arrogance that usually gets corrected only after you are compromised.

Most of the major antivirus products should be able to detect this, except maybe Norton. Kaspersky adds detection code to its database for newly discovered variants within minutes of when they appear - 17 times on 10/24/2008, for example [kaspersky.co.uk]. With a metamorphic engine this advanced, it's likely that you can find a variant that Kaspersky will never see. Kaspersky is now watching nearly 700 variants of this one threat to date. This is what makes the databases for a modern antivirus engine so huge.

I don't think the infamous "isroot = 1" is an example of obfuscated code.

It is actually quite straightforward. I didn't RTFA (but then, who does? ;-P), but I guess the "obfuscated" malware is something like a just-in-time code spitter: the attack code is generated at runtime, on demand, in an obfuscated manner, bypassing common antivirus software. If the payload is not hard-coded, the malware can masquerade as an innocuous application more easily.

According to the Register article, the method of attack was DOM manipulation. The code waits until it sees a login form from a targeted site, and then it injects markup that sends the credentials to the bad guys on submit.

We can speculate on whether that's true or not, but if it is, then it should be fairly easy to use a bit more JavaScript (why not? heh.) to check the integrity of the DOM. Banks should also be randomizing the structure of their forms and the names/IDs of form fields as a matter of course.

Of course the attacks will evolve, but as long as you're going to play the game you've got to keep moving.
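
The field-name randomization suggested above is easy to sketch server-side: each time the login form is rendered, the real field names are replaced with one-time random tokens kept in the server session, so injected script looking for a stable 'username'/'password' field has nothing fixed to target. The session dict and field names here are illustrative, not from any particular framework:

```python
import secrets

def randomize_fields(real_names, session):
    """Generate one-time random form field names and remember the
    mapping server-side; returns the names to render into the form."""
    mapping = {secrets.token_hex(8): real for real in real_names}
    session["field_map"] = mapping
    return list(mapping)

def decode_submission(form_data, session):
    """Translate a submitted form back to the real field names, then
    discard the one-time mapping so it can't be replayed."""
    mapping = session.pop("field_map")
    return {mapping[k]: v for k, v in form_data.items() if k in mapping}
```

Note that fields injected by an attacker simply fall out of the decoded result, and the mapping is consumed on first use.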