Posted
by
CmdrTaco
on Monday May 16, 2011 @10:40AM
from the boot-the-root dept.

Trailrunner7 writes "A new version of the venerable Alureon malware has appeared, and this one includes some odd behavior designed to prevent analysis and detection by antimalware systems. However, this isn't the typical evasion algorithm, as it uses some unusual encryption and decryption routines to make life much more difficult for analysts and users whose machines have been infected. Alureon is a well-known and oft-researched malware family that has some rootkit-like capabilities in some of its variations. The newest version of the malware exhibits some behavior that researchers haven't seen before and which make it more problematic for antimalware software to detect it and for experts to break down its components."

On the other hand that could be achieved with any USB stick with a write protect switch.

That would be the proper procedure that I would find perfectly acceptable, but all the present-day USB sticks with write protect do it in software. It's not like the floppies, which made it physically impossible to write by literally turning off the ability to write. It's one of the giant steps backwards that the industry has made... intentionally? I don't know, but my suspicions run high.

I did that because when I worked in repair I needed working copies (they get damaged) that would not lose the write-protect tape. You can buy disks without the notch, so there is no tab to fall off. To prevent accidents, the switch I put in the drive was a reed switch. It required knowledge of the switch, as well as the pocket screwdriver with the magnet in the end, to turn on the hidden write switch while making another working copy of the diagnostic floppy disks.


None of the floppies in the field service kit had a write enable notch. It makes no sense taking one customer's infection and giving it to someone else. The modern replacement is a burned DVD instead of a thumb drive. Use read only media for any of your service materials. No exceptions.

Yeah, the 8-inch floppies actually got it right. They had a Write ENABLE sticker. If the notch was NOT covered over, then the disk was automatically write protected. The rationale was that a sticker can NEVER "fall ON". I would imagine that whatever evil engineer inverted that logic did it because he was either pressured to, or was tired of digging around to find write-enable stickers...

More likely they wanted a way to "write-protect" a disk in such a way that the user couldn't* make the disk writable again. Yep, that's right... DRM.

The W/P switch in old floppy drives wasn't a "request not to write"; it actually disabled the HARDWARE enable input to the WRITE CURRENT driver in the R/W head. The only way it could fail was if the microswitch that read the "tab" (or the optical sensor in the 3.5 inch version) failed; or, if you had an Apple ][ disk drive, if static zapped the 74LS125 on the 5.25 Shugart drive's board (a somewhat common, VERY nasty problem, which resulted in the write/erase c

The W/P switch in old floppy disks was just a "request not to write": although the floppy drives themselves (if they were correctly engineered) implemented this in such a way that the write hardware was actually disabled if that switch couldn't close, there is absolutely nothing preventing the floppy disk from being written by a drive that doesn't care whether the write-protect tab is covered.

It's been done. I still have an early-generation Memorex USB stick from 2002 that has a physical write-protect switch on it (basically a little recessed switch on the back, much like the one on SD cards). When the switch is set to read-only, it works consistently across all operating systems, so it's definitely a hardware lock, not software. The stick is of such durable construction and so reliable

Running a persistent Linux OS off of it, the hardware write-protect switch is nice, as Windows tries to helpfully format anything that's not FAT/NTFS. Of course, I've gotten kernel panics more than once booting off it when I forget to switch it back, as the kernel fails to remount the root filesystem R/W...


A floppy drive is easy - a floppy drive is just some motors in a cage - the floppy controller resides on the motherboard and tells those motors how to operate. The write protect switch can easily disable the floppy drive's write amplifier.

Something like a hard drive is hard - you can't disable the read/write line to the (PATA) drive, because you have to write to the registers in order for it to work. It's why forensic labs have drive write blockers - they pass through everything except the write commands - these things require intelligence in order to perform their tasks.

Ditto USB drives - you can't just disable writing to the NAND flash chips themselves, because you have to write to them in order to read from them (as well as do things like identify the capacity and such), so the controller has to have the intelligence to ignore write commands from the USB host (and even then, some drives still do wear levelling and garbage collection on the raw media - so you need lots of firmware hooks to disable that, too).
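The "intelligence" the parent describes - pass everything through except writes - can be sketched in a few lines. This is a hedged toy model, not a real driver or firmware: the command names loosely echo SCSI, and `FakeDevice` is invented for illustration.

```python
# Hedged sketch: how a forensic write blocker (or a smart write-protect
# switch) might filter commands. Reads and identification pass through
# to the media; writes get a "write protected" error status instead.

READ_LIKE = {"READ(10)", "READ_CAPACITY(10)", "INQUIRY", "MODE_SENSE(6)"}
WRITE_LIKE = {"WRITE(10)", "WRITE(12)", "FORMAT_UNIT", "UNMAP"}

class FakeDevice:
    """Stand-in for the raw media; command names are illustrative."""
    def handle(self, cmd):
        return ("GOOD", f"handled {cmd}")

def blocker(cmd, device):
    # Refuse write commands before they ever reach the media.
    if cmd in WRITE_LIKE:
        return ("CHECK CONDITION", "DATA PROTECT")
    return device.handle(cmd)

dev = FakeDevice()
print(blocker("INQUIRY", dev))     # ('GOOD', 'handled INQUIRY')
print(blocker("WRITE(10)", dev))   # ('CHECK CONDITION', 'DATA PROTECT')
```

Note the point made above still holds: because this logic is code, not a physical break in the write path, anything that can change the code can defeat the protection.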

The problem is, there's no way to physically make it impossible to write. On some flash chips it was possible - you protected them by disabling the high-voltage programming power source; without that voltage, programming would be problematic. But these days, the charge circuits to do that are built into the silicon so the manufacturers don't have to spend the extra dollar on external power supply circuits and PCB routing, because the whole point of writable nonvolatile memory is being able to write to it.

Making a write-protect switch these days is difficult, and it often requires extra circuitry with the intelligence to block write commands specifically, rather than all writes at the bus level (which would disable normal read operations as well).

Then how do SD cards handle the write-protect switch they have? And by the way, all my SD-to-USB adapters handle the write-protect switch just fine (so there's your protected media). It's obviously not impossible, and it has been done before; I even have an old 128MB PNY stick with a WP switch built right in.

Apparently, it's handled in software or firmware on the host's side. There's feedback on the forums from people who've hacked it hardware-style (short it, cover it)... I'm too lazy to keep looking for a software hack.

Good question, though. The answer is: it's not very trustworthy, as the host has to politely refrain from writing, instead of the device itself becoming physically un-writable.

I would have found that hard to believe before having seen it in action myself.

My camera uses an SD card, of course, but it can use that open-source camera software too. But to use it, you have to write it to a new card and then turn on the write-protect switch, or the camera won't boot it. Once the new software is booted, it can save pictures to the card. Good proof that the write protect on the SD card is more of a "suggestion" than a "switch".

And even the old floppy disks relied solely on the good behavior of the host system not to write to the disk.

Sure, there was the notch that you either filled in or opened up, and there might have been circuitry inside the drive to detect that and actually prevent writes. But an attacker could have easily covered the lens (in 3.5" drives) or rewired the circuits on the drive (on 5.25" drives). Now, those two attacks require the complicity of the user - but now it might just be a JUMP instruction in the d

A floppy drive is easy - a floppy drive is just some motors in a cage - the floppy controller resides on the motherboard and tells those motors how to operate

My guess is that you've never actually SEEN a floppy drive.

Even the most hardware efficient floppy drive of all time, the Disk ][ drive electronics designed by Steve Wozniak for use with the Apple ][, used something like 8 TTL and analog chips on the floppy drive itself, plus some transistors, resistors, capacitors, an inductor, and a few other components. This is IN the drive itself. This connected via a 20-pin (IIRC) ribbon cable to a peripheral card in the computer with 5 more chips on it, including a

I always try to visualize it from both sides - as many times as a floppy was made secure that way, it could just as easily be made insecure - those AOL floppies could be made read-write by literally cutting a hole where the disk was sealed read-only.

They probably couldn't find a feasible way of including a physical switch ON the USB side of the device, though that would be pretty sweet.

And then there's the convenience versus security aspect of it - I can remember plenty of stories of aspiring astrono

Because then security leaks can't be fixed? I suggest at least some switch to update the software. On the other hand, that could be achieved with any USB stick with a write-protect switch.

If software can turn off "write protect" then you don't have anything. Period. Because anything legit software can do, malware can do. If it can do an update of the ROM image, then malware can as well (and there was a virus that overwrote or attempted to overwrite the BIOS).

That reminds me of a bit in one of the control registers [msdn.com] on the original IBM PC/AT motherboard. Its function: mask the non-maskable interrupt. That's kinda like dreaming the impossible dream, isn't it?

There is another option. Have the ROM only run signed code, or rather privileged signed code. That way a virus can't replace critical parts of the system (rootkit). If you do it right you could even allow signed code to update the ROM itself, at the risk of being screwed if your signing key is leaked.
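The signed-code idea above can be modeled in a few lines. This is a deliberately simplified sketch: a hash allow-list baked into the "ROM" stands in for the public-key signature checks that real secure-boot schemes (like the XBOX's) actually use, and every name and image string here is invented.

```python
import hashlib

# Toy model of "only run signed code": the ROM refuses to hand control
# to any image whose digest isn't on its baked-in trust list. Real
# systems verify an asymmetric signature instead of a fixed hash list.

TRUSTED = set()

def trust(image: bytes) -> None:
    """Done by the vendor at build time: record the image's digest."""
    TRUSTED.add(hashlib.sha256(image).hexdigest())

def rom_boot(image: bytes) -> str:
    """Boot-time check: unknown code is rejected outright."""
    if hashlib.sha256(image).hexdigest() not in TRUSTED:
        raise PermissionError("unsigned code, refusing to boot")
    return "booted trusted image"

trust(b"vendor kernel v1")
print(rom_boot(b"vendor kernel v1"))   # booted trusted image
try:
    rom_boot(b"rootkit payload")
except PermissionError as e:
    print(e)                           # unsigned code, refusing to boot
```

As the comment notes, the whole scheme stands or falls with the vendor's key: leak it (or, in this toy version, let anything append to the trust list) and a rootkit can get itself "signed".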

Not all that practical on a general purpose PC because of the need to sign every OS but games consoles have used this system since the original XBOX, or maybe even on the PS2 (can't remember now). Actually the

Malicious software can still be malicious while in memory - sending spam, joining a botnet, etc. A running exploit of a read-only system is just as compromised as a running writable one - until you turn it off, of course. You would never be able to patch it unless you patch the ROM or apply patches in memory.

A kernel launched from write-protected, hence read-only, memory is going to be the same every time. Subsequent loads can infect a kernel that sits in writable memory, where malware can do its work. ROMs just aren't changeable, unless they're of a genre that permits it, like electrically-erasable programmable read-only memory (EEPROM), which takes a specific electrical charge to allow changes (or, in the case of UV-erasable EPROMs, specific frequencies of light).

My problem with this kit is that we would probably prosecute someone that makes mal

I'm implying only that when initially read from ROM, it's as clean as it was written. Certainly any kernel can be subsequently infected, given current techniques. I know of no kernel that can't be rooted, given various techniques, and possibly a soldering iron+.

The point is that the ROM doesn't need to be infected. The system has to load into RAM to actually run, and if you can't patch the OS (easily or at all) you can't fix things like remotely exploitable buffer overruns.

Then you just end up with malware that network-boots: as soon as you fire up your pristine kernel and connect it to the network, one of the other infected machines on the network re-infects it and the malware is free to do whatever it wants in user-space (send spam, data-mine, participate in a

Yes, it's fairly simple. Say the ROM has a remote vulnerability and the malware is complex enough to recognize a read-only primary partition. Say it injects an alias for shutdown into the environment when it detects a read-only root partition. When a shutdown command is issued, the alias also sends a packet to a C&C server saying: hey, you need to exploit me again. The C&C can then try to reinfect directly, or command another computer on the LAN to do so when it detects the computer has booted up.

What you describe is an "autostart" or re-infection at startup. If a path can be found to infect once, it can often be utilized again. Once an infection "sticks" in this way, I've seen malware do random port pings on subnets it finds.

Only for major major updates, and it wasn't a pain in the ass. You unplugged the chip and stuck the new one in. Back then it was pretty common for users to hack their Amigas anyway, so it wasn't that big of a deal to open her up and swap it in. The pain in the ass was expanding the chip memory by soldering lines to a new socket. I was 12 when I had to do this for my Amiga 500. Worked fine.

Other systems at the time were updated by sticking a floppy in the drive and either booting directly from the new disk, or copying files to your hard drive (if you had one). Some users were comfortable replacing the chip themselves but many ended up having to go pay a computer shop to do the upgrade for them.

Amigas had some features that were totally ahead of their time, but IMO this is more of a design flaw than a feature to be reincarnated in new systems. Commodore apparently recognized this as well,

1- above all, there was a lot less of it. Win7 is rumored to be about 50 million lines of code. I can't find the C64's ROM size, but it's at least 2 orders of magnitude less.
2- there were no security issues requiring frequent updates. The C64 was not connected to the internet, and the basic OS was in ROM, so any security holes remained un-exploited.
3- nobody cared about bugs, especially since the OS did so little anyway. I never had the money for a C64, but my ZX Spectrum had plenty of bugs.
4- I rem

Kickstart was more of a BIOS-equivalent than an OS. You couldn't do anything with Kickstart by itself, kickstart booted the actual OS (Workbench). Some RiscOS machines OTOH did boot a reasonably advanced GUI OS from ROM, in fact if I'm not mistaken there are some such still in production.

I had a couple of RiscOS fanatics for friends (I was in the Commodore camp), and AFAIK they had a 2MB ROM to boot from, which included memory-protected processes ('modules'), a configurable desktop environment, a taskbar/taskmanager hybrid, as well as a BASIC editor/assembler and some other tools.
Also, grandparent must be trolling, since those OSes were so incredibly small and uncomplicated that it should in fact be possible to write them with zero bugs whatsoever.

Kickstart as a "BIOS-equivalent"? I don't know of any BIOS that supports windowing, where you have terminal windows and common commands available to you, including the ability to launch new terminal windows. On the other hand, I don't remember configuring drives or boot order from Kickstart. On an Amiga, Kickstart may not have been a full system, but it was not like a BIOS either.

Again, I think you're confused. You boot up an Amiga without a disk or hard drive in, and it shows you a disk icon and waits for something to boot from. What you're describing is Workbench (the terminal windows are CLI windows in Amiga-speak) and you need a disk to boot it. You can make a boot disk to boot to a CLI without booting a full Workbench, but you don't boot to a CLI from Kickstart alone. At least, not with Kickstart 2 or 3 machines - I had an A600 and an A1200, and I'm pretty sure Kickstart 1.3 in

You are seriously confused, and don't know how an Amiga worked. The Workbench was a full-blown GUI, not a mostly-there GUI. Having a shell window without a desktop is *not* Workbench. The A1000 (and later the A3000) differed in that they had some of the information on a Kickstart floppy rather than in ROM, which *did* require a floppy to boot (I think the A3000 was able to read its Kickstart from a hard drive, but to be honest I don't remember).

An OS chip would be significantly better than the terrible mess we have now. This way it creates a logical separation of OS and user space, too. It would force OS developers to rethink what an executable can do with respect to the OS.

Not to mention that it is much easier to do now due to SSDs being seriously cheap to produce in the sizes that are required. (even for the awfully stupid sizes of Vis7a)

I already put my OS on a separate partition as it is. The whole Program Files directory is a symlink

If that's what you want to do, it's not that hard to do. SATA to SD Adapter [soarland.com] Just set the card as read-only and then only change it to read-write when you need to do an upgrade. Or, since it's an SD card, you may as well just image the firmware to the card from a different computer.

As someone who tearfully sold off the last of my Amiga hardware (an A4000 with a BPPC 233 604 board, and my Amiga 1200), the entire AmigaOS really wasn't in ROM - it was just the stuff to bootstrap the OS, plus libraries to handle mouse/keyboard I/O, dialogue boxes, and windows. 3.1 had workbench.library in ROM too, but I'm really not sure why.

The vast majority of the OS was still on disk;).

In other words: the ENTIRE OS wasn't restored every time you switched the thing on, which is what the parent wants.

A more interesting question would be why systems are still so shitty at even basic self-verification. A Linux system might verify a package's signature on install, but after that, there is absolutely no oversight of what happens to that package. On a regular dist-upgrade it can't even properly tell apart which config files have been touched by the user and which have been automatically generated.

This is not even an especially hard problem to solve, instead of dumping everything into a single directory tree,


And how do you propose that the "pristine" packages below it are updated, without giving malware the same privileges or the ability to update those packages with infected versions?

Packages and their updates have a proper signature from your distributor; malware doesn't. The point here isn't so much to create the one true final solution to computer security, but to have some robust tracking of the origin of a package and the files it contains. On top of that you could then build a whitelist, web of trust, or whatever to improve things even further. As of right now there really isn't much of a built-in way of tracking what an application does to your system or how it was modified.
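The kind of after-install tracking being asked for can be sketched very simply. This is a minimal toy in the spirit of tools like `rpm -V` or debsums: hash each installed file into a manifest, then later flag anything that changed. The file and its contents here are made up for the demo.

```python
import hashlib
import os
import tempfile

# Record a digest of every installed file, then verify them later.
# A real system would also sign and protect the manifest itself.

def digest(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def record_manifest(paths):
    """Taken at install time, while the files are known-good."""
    return {p: digest(p) for p in paths}

def verify(manifest):
    """Return the list of files whose contents no longer match."""
    return [p for p in manifest if digest(p) != manifest[p]]

# Demo with a throwaway file standing in for an installed binary.
workdir = tempfile.mkdtemp()
binpath = os.path.join(workdir, "somebinary")
with open(binpath, "wb") as f:
    f.write(b"original contents")

manifest = record_manifest([binpath])
print(verify(manifest))         # [] - nothing modified yet

with open(binpath, "wb") as f:  # simulate tampering
    f.write(b"trojaned contents")
print(verify(manifest))         # [binpath] - flagged
```

The obvious gap, as the sibling comment points out, is the manifest itself: unless it lives somewhere malware can't rewrite (read-only media, or signed by the distributor), an infection can simply re-hash its own trojaned files.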

Sadly, the end result is that there isn't any way to have the openness of a PC without the diligence needed to maintain it properly.

Because then all they have to do is figure out a buffer overflow for the default browser, and you can't patch it, so you're boned? As a PC repairman my question would be this... why bother? Do you have ANY idea how many unpatched XP boxes are out there? Boxes with NO AV, or the same trialware Norton crap they came with in '05, loaded up with P2P crap or running "Razr1911 Pro SP2 Corp" with WU turned off to keep from getting WGA'd? If the number was less than 60 million, frankly, I'd be amazed.

So I don't see why they are bothering with this now when they have so much low hanging fruit left, unless they are planning on using it for a spear phishing attack. The time to be releasing something like this would be about 6 months before XP EOL, when the amount of unpatched "Razr1911 Windows 7 all versions pre-activated" will be much higher, although even then most likely all the updates will be turned off (already seeing that BTW, as MSFT figured out how to kill the Razr1911 OEM hack on the RTM version so pirates are just killing WU like they did with XP) so again hacking will be easy.

As a guy that cleans them for a living I can tell you infecting a Windows box simply isn't that hard, not because MSFT built a bad OS (I'd argue that properly patched an XP or 7 box is actually pretty solid) but because there are so many pirated versions, boxes controlled by people that will happily click on any email attachment, or download "Hot_Lesbos.avi.exe" and run it without a second thought.

Hell Limewire has been dead for a couple of years yet I still see new boxes infected with malware calling itself "the new Limewire" because simply ripping off the old Limewire icons is enough to get the clueless to happily turn off any security that attempts to stop them installing it so they can snatch the latest pop crap. Social engineering with literally millions of clueless users makes it butt simple to infect masses of boxes with just a little carrot at the end of a stick. This seems like a hell of a lot more work than required unless they have some corporate target in mind.

You're missing the part of the discussion where a disk or USB stick with true physical write protection will mitigate the problem considerably. I don't really care what the 'clueless' do. If they want to hose their systems, that's just more business for you and me. I just want something to protect myself. Word of mouth will catch on in due time... For now, I make images of fresh installs to save myself and clients a great deal of time. What used to take two hours is fixed in less than 15 minutes. Booting into

Social engineering with literally millions of clueless users makes it butt simple to infect masses of boxes with just a little carrot at the end of a stick.

And I'm happy with that. It's like the story of the two campers trying to outrun the bear. One says it's hopeless: bears are too fast. The other says, "I don't have to outrun the bear. I just have to outrun you." As long as there are millions of clueless users out there as low-hanging fruit, those of us with more secure (not perfect, just better) systems and a clue about not surfing for pr0n as root will dodge the bear.

And if they start coming after Linux systems, I'll just switch to something nobody uses so

I also happen to be a PC technician, and I find it tiresome to hear people tirade about how bad Windows is, or how "clueless" users are. Software vulnerabilities are a fact of life, and it's unrealistic to expect average users to tell a fake warning from a real one when they can look pretty much identical.

Here's a car analogy. If I paint a phony detour sign that looks exactly like a real detour sign, stick it up in the middle of a road, and traffic starts diverting down a street of my choice, does that make

Sorry, but you're full of shit, AC, and I have NOTHING against old Razr1911; as a matter of fact I often use his "XP Mini Pro" with my own keys for low-RAM boxes. Hell of a lot better than WinFLP.

But if anyone doesn't believe me, here is a little test: download "Windows 7 X86-X64 all versions".ISO. That is the RTM version, BTW. Now let it patch to SP1... I'll wait... uh oh! Did you just get WGA'd? Yes you did. Of course you can kill it in about 25 seconds with "WGAKiller All Versions.exe" but that doesn't chang

If your motherboard has a TPM chip, you could set up a trusted boot sequence, ensuring that the OS is unmodified. You can then make the OS execute only signed executables, making any modifications to installed software impossible. Malware would also be prevented from running.

Uncle (below) answered it adequately - that the OS would reboot with a 'pristine' state - including the same flaws it had before. While this would frustrate some forms of trojan or malware, it certainly wouldn't even begin to stop it all.

You can do something similar with virtual machines, but the pristine VM could get corrupted too... it becomes a chicken-and-egg question if the user isn't all that computer-savvy.

A new version of well-known Alureon is out which has odd things to make it hard to analyze. It's odd, and is not normal, and makes it hard to analyze. It's well known and is a rootkit. The new version is odd and makes it hard to analyze.

We got that after the first sentence, how about actually providing some fscking detail.

Something is happening that is new, but we can't describe how or why it is new. We're like Roy Scheider in Jaws: "You're going to need a bigger boat." And you're like Robert Shaw: you just get to work trying to catch the thing. Even though it is big, it's bad, it's silent, it runs deep, you don't have the tools to properly track, capture, kill or otherwise defeat the thing, and you will be dead in 15 minutes at the end of the movie anyway. So just run around and panic. Because rootkits are scary, and strang

Something is happening that is new, but we can't describe how or why it is new."

Reminds me of "Winter 2008's smash hit" (literally). Conficker.B worm [wikipedia.org] featured a mysterious payload hashing which turned out to be the first known "production" use of MD6 [wikipedia.org]

I think this is what I scraped off my niece's computer a few weeks ago. She's one of those "I'll click on anything!" types. I did notice a lot of 'free' games, Limewire (thought that was odd because I thought it was dead), and about 9 'virus scanners/protectors', along with the one that tells you that you've got a virus and wants a credit card to enable the 'full version' that will remove them...

It had disabled Microsoft Security Essentials, all windows updates, and when you tried to run a browser instance

You'll be wanting this then: http://www.gmer.net/ [gmer.net]
Anyone who's removing spyware on a daily basis should be using this as well as MBAM etc. It's not perfect (what is?) but I've found a few rootkits with it.

Summary says: "The newest version of the malware exhibits some behavior that researchers haven't seen before"

The article says: "In 1999, a new virus, Win32/Crypto, was discovered... Today, in 2011, variants of Win32/Alureon are bringing this old-school technique back to life... Another interesting tidbit is that an initial version of this obfuscator first arrived in our lab in the first half of 2009."

That's kinda stretching the definition of "haven't seen before", which may be true in a technical sense (because they haven't seen THIS EXACT MALWARE before, but they've certainly seen lots like it).

Yes, because you can make it harder to detect the running patterns. My understanding of the article is basically that it encrypts its own execution path so that the individual sections of code can't be followed until they run. They also avoid actually storing the key in the executable, making it difficult to detect the running code, as it will not match patterns as easily. It's an old technique being applied to a newer system, but it is interesting since it is a step up in the complexity of an already complex

"We're closely monitoring Alureon to ensure that our users are always protected. In fact, Alureon has been part of the Microsoft Malicious Software Removal Tool (MSRT) since April 2007."

I am putting my full faith and hope in to the Microsoft security team to eliminate it with their latest Malicious Software Removal tool.
I have given up on being paranoid about viruses, and I am much happier now!

So, the malware has executable payload chunks that are encrypted and spread around (locations obscured) that must be decrypted prior to execution of said payload.

I get that this makes it a little bit harder to figure out what the program is about to do (hint: allow it to decode, breakpoint & step), but isn't the point to simply identify that the malware is present? Unless the malware is capable of executing encrypted code on the chip, the code that decrypts the remaining payload code must be stored in plain machine code.

The machine code that initiates the brute force will be identifiable, and a signature can be made. Nothing to see here folks. The shitty encryption system doesn't even use asymmetric keys, and the very fact that it only takes 255 tries for it to brute-force one of its "chunks" is laughable. I mean -- I wrote better cipher systems when I was 12... Are they trying to avoid breaching US encryption export laws?!
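The "255 tries" scheme being mocked can be shown in miniature. This is a hedged illustration of the technique as described - NOT Alureon's actual algorithm: each chunk is XORed with a one-byte key that is never stored, only a checksum of the plaintext, so the loader brute-forces every possible key until the checksum matches.

```python
import zlib

# Toy version of "don't store the key": keep a CRC32 of the plaintext
# alongside the XOR-encrypted chunk, and recover the one-byte key by
# brute force at run time (at most 255 candidates).

def encrypt_chunk(data: bytes, key: int):
    """Returns (checksum, ciphertext); note the key is NOT stored."""
    return zlib.crc32(data), bytes(b ^ key for b in data)

def decrypt_chunk(checksum: int, blob: bytes) -> bytes:
    for key in range(1, 256):            # the "255 tries"
        plain = bytes(b ^ key for b in blob)
        if zlib.crc32(plain) == checksum:
            return plain
    raise ValueError("no key recovered")

chk, enc = encrypt_chunk(b"payload code chunk", 0x5A)
print(decrypt_chunk(chk, enc))
```

Which makes the parent's point nicely: the brute-force loop itself is ordinary plaintext machine code, and a one-byte keyspace is trivially searchable by the analyst too.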

Who cares how good it is at hiding its payload if the code that decodes the payload has a fingerprint...
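A signature on the decryptor stub is exactly how scanners handle this class of packer. A toy byte-signature scan, with a made-up pattern (these are not real Alureon signature bytes):

```python
# The payload may be opaque, but the plaintext decryptor stub still has
# a stable byte pattern a scanner can match on. Signature is invented.

SIGNATURES = {"example-decryptor-stub": bytes.fromhex("deadbeef31c0")}

def scan(blob: bytes):
    """Return the names of all known signatures found in the blob."""
    return sorted(name for name, sig in SIGNATURES.items() if sig in blob)

sample = b"\x90\x90" + bytes.fromhex("deadbeef31c0") + b"\x00encrypted..."
print(scan(sample))      # ['example-decryptor-stub']
print(scan(b"clean"))    # []
```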

P.S. What really scares the shit out of me is new processor tech that enables public key crypto at the machine instruction level. Not only will the "good" guys use it to "protect" their code from their user's prying eyes, the malware writers will use this to actually design code that has no fingerprints. Each copy will be indistinguishable from pseudo random noise -- So much for "signatures" at that point.

P.P.S. Once you know malware has executed on the system, it's time for a full wipe, BIOS re-flash, and OS re-install -- There is no "removing" malware.