I may be mistaken, but I thought Blue Pill was similar to a VM, but was actually a hypervisor exploit. It sounds to me like having dedicated rootkit support built into the chip via the hypervisor would be different from running an OS image inside a software-based virtual machine.

Virtual Machine Monitor and Hypervisor are synonyms. Hypervisor, however, generally implies a monitor running on the bare hardware (a type 1 virtual machine), whereas VMM may also refer to a monitor running as a userspace process on a host kernel (a type 2 virtual machine). Thus it is correct to call Blue Pill either a hypervisor or a VMM.

Generally the term VMM is much more common with implementors of these systems; however, hypervisor is easier to say and sounds cooler, so it's common with users.

Virtual Machine Monitor and Hypervisor are NOT synonymous - they usually come in the same package, but this is not required.

An example Virtual Machine Monitor without a Hypervisor is VMware Workstation: a small VMM is loaded to run the guest OS, but it is not complete enough to run the system - it has no task switcher, no memory manager, etc. The host OS acts as the hypervisor here - it is the source of highly-privileged operations unavailable to the guest. Another no-hypervisor VMM is KVM: KVM just runs a virtual machine, but depends on the rest of Linux to run more-privileged operations (and Linux itself becomes the hypervisor).

An example Hypervisor without a Virtual Machine Monitor is the partitioning software on high-end IBM, Sun, etc. machines, which allows you to physically partition the processors of the system into several actual machines - partitioned machines with zero run-time interdependencies. Literally, a "hypervisor" is something which runs at a privilege level higher than the "supervisor" (the OS).

Hypervisors and virtual machine monitors have existed since the 1960s. Nobody confused the terms then. IBM started the confusion with a whitepaper [ibm.com] "inventing" the type 1 / type 2 taxonomy to distinguish between IBM mainframe architectures from the 1960s to the present (low-end = hypervisor only, high-end = combination hypervisor/VMM) and the VMware Workstation architecture (host OS loads the VMM; host OS acts as hypervisor). Note that VMware never claimed Workstation was a hypervisor! Certain communities (Wikipedia, the press) have accepted IBM's whitepaper as gospel truth, thus the proliferation of "type 1" and "type 2" terms the past several years. (The same community has chosen to ignore academic research from the 1960s and 1996-2005 which used VMM and Hypervisor correctly.)

With apologies to many individuals who are legitimately using correct terminology, some poorly-informed folks are propagating the "type 2 hypervisor" meme to attempt to equate the abilities of a hypervisor/VMM with a VMM. This is not correct: a combination hypervisor/VMM can ALWAYS achieve better performance than separating the hypervisor and VMM - at the cost of creating a more complex hypervisor (ESX requires custom drivers; Xen requires a customized dom0). The fault for this confusion really rests with Intel: their VT extensions (and AMD's SVM response) have made it so easy to create a VMM that some folks are creating a VMM, then marketing it as a hypervisor in a misguided attempt to compete with existing hypervisors (ESX, Xen) instead of competing with other VMMs (VMware Workstation/Fusion, KVM, Parallels Desktop).

To understand what a VMM is, read this ACM article [acmqueue.com] by Mendel Rosenblum. Academic research generally looks at VMMs (ways to run a virtual machine), not hypervisors (ways to run something with fewer privileges than the hypervisor). A rough gauge of the quality of academic work is whether the authors say Hypervisor when they mean Virtual Machine Monitor. Anyone who thinks the two are the same is ignorant of the past ten years of academic research - and anyone ignorant of ten years of research is doing very poor-quality work. (Alas, Wikipedia chose to use the IBM whitepaper for defining terms instead of many years of published, peer-reviewed papers. Great "neutrality", folks!)

Actually, when people are aware of how they're mistreated, and protest it loudly (enough for others to notice), I don't think they qualify as sheeple. Well, maybe except those who still buy Sony music.

I stopped buying music-cds altogether when one of them installed crap on my winbox.

I stopped buying music-cds altogether when one of them installed crap on my winbox.

How did they install crap on your winbox (are you running an SSH server)? I suspect that you installed that crap, or that your OS' virus-support feature installed it for you as a "convenience." Software, no matter how bad, sitting on a CD doesn't just execute itself. Something or somebody (and it wasn't Sony, because they had not yet compromised your machine) decided, "Let's loa

They exploited the Windows behavior named Autorun. They used it to install crap on my computer, when all I knew was word games. They placed it there with the intention of this happening. That is why I claimed they installed crap on my computer.
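For context, the delivery mechanism is a plain text file in the root of the disc: pre-Vista Windows reads it on insertion and, with Autorun enabled, launches whatever program it names - no user interaction required. A minimal fragment looks roughly like this (the filenames are illustrative, not the actual ones from any specific label's discs):

```ini
[autorun]
; Executed automatically when the disc is inserted, if Autorun is enabled
open=player\setup.exe
icon=disc.ico
```

Disabling Autorun (as the poster later did) neutralizes this vector entirely.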

Avoiding the music label (not sure it was Sony) is my way of saying thank you and fuck you too. And yes, I will be damned sure to accuse them of doing it, because to the average luser, that's what happens: completely without interaction or information and with every bit o

Unfortunately not; I'm an addict of certain Half-Life 2 mods, Total Annihilation, Supreme Commander and Toribash. After that, I did stop Windows from autorunning anything. I have other machines; I run 3 different operating systems (Windows XP Pro (gratis), Linux (Debian) and Minix). Tbh, I use Windows for more than gaming; I find it easier/faster to mash up a demo of a program I want to do in VS than in Eclipse.

Also, I refuse to be ignorant of what OS is on the majority of my potential customers' computers. I'm c

Where exactly are you going to buy a complete system with a fully documented processor, BIOS (or equivalent firmware) and all component parts, right the way down to the Verilog (or [insert chip design software here]) source files? Bearing in mind that even then you need to prove that the chip you hold is the same one described by the source files, and the only way you can guarantee that is if you control the chip fab which produces the chip. Failing that, I suppose you could skim the top off one and examine

Of course, this basic problem [bell-labs.com] was described quite eloquently by Ken Thompson. He went after the compiler, but the problem of proving that the binary you have matches the source you have is a tricky one no matter what.

There actually are some very clever solutions to try to catch cheating compilers like this, but none of them are trivial. It's a cat and mouse game, and there are actually proofs that winning either side completely is impossible.

Surely you bootstrap things by hand-assembling a simple nonoptimizing C compiler (nonoptimizing compilers aren't that hard, surely one could be written in assembly), then use that to compile your C compiler (thus getting a binary that matches your source), then use that to recompile itself (so you have a C compiler that can actually run at semi-reasonable speed, and whose binary matches its source assuming you verified its source wasn't doing anything nasty).

Now you've simply pushed the problem a level higher, into the assembler / linker. Yes, it helps, but there are other techniques as well.

The most elegant technique I've seen goes like this. Maintain trusted dead-simple non-optimizing compiler A (possibly on a different machine). You also have untrusted compiler B and its alleged source SB. Compile SB with B, resulting in B. Compile SB with A, resulting in B'. These binaries will be different, but should be functionally equivalent -- B' is probably

I'm still convinced that it's possible to make a VM that appears, to software running within, as real hardware.

The paper, however, takes a practical approach, examining how some industry-standard VMs operate, such as VMware and Virtual PC.

Those VMs take plenty of shortcuts to improve performance: they don't virtualize some instructions but rather remap them, or "shift rings" of execution, etc., as much as possible, so as to take advantage of the hardware while remaining sandboxed. They don't virtualize the clock either, so you could time the performance.

A rootkit isn't competing with other rootkits on performance; it competes on how undetectable it is. It's arguably a different problem. I think we have yet to witness how a full-blown VM made to be a rootkit will act, and whether it'll be detectable.

The problem is, that if the VM writer tries to take every possible method to make the execution time similar (e.g. make privileged instructions run as fast as non-privileged instructions), it has to slow the faster ones down. Suddenly, even your grandpa will notice something is wrong. The most insane method would be a VM based on a full-blown, cycle-accurate simulator, but that will be horribly slow.

Instead, what I think is that it's not *impossible* to detect, but it's *difficult* to detect, because the VM detector is going to need a very, very long checklist to determine whether it is running on a VM or not. To be sure, it must check every possible privileged instruction's timing, check the system memory's contents using various workarounds (such as DMA), etc.

The problem is, that if the VM writer tries to take every possible method to make the execution time similar (e.g. make privileged instructions run as fast as non-privileged instructions), it has to slow the faster ones down. Suddenly, even your grandpa will notice something is wrong. The most insane method would be a VM based on a full-blown, cycle-accurate simulator, but that will be horribly slow.

Two things:

1. You assume the clock isn't manipulated, hence fast commands should be slowed down to match virtualized instructions. Instead, the direct instructions may be left running at full speed, and the virtualized ones used to skew the clock: subtly enough to be undetectable to the naked eye, and matching the hardware performance well enough for a detector running within.

2. We're about to get plenty of cores on desktop machines, where most tasks are serial. If a VM used the extra cores to simulate a single core at around 50-60% of its native speed, it may prove undetectable to grandpa, who just browses the net and uses Excel.

Two things again:

1. Do you really wish to manipulate the clock for every non-privileged instruction, which will result in horrible VM performance?

2. Yes, your grandpa won't notice a 50% slowdown, but your anti-virus software will easily notice. Either your grandpa doesn't notice and your anti-virus does, or your anti-virus doesn't and your grandpa does (assuming the anti-virus software does an extensive amount of checking).

What I was trying to say was that it takes a painful amount of performance over

As soon as your grandpa connects to the internet, the AV can just poll any time server on the net, including unofficial ones set up by the AV vendor, using different ports and possibly even a different protocol. Indeed, the timing information could even be implicitly included in the communication with the AV update server. Since there's an external server involved, the rootkit cannot control all aspects of it unless the update server itself is compromised (if the update server uses public key cryptography

Remember: the hardware configuration the software sees is what the rootkit opts to report.

This is where this all falls apart. It's pretty trivial to notice if the hardware you're running on has changed, and as mentioned in The Fine Article/Paper, it's bloody impossible to emulate every possible hardware combination out there, or even any of the common ones. I'd love to see a virtual machine that can fool my nVidia driver into thinking it has an 8800 and can run fast enough for even basic desktop usage, let alone any sort of multimedia or gaming. It's just not going to happen.

This is where this all falls apart. It's pretty trivial to notice if the hardware you're running on has changed

Trivial for whom? How often do you scan and compare your list of hardware devices in normal operation? How often does your mom? Remember: trojan makers aren't interested in hacking a hardened hacker's computer. They're interested in the mythical mom.

If my antivirus will detect hardware changes then it'll whine on actual hardware changes too. We arrive at the fact that Joanna discovered: it'll be god da

It doesn't have to, because it doesn't have to be a full-fledged multi-VM VMM environment, since the whole point of VM-based rootkits is to move the malevolent code outside the realm of detectability by the OS. This is why the paper sucks. It makes all these arguments for why existing VMware, Virtual PC or Parallels solutions are detectable, but that wasn't the point of Blue Pill. All you need is a thin hypervisor layer that provides nearly transparent access to the hardware. As far as the OS is concerned

Your statement "All you need is a thin hypervisor layer that provides nearly transparent access to the hardware." is very funny. It's like "All you need [to cure cancer] is a cure for cancer." Your hypervisor has to protect itself by consuming memory. It has to use CPU resources in order to do this. It has to make sure nothing else can rise to its privilege level while maintaining complete compatibility with everything on the system. Which means that it has to be able to emulate the ability to run a second

That's simply not how it works. This isn't DOS, and there isn't a simple BIOS call the OS uses to retrieve the current time. Start here: the x86 has a 64-bit timebase register, the TSC, which reports cycle-count time in about 150 cycles, directly from the hardware. Joanna tried to virtualize the TSC and found that she couldn't do it reliably under AMD SVM. She had to resort to dynamic code translation, VMware-style, to detect and modify code that probed the TSC. The problem with that approach is left as an e

Name one anti-virus maker with a VMM detector. Every AV software I have used simply ignores virtual machines, hidden or otherwise. Right: a whitepaper about how easy it is to detect virtual machine malware is not grandpa's AV detector popping up a warning about a virtual machine running on his system.

This type of exploit has been known about for many months; virus writers take a lot less time than that to come up with working attack vectors. So it would not surprise me in the least to find that hackers are using l

You forget there's more than one clock. There's the machine's clock, which you can easily manipulate, but there's also the soundcard clock -- which you can't manipulate without your stuff sounding strange -- and then there's the NTP clock, which you can't manipulate at all. Any significant difference between all these clocks, and a VM is detected. So basically, slowing down everything equally or skewing the local clock isn't an option.

Actually, I think you are wrong. It is known to be very difficult to simulate the timing behaviour of a complex CPU as found in modern desktop PCs, because the timing behaviour depends on so many factors. This has previously been a problem for real-time systems, because programs on such complex CPUs have very poor timing behaviour in the worst case. But it is also a problem if you are trying to hide a virtual machine, because the complexity of operation creates a sort of "timing fingerprint" that is unique to

The threat model facing rootkits is not end-user computer savvy. It's conventional anti-malware software. The question isn't whether the person sitting at the computer is smart enough to notice a 60% slowdown. It's whether the impact the rootkit has on the system is reliably measurable, either directly or through a side-channel, in a way that can be harnessed by Norton Antivirus. If it is, you lose; your "undetectable rootkit" is now literally a bullet point on the packag

This is really a straw man. The point is undetectability from within the guest OS (i.e. antivirus or whatever security software is running should not be able to detect it). There are plenty of attacks that you can use to detect the infection from the outside.

The current commercial VMs don't try to be undetectable. But a VM created with the purpose of being undetectable might be a different matter.

It might be possible to create a VM that only virtualizes a specific part of a PC: only hide some memory and disk space, and pass all other parts through to the actual hardware. I don't know if it is feasible.

You clearly didn't read the paper, because it doesn't simply describe how "industry standard VMs operate". Garfinkel and Ferrie are talking about fundamental x86 architectural issues that make intercepting hardware accesses and emulating them in software perceptible to code running on the same machine. The Blue Pill VMM rootkit doesn't leave important instructions "unvirtualized", but it has to operate within the x86 memory hierarchy, and so remains detectable.

I'm still convinced that it's possible to make a VM that appears, to software running within, as real hardware.

You mean like the Bochs Pentium Emulator [sf.net]? Because - that is pretty much what they do. They emulate the entire computer, processor included. There was one branch that used a Linux module to use real hardware to speed things up (x86 only), but it otherwise fully emulates the computer including all instructions of the emulated processor and the system timer.

Unfortunately, this paper completely misses the point. This paper is not so much about detecting a VM-based rootkit as it is about detecting VMs in general. The authors argue that if you detect a VM when you aren't expecting to, you've found a rootkit. Joanna's argument is that in a few years, everything is going to be using VM technology and you won't be able to tell a "good" VM from a "bad" one.

See virtualization-detection-vs-blue-pill [blogspot.com] and her presentation on the subject here [bluepillproject.org]. No one ever said that detecting a virtual machine is impossible. They are saying discriminating between malicious and non-malicious VMs is impossible.

Joanna's argument is that in a few years, everything is going to be using VM technology and you won't be able to tell a "good" VM from a "bad" one.

I fail to see what purpose the average user has for VM technology. Sure, it's great for server systems, and as a developer I find it extremely handy, but if all you do with your computer is read e-mail, browse the web and run MS Word, why would you want a VM?

Depending on how broadly you wish to interpret what a VM is, you could consider stuff like Apple's Rosetta a virtual machine. It's pretty regular that people around here call for MS to use virtualization to provide an avenue for them to ditch a lot of the backwards compatibility cruft that's causing many of their issues.

These things aren't exactly like running a whole OS in virtualization, but some of the same technology is used, and I could see possibilities for using hardware VT support.

What's an emulator and what's a true VM is somewhat blurry. For instance, if my understanding is right, Virtual PC emulates instructions that are executed in ring 0, but most people would still call it a virtual machine monitor. There are other things, like the Java Virtual Machine, that are also in some sense an "emulator" -- but it's emulating a machine that runs Java bytecode, so it counts as a virtual machine. Similarly for Rosetta.

If my understanding is right, Rosetta also uses the same dynamic translation

I fail to see what purpose the average user has for VM technology. Sure, it's great for server systems, and as a developer I find it extremely handy, but if all you do with your computer is read e-mail, browse the web and run MS Word, why would you want a VM?

Lots of reasons: fault isolation (e.g. jail() on steroids); compatibility isolation (e.g. while most of my system runs the newest version, I keep my old apps running in a VM with an older kernel); hardware interoperability isolation (e.g. this bit of ha

Unless you believe that the computing industry is going to suddenly embrace virtualisation as an integrated part of the everyday computing experience

And there, AC, is where you hit the nail on the head. The computing industry is doing exactly that. Not suddenly, but as a growing process of introducing virtualisation into the desktop. Grandma will never know that her desktop OS uses VM techniques, any more than she knows that it uses virtual memory, or kernel/user modes, or memory-mapped I/O.

I am currently wondering if I should use VMs to create environments for all those remote environments I have to work with. Each one asks for a different toolset, and sometimes they interfere. So having a VM for each environment, with exactly the tools needed, would save me much hassle.

Microsoft's new research operating system "Singularity" http://research.microsoft.com/os/singularity/ [microsoft.com] runs every process in its own virtual machine. This way, if an attacker breaks your email client, it's MUCH more of a pain in the ass to get to the Word documents.

No it doesn't, at least not in the way we're talking about here. Processes are run under a modified .NET runtime, so they don't have direct access to hardware, but we're talking about virtualized hardware that looks like the real computer here, which is a substantially different affair.

if all you do with your computer is read e-mail, browse the web and run MS Word, why would you want a VM?

Because the software that "average users" run tends to be written very quickly instead of carefully. You explicitly mentioned MS Word! You just mentioned email and web browsing too, where the most popular applications have repeated histories of bugs that allow them to treat supposedly-harmless data as executable code. Hell yes, those should be sandboxed to contain destruction. Maybe VMs aren't the b

Be fair: the only researcher saying that "hypervisors can be detected, but rootkits can't" is Joanna. The rest of us, from what I can see, agree: you might not be able to detect Blue Pill by name, but you can detect unauthorized virtualization, even if you're already legitimately virtualized. Currently, the only source of unauthorized virtualization? Blue Pill.

Also, I think Joanna was really trying to hammer the point that this was an arms race (much like signature detection vs malware at the moment). For every detection technology, there is an evasive technology to get past it, and the same is true in reverse. Detecting the current ways vrootkits are implemented really doesn't get you much in the long run.

You have a point, but don't forget that I may choose to be running Red Pill (TM). Red Pill is MY virtualization software, run for MY reasons. As part of its startup, Red Pill does an extensive set of "metal checks" to make sure it's running on real metal, not on some Blue Pill. Lest some Blue Pill do "clock leveling" and make machine performance consistent, but at a lower level, Red Pill has had me input hardware configuration data, so it knows what the clock (and other aspects of the system) really ough

* Undetectability based on current technology and the fact that nothing about a given vector of attack has been defined or studied in depth yet. Claim subject to change once the phenomenon has been studied, quantified, and dissected in a rational, forensic manner.

Translation: You can't detect it because you aren't looking for it (yet).

Translation 2: This new attack can't be defeated because nobody's tried yet!

That's what so many of these "security researchers" and pretty much ALL of the tech-press forgets.

Like any other system security compromise, the amount of time these things remain "compromising" depends largely on how long it takes to define it.

Detecting the simulator requires knowledge about the simulator and the outside world. If you've always been on the inside, you wouldn't know where to look. The majority of software is not designed to know if it's living in a simulated machine (in fact, that's one of the principles of computer architecture), and maybe it's similarly true of humans.

Which was kind of where I was going. Quantum weirdness and the speed limit on the transmission of information both make me think of the way cellular automata function. I was listening to a podcast the other day (Escape Pod - scifi stories), and the story was about a guy who learns that his world is in a simulator, but there are bugs, especially found in an online game. You can make objects leave the game and appear in the "real world".

I keep wondering what sort of programming artifacts (bugs) could exist, how could we find them, and how could we exploit them...

I think that many of the bugs have been patched. For example many cultures remember a time when magic worked, enough people thinking of something with enough concentration could make it real. Some tweaks to the optimisation between the objective reality and our subjective selves sorted that out.
There may be some bugs though, how often are inventions discovered by the same peop

* Itanium runs x86 instructions through pure software emulation
* Transmeta transcodes source instructions into its native code
* New versions of Intel and AMD processors and motherboards most probably will not have the same instruction timings or emulate undocumented aspects of current hardware and software
* New hardware-based virtualization techniques may not change CPU performance much and can allow the guest OS direct access to selected hardware

The bottom line is that VM detectors can only reliably fingerprint hardwa

Yeah, therefore the point would be to establish a very detailed baseline for a specific system. That way, you can analyze the exact clock skew between the sound chip and the RTC, timings for specific instructions, etc. Then, it should be possible to detect whether you are suddenly in a VM jail. To detect the jail without ever having seen anything else, that's far harder...

Amazing how much money your department of civilian oppression can waste on unrelated research. Yes, that's right: if you RTFA, the last paragraph discloses their funding from DHS. Their subject is a noble cause, but what does it have to do with the terrorists DHS was supposed to find? Or did they broaden their scope to include Romanian hackers looking to make a buck? Another concern is that this study is presented by those companies that have a stake in spreading positive news about it. And tadaa: the news

Yes, _I_ did read it, and I doubt you did so yourself. It seems I caught a nice rightwing coward who has to satisfy his righteous feelings every time he encounters something that he thinks does not toe the party line. Anonymity + Audience => gibbering fucktard. There are other countries in the world, you know, where critical thinking is encouraged. Sadly you can't see the good points in that. But you would prefer armed citizens who can't think for themselves, wouldn't you? Now again: why does the DHS

Look. Virtualization is not a security technology. I've gotten a VMWare engineer to admit this publicly, on stage, with only mild needling. Virtualization reduces hardware to a protocol that must be parsed, or (as is increasingly common) it allows direct passthrough to devices on buses that have no conception of host vs. guest (see: USB).

There was actually some really cool work recently done by Jeff Forristal, who pointed out that since all VMs are on the same LAN, all the old LAN-based attacks work really well cross-VM. Oops.

Now, regarding Joanna's attack, she's completely right that everyone's going to virtualization -- it's just so much more manageable. The consumer market will eventually embrace this.

I think you mean "Pony Engine" security;
http://en.wikipedia.org/wiki/The_Little_Engine_That_Could [wikipedia.org]
Seriously though, I've read the paper and its conclusion is fundamentally flawed. The summary is equivalent to:
"All current Virtual Machines are detectable; therefore Virtual Machines will always be detectable".
That statement is quite plainly wrong.
Just as most encryption schemes are broken in theory long before an actual exploit can be constructed, so too a VM can be trivially demonstrated to work in

I think the fact that a detection mechanism can be found for each VM rootkit is very plausible. However, won't rootkits always find a way to circumvent the detection mechanisms? In that case, we'll probably end up in a new hacker-security war, with hackers tweaking VMs to bypass detection and security folks who keep finding new detection mechanisms.
While the article clearly indicates that finding detection mechanisms is much easier than finding ways to bypass or fool the detection mechanism, it doesn't

A properly-created virtual machine ought to be absolutely undetectable from within. The simple fact is that no commercial offering to date has tried to be undetectable.

If you lock a person in a windowless room where the only "access to the outside world" is a TV set where you control all the programmes, you essentially control everything they know about the outside world; and you then can make that person believe anything you want them to believe. You could even cause them to think night was day, if their only reference was the continuity announcer's time checks (and/or you could give them a special watch which displayed your manipulated version of the time). But if you accidentally or deliberately let, say, BBC1 get through unaltered, you aren't controlling everything they see; and by comparing the news on the real BBC1 with your altered news on the other stations, they could ascertain that something was amiss.

If your virtualised environment behaves absolutely "correctly" with respect to undocumented instructions and the like (i.e. they aren't trapped and made to do something specific to your virtualisation application), and all I/O channels are properly manipulated (to the point where even the scan line count on the graphics card is adjusted to account for the slowdown in the virtual environment), then it's undetectable from within. If, however, even one undocumented instruction does not behave exactly as the real processor, or even one I/O channel is left unmunged, then there is a potential way the virtual environment could be detected.

Of course, all that manipulation of stuff is bound to impose some kind of overhead, so a truly undetectable VM might end up being slow as hell..... but on the inside, you don't know it's slow, precisely because you've been fed misinformation about the time things are taking. And processors are getting faster. They used to think that chop-and-swap analogue TV encryption would never be trivially crackable in practice.....

If, however, even one undocumented instruction does not behave exactly as the real processor, or even one I/O channel is left unmunged, then there is a potential way the virtual environment could be detected.

Malware hosting doesn't have to be perfect and hide its presence in every possible way. It just has to hide in the ways that the market-leading malware detectors use. A malware author can just set up a test system and each time the detector finds a hit, track it down and emulate around it. As you sugges

You could even cause them to think night was day, if their only reference was the continuity announcer's time checks (and/or you could give them a special watch which displayed your manipulated version of the time)

Of course, by the form of your argument you have presented the weakness of your argument. All you need to test the "prisoner hypothesis" is an independent clock. Every processor, every VM, every rootkit is subject to timing tests.

No it's not. Remember, you can control how many clock cycles the program on the inside thinks have elapsed. So even if it does manage successfully to ask someone else the time (by some method that would slip past your "blue pencil"), it won't have any reason to doubt the answer that comes back.

Yes, but you can compare the local CPU clock with external clocks. If the CPU claims that the timing test you execute took 2 seconds, but 20 seconds have elapsed according to an external clock, then you know something is amiss.
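The comparison described here is easy to sketch. Below is a minimal Python illustration; the "lying" clock is a hypothetical stand-in for a hypervisor-manipulated counter, and a second local clock stands in for the external reference (a real check would use NTP or a wall clock):

```python
import time

def measure_skew(local_clock, external_clock, busy_seconds=0.1):
    """Busy-wait for some real time, then compare how long each clock
    claims the wait took. A ratio far from 1.0 means the local clock
    is being manipulated."""
    l0, e0 = local_clock(), external_clock()
    t_end = time.monotonic() + busy_seconds
    while time.monotonic() < t_end:
        pass  # burn real wall-clock time
    l1, e1 = local_clock(), external_clock()
    return (e1 - e0) / (l1 - l0)

# Honest case: both clocks agree, so the ratio is ~1.0.
honest = measure_skew(time.monotonic, time.monotonic)

# Hypothetical hypervisor-skewed clock that under-reports elapsed
# time by a factor of ten, as in the 2-seconds-vs-20-seconds scenario.
_start = time.monotonic()
def lying_clock():
    return _start + (time.monotonic() - _start) / 10.0

skewed = measure_skew(lying_clock, time.monotonic)  # ratio is ~10
```

Of course, as the surrounding posts point out, this only works if the virtualisation layer can't also intercept your path to the external clock.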

The external clock doesn't even have to be accessed directly. The testing app could run a test and ask the user if it seemed to take 2 seconds or 20 seconds. I don't think a CPU can skew a human's perception of time...

That's the point: you can persuade the program inside the virtual environment that 20 seconds really have elapsed. Because the only way it can find out what its own clock speed is, is to run some sort of timing loop which lasts for a known number of clock cycles, and then check that against some internal clock on the motherboard (using a known method). And the only way it can access that internal clock is via your virtualisation layer (so you can alter the information in transit). So when it goes off to read that clock, you can feed it whatever answer you like.

You can persuade the program, but you can't persuade the user. The program just needs to ask the user "did this test take 2 seconds or 20 seconds?" The user will know the difference between a 2s test and a 20s test. Of course, if you ask them to distinguish between 2 seconds and 4 seconds, they might have a harder time. And tests like that obviously don't help at all if you don't have a flesh-and-blood user to ask.

In my universe we have this thing called NTP that does exactly what you claim is impossible. We also have a computer science concept called "looping" that allows us to measure many small-duration events in order to analyze them statistically.
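The looping idea can be made concrete. Here's a hedged Python sketch: the skewed clock is a made-up stand-in for a manipulated counter, with per-sample jitter standing in for measurement noise. Any single sample is too noisy to prove anything, but the median over many small intervals recovers the skew:

```python
import random
import statistics
import time

def lying_clock():
    """Stand-in for a manipulated counter: runs at half speed, with
    enough jitter that one measurement alone proves nothing."""
    return time.perf_counter() * 0.5 + random.gauss(0.0, 1e-5)

def median_rate(suspect_clock, reference_clock, samples=200):
    """Time many small fixed workloads with both clocks and take the
    median of the per-sample rate ratios."""
    ratios = []
    for _ in range(samples):
        s0, r0 = suspect_clock(), reference_clock()
        for _ in range(20_000):  # small fixed workload
            pass
        s1, r1 = suspect_clock(), reference_clock()
        ratios.append((s1 - s0) / (r1 - r0))
    return statistics.median(ratios)

rate = median_rate(lying_clock, time.perf_counter)
# rate comes out near 0.5: the suspect clock runs at half speed
```

In the NTP scenario, `reference_clock` would be replaced by queries to a time server the hypervisor can't forge.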

Folks, this is the Halting Problem [wikipedia.org]. If you have a foolproof method of detecting that you’re running in a VM, you can build a special-purpose VM that watches for that method specifically to defeat it.

Similarly, you can’t ever rule out the possibility that you yourself are living in a Matrix-style (etc.) simulated world. You might be able to detect that you are under certain circumstances, but any sufficiently advanced simulation is indistinguishable from reality. No, really!

Oh — and all this applies equally to any supposedly “omnipotent” deities you might care to propose. After all, if “God” could trap “The Devil” (to pick the current favorite pair of arch-rival gods) in a simulated world such that The Devil thought that he (The Devil) was the all-powerful creator of life, the universe, and everything... then God has no way of knowing that The Devil hasn’t done the same to him. And if God doesn’t have any foolproof way of knowing whether or not The Devil has him trapped, and if he himself has no foolproof way of trapping The Devil, it hardly makes any kind of sense to describe God as “all-powerful,” now, does it?

Practical rebuttal: the halting problem involves a metalanguage, in a sense. The real program running at the top level, the one doing the analyzing, is the root. The program being analyzed is merely described, thus the root program is the "metaprogram".

They invented the term "metalanguage" to get around this exact fallacy.

Now, assuming that Turing's program can detect, with absolute certainty, whether or not a certain program would halt, it would, by necessity, need to be aware of its own decision on whether or not to terminate.

All is fine and dandy, but what if you already run your system in a VM? What if a rootkit injects itself as another virtualization layer (at either side of your good VM)? How do you detect this sort of thing?

Presumably with virtualization detection inside your good VM. Each layer of the onion needs to detect whether it is virtualized and whether or not it is OK with that.

But really, you only need the layer expecting to run on the hardware to be able to detect anyone virtualizing it instead.

Unless you have the hardware running a trusted VM, running an untrusted VM, running the applications that are fine with being virtualized by the trusted VM: a sort of VM-in-the-middle using another rootkit exploit on the trusted VM.

It looks like they're just talking about detecting whether you're virtualized or not. So perhaps some of these techniques could be used by user-hostile software publishers (i.e. you're not allowed to run our server in a VM without getting a special (i.e. more expensive) license, or you're not allowed to run our media player unless we know it is directly accessing the display hardware), but I don't see how this gives any rootkit-detection advantages.

VMWare is virtualization software, not emulation software. It runs pretty close to native speed, depending on what you run on it. Comparing it to bochs is just stupid, that's a full blown emulator. A VM still uses your processor natively to decode the majority of instructions, it just catches the privileged ones, that otherwise would make your OS go boom. (Simply put)

As you say, a real VM does execute instructions directly and either traps memory calls in hardware or traps all the system calls, or both; it's not emulation. Stacking one virtual machine inside another is quite thinkable, since even two steps down the machine is still executing native instructions rather than emulating them, so the speed loss is not multiplicative. For that reason I fully expect that in the not too distant future we will have virtual machines running inside virtual machines. That is, there will be
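The trap-and-catch mechanism being described can be modeled in miniature. This is only a toy, hypothetical sketch in Python (a real VMM traps via hardware protection rings, not language exceptions), but the control flow is the same: ordinary instructions run straight through, while privileged ones bounce to the monitor, which emulates them:

```python
class PrivilegedOp(Exception):
    """Raised when the 'guest' hits an instruction it may not run."""

def guest_add(state):   # ordinary instruction: runs directly
    state["acc"] += 1

def guest_halt(state):  # privileged instruction: traps to the monitor
    raise PrivilegedOp("hlt")

def run_guest(program, state):
    """Toy monitor: let ordinary ops run at 'native' speed, but catch
    the traps raised by privileged ones and emulate them instead."""
    for op in program:
        try:
            op(state)
        except PrivilegedOp as trap:
            # Emulate, rather than letting the guest really halt the CPU.
            state["trapped"].append(str(trap))

state = {"acc": 0, "trapped": []}
run_guest([guest_add, guest_add, guest_halt, guest_add], state)
# state["acc"] == 3, and the one hlt was intercepted by the monitor
```

The near-native speed claims hold exactly because the common path (the adds here) never leaves the guest; only the rare privileged path pays the trap cost.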
I had to port this major banking application to VMWare ESX (in a VM running Windows 2003). I have to agree with your "runs pretty close to native speed, depending on what you run on it" comment. My only beef is that, "depending on what you run on it" is extremely limited.

On a native machine, we achieved about 55-70 transactions per second; after that, the CPU of the machine was maxed out. This was a quad Xeon with about 16 gigs of RAM. The same exact machine, running an ESX host with one single VM, our Windows 2003 server, was able to achieve about 2-5 transactions per second before the host threw in the towel. Now I am sure ESX 3 will be faster; this wasn't ESX 3, it was 2.something.

What I noticed was that:

- VMWare has a lot of trouble with applications that do a lot of context switches (basically, object pools with significant usage). If the CPU has to swap from thread to thread, it kills VMWare.
- We did a few network tests with bizarre results, like VM network latency being 50% higher. This is a killer for any system remotely trying to reach a decent transactions-per-second rate. We had to de-virtualize our SQL server and SNA gateway; they weren't able to hold the load.
- For some odd reason, MOM, anti-virus software and SMS can choke a host without any problems. My hypothesis is that a missed file cache is brutal for VMWare, especially if other VMs are doing some I/O-intensive stuff.

I wouldn't recommend that anyone run a server with moderate to high load as a VM. However, VMWare is awesome for very low-load servers; we can pack 6-10 of these easily on the same dual dual-core Xeon, and could probably fit more.

Kinda sounds like Rain Man: "VMs... definitely faster than 10%. Definitely." I don't know man, I've been playing with VMware Server running on a really modest 2.4GHz P4 with 2GB of RAM and I've been pretty impressed with its speed. I'm sure there are some tasks that will make the machine take a real beating, making the lag more pronounced, but I typically have 3-4 guests (lately CentOS, WinXP and a pair of Win2003), with the XP guest substituting as my Windows desktop and a really basic Ubuntu install as the host. It ac