Yes, some N-Body tasks are multi-threaded, meaning they will use all of the threads that are available. In BOINC Manager, the application name is "Milkyway@Home N-Body Simulation 1.40 (mt)". If you want to restrict the number of threads they use, there is a post by Jacob Klein on using an app_config.xml to control the number of CPUs an N-Body task will use. It is what I use.
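For reference, a minimal sketch of such an app_config.xml (placed in the Milkyway@Home project directory) might look roughly like this. The app_name "milkyway_nbody" is an assumption on my part, so verify the actual name in your client_state.xml before using it:

```xml
<!-- Sketch only: limit the N-Body (mt) app to 2 threads.
     The app_name below is an assumption; check client_state.xml. -->
<app_config>
    <app_version>
        <app_name>milkyway_nbody</app_name>
        <plan_class>mt</plan_class>
        <avg_ncpus>2</avg_ncpus>
        <cmdline>--nthreads 2</cmdline>
    </app_version>
</app_config>
```

After saving the file, use Options > Read config files in BOINC Manager (or restart the client) for it to take effect.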

I have an N-Body 1.40 (mt) running on my laptop (id 563836), task id 758776399 (or 758776398). The laptop has two AMD cores. It's been running for 123 hours and has about 8 hours left (it's at 93.998%). So that's 5.5 days total. I thought it might be stuck at first, but it does make percent progress. The CPU estimate was grossly low, and is still low by an hour even now. Its due date is 6/15. I have another in the queue, not yet started, also due 6/15. It's not a sure thing that it can finish by the deadline.
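The "about 8 hours left" figure is consistent with a simple linear extrapolation from elapsed time and fraction done; a quick sketch using the numbers above:

```python
# Estimate time remaining by assuming progress is roughly linear:
# total = elapsed / fraction_done, remaining = total - elapsed.
def remaining_hours(elapsed_hours, fraction_done):
    """Linear extrapolation of time left from current progress."""
    total = elapsed_hours / fraction_done
    return total - elapsed_hours

# 123 hours elapsed at 93.998% done:
print(round(remaining_hours(123, 0.93998), 1))  # -> 7.9, close to the ~8 shown
```

This is only a sanity check, not how BOINC computes its estimate.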

I expect really poor credit from this, as the estimate was so dreadfully wrong. But we'll see.

I also have no idea why it takes so much longer than roughly twice the wall clock time of my 4-core desktops, which is what I'd expect.

I've heard in other forums of bad estimates caused by BOINC using a CPU benchmark for an essentially FPU-intensive app, or the other way around. Obviously, the best estimate would be one where, after a half hour or so, the current unit's own progress is used, and where previous units of the exact same type take precedence over BOINC's guess. I have no idea whether this is under Milky Way's control or BOINC's, however.

I've allowed a 4-core desktop to do some units. Usually, Milky Way prevents my GPU from running, but Collatz is currently down, and some units have finished. For example, 2,706.26 CPU seconds finishes in 739.62 wall-clock seconds, which is about 12 minutes. But I get 27 credit, which is about a quarter of what I get for units with similar CPU time. Perhaps the credit is based on wall clock instead of CPU time.
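A quick check on those numbers: the ratio of CPU time to wall-clock time shows how many cores the multithreaded task actually kept busy on the 4-core machine.

```python
# Figures from the post above: CPU seconds vs. wall-clock seconds
# for one multithreaded unit on a 4-core desktop.
cpu_seconds = 2706.26
wall_seconds = 739.62

effective_cores = cpu_seconds / wall_seconds
print(round(effective_cores, 2))  # -> 3.66, i.e. ~3.7 of 4 cores busy on average
```

So the task parallelizes well on that box; the credit discrepancy isn't explained by idle cores.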

I've got 2 runs of these tasks, each lasting about 30 hours on a Phenom II X4 3.2 GHz (at 85% load), while the predicted running time was 10 times smaller. Is that OK? [It's the biggest runtime I've ever had here.]

I have fixed the underlying issue. It was necessary for the science that these runs stay up for this length of time. This issue will not occur in the future; you should stop receiving problematic workunits within the next week.

The workunits take vastly different amounts of time to complete. This is a problem that we at MW@Home have been working on in order to assign appropriate credit to crunchers. Our goal is ultimately to perfect this and to prevent the assignment of non-useful simulations that are time-expensive. You are right to say that, for the same simulation, the wall clock time on your 4-core machine should be half that of your dual-core AMD machine. I can go into more detail: if you send me a private message, I would be glad to explain the science of why the workunits are so difficult to assess computationally. I hope that I can answer any questions you may have.

Jake

Have you talked with David Anderson about CreditNew's crediting of MT tasks? I mean *really* talked to him, challenging his position with facts, rather than just receiving the standard speech that it works?