Every time I refresh to see how many BTC I should have, the number goes down. Why is that?

Holy f%^#% I just made myself lose like 0.01 BTC...

That is happening because, in the time between refreshes, other people have submitted their shares, which increases their earnings and decreases yours by a small amount. Likewise, you will notice that right after you submit a share, your earnings will have increased by a little bit.
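The mechanics can be sketched as a simple proportional split of the block reward. This is a toy model, not the pool's actual formula, and `estimated_payout` is a hypothetical helper:

```python
# Toy model: the pool splits the block reward in proportion to each
# miner's fraction of all shares submitted in the current round.

BLOCK_REWARD = 50.0  # BTC per block at the time

def estimated_payout(my_shares, total_shares, reward=BLOCK_REWARD):
    """Your projected cut if the block were found right now."""
    return reward * my_shares / total_shares

# First refresh: you hold 100 of 10,000 total shares.
before = estimated_payout(100, 10_000)   # 0.5 BTC

# Second refresh: others submitted 500 more shares, you submitted none,
# so your slice of the same reward shrinks.
after = estimated_payout(100, 10_500)    # ~0.476 BTC
```

Submitting a share yourself grows the numerator as well as the denominator, which is why your estimate ticks back up right after a share goes in.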

If people with CPU miners are unsure whether they should join this pool (i.e., whether they can live up to the 1% efficiency requirement), here are some numbers from the fastest of my CPU miners (and you will see it's pretty slow):

I'm running jgarzik's cpuminer single-threaded with the sse2_64 algorithm on a Core2Duo E6550 @ 2.33 GHz and a scantime of 7 seconds, which gives it a speed of 2,452 ± 272 khash/s (p=.95, based on 56,533 getworks).

Now, out of those 56,533 getworks the miner has delivered 224 POWs. Ladies and gentlemen, that's a whopping efficiency of 0.396% (and so this miner doesn't qualify)!
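The arithmetic behind that figure is just shares over getworks, using the numbers from the post above:

```python
# "Efficiency" as the pool defines it: shares (POWs) per getwork fetched.
getworks = 56_533
shares = 224

efficiency = shares / getworks * 100
print(f"efficiency = {efficiency:.3f}%")  # ~0.396%, far below the 1% bar
```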

That said, I do like the share-based payout system…

Cheers,

OK, we made a change. Auto-banning is turned off for now. CPU miners are safe.

We will no longer use the 1% efficiency figure as grounds for banning people. Instead, if your miner makes more than 30 getwork requests AND has not submitted a share in 60 seconds, you'll be temporarily banned for 5 minutes. If your miner is getting a new getwork every 5 to 10 seconds AND has not submitted a share in 60 seconds, you'll be OK.

We are trying to prevent getwork flooding, or what some have called the "fire hose" method of trying to find shares, because it's horribly inefficient and ultimately cuts the CPU miners short. Working on each getwork for more than 5 or 10 seconds is preferred, but you probably don't want to work on the same getwork for longer than the current average time it takes to find a block (based on an average of the last 100 or 1000 blocks).
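The rule above could be implemented as simple per-miner bookkeeping on the server. This is a hypothetical sketch; `MinerState`, the field names, and the exact accounting are assumptions, not the pool's actual code:

```python
import time

GETWORK_LIMIT = 30    # more than this many getworks...
SHARE_WINDOW = 60.0   # ...with no share in this many seconds => ban
BAN_DURATION = 300.0  # 5-minute temporary ban

class MinerState:
    """Per-miner counters the server would track."""
    def __init__(self):
        self.getworks = 0
        self.last_share = time.time()
        self.banned_until = 0.0

def on_getwork(m, now=None):
    """Handle an incoming getwork request; returns 'ok' or 'banned'."""
    now = time.time() if now is None else now
    if now < m.banned_until:
        return "banned"
    m.getworks += 1
    if m.getworks > GETWORK_LIMIT and (now - m.last_share) > SHARE_WINDOW:
        m.banned_until = now + BAN_DURATION
        return "banned"
    return "ok"

def on_share(m, now=None):
    """A submitted share resets the clock and the getwork count."""
    m.last_share = time.time() if now is None else now
    m.getworks = 0
```

Note that a miner fetching one getwork every 5 to 10 seconds makes at most about 12 requests per minute, so it never comes near the 30-request limit, matching the "you'll be OK" case above.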

And I do not think ufasoft's miner works, at least not for me... what SSE2 miners work?

Honestly, I'm not sure. After spending $45 on 64 CPUs from Amazon's EC2 cloud for only 24 hours and getting almost nothing in return, I abandoned CPU miners altogether (personally speaking). I spent the money, got a couple of GPUs, and the entire time I was testing the pool I used GPUs. Geebus may have tested some CPUs.

So any and all feedback from the CPU miners out there is a BIG help to us. Thank you in advance.

For a good basis of comparison, the original poclbm sits at around 17% efficiency on my cards, with a 10s askrate.

While I've always felt more comfortable iterating over the full getwork (to the point of writing my own OpenCL miner integrated into bitcoind just for that), the mathematics you are describing is really a flawed kind of statistical proof. Yes, you do get much better coverage of the getwork (which is what you call efficiency), and the original miner just throws away the rest of the work once a solution is found, so it is never as "efficient". But that doesn't mean it is slower or worse, since we are basically trying to find a solution in a random pool of numbers.

So you count the number of solutions versus the number of value pools, whereas the original miner counts the number of value pools with at least one solution. In the end, I would argue the success rate will be somewhat higher for your miner because the network lag is smaller, since it doesn't request a new work set for each solution found, but that's about it. The 17% vs. 90% description is really misleading!
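That intuition (same share rate per hash, just more getwork round-trips for the restarting miner) can be checked with a toy simulation. All constants here are artificial and chosen so it runs quickly; in reality a share is found with probability roughly 2^-32 per hash:

```python
import random

random.seed(42)
P_SHARE = 1e-4            # toy per-hash share probability (not realistic)
TOTAL_HASHES = 1_000_000  # identical hashing budget for both strategies
WORK_SIZE = 10_000        # toy number of nonces per getwork

def mine(full_scan):
    """Spend the hash budget; return (shares found, getworks fetched)."""
    shares = getworks = hashed = 0
    while hashed < TOTAL_HASHES:
        getworks += 1
        for _ in range(WORK_SIZE):
            hashed += 1
            if random.random() < P_SHARE:
                shares += 1
                if not full_scan:
                    break  # original miner: discard the rest, fetch new work
            if hashed >= TOTAL_HASHES:
                break
    return shares, getworks

full_shares, full_works = mine(True)
restart_shares, restart_works = mine(False)
# Shares per hash come out statistically the same; the restarting miner
# simply issues more getworks, which is what drags its "efficiency" down.
```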

Quote

For a good basis of comparison, the original poclbm sits at around 17% efficiency on my cards, with a 10s askrate.

While I've always felt more comfortable iterating over the full getwork (to the point of writing my own OpenCL miner integrated into bitcoind just for that), the mathematics you are describing is really a flawed kind of statistical proof. Yes, you do get much better coverage of the getwork (which is what you call efficiency), and the original miner just throws away the rest of the work once a solution is found, so it is never as "efficient". But that doesn't mean it is slower or worse, since we are basically trying to find a solution in a random pool of numbers.

You are correct; it is not slower or faster in finding shares. Our focus was simply on reducing the resources (bandwidth and CPU time) the server has to spend.

Quote

So you count the number of solutions versus the number of value pools, whereas the original miner counts the number of value pools with at least one solution. In the end, I would argue the success rate will be somewhat higher for your miner because the network lag is smaller, since it doesn't request a new work set for each solution found, but that's about it.

I don't think our modified miner finds more shares or blocks; all the tests I've done show about the same number of shares. We're hoping that, with more miners finding the same number of shares in a given time while putting less load on the server, we'll be able to support more users. We're also hoping that more users means more blocks solved.

Quote

The 17% vs 90% description is really misleading!

I don't believe so. Perhaps our description of what we mean by "efficiency" was misunderstood, or lacked a sufficient explanation. Sorry for the misunderstanding.

Well, it is worth what it is worth. I'm now using your miner exclusively, so there you go.

Don't let my words be taken the wrong way; you've done amazing work. Not only did you get the miner to search the full key space, which is the way I like it, you also give out very helpful information while mining, something that was really lacking. Kudos to you, and BTC too, once this miner renders me some.

There is no guarantee that a solution exists in a 'getwork' data unit.

True. Then again, some getworks will have 2, 3, 4, 5, 6, or more answers; the most I've seen is 7 answers found in one getwork request. The more time you give to a sample period, the closer you get to a 1:1 ratio of [shares submitted]:[getwork requests].
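Under an idealized model, those counts follow a Poisson distribution: a getwork spans 2^32 nonces and a difficulty-1 share turns up with probability about 2^-32 per hash, so a fully scanned getwork yields on average one share, Poisson-distributed. A small sketch (the `p_shares` helper is mine, not the pool's):

```python
import math

def p_shares(k, mean=1.0):
    """Poisson probability of exactly k shares in one fully scanned getwork."""
    return math.exp(-mean) * mean**k / math.factorial(k)

print(f"P(0) = {p_shares(0):.3f}")   # ~0.368: many getworks hold no share at all
print(f"P(7) = {p_shares(7):.1e}")   # ~7.3e-05: 7 shares is rare but possible
```

The mean of one share per fully scanned getwork is also why, over a long enough sample period, the shares:getworks ratio approaches 1:1.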

Quote

Better to rename "efficiency" as "luck." It is more clear.

No, because the whole process of mining is based around "luck".

When we talk about "efficiency", we're talking about how much bandwidth and how many server resources the miner causes us to use. Our goal was to reduce that resource usage to a minimum while making sure the client isn't negatively impacted.