I can't figure this out. I get how the system works as a whole from a programming and infrastructure standpoint but the math and progress parts are still fuzzy.

So it's a log/database structured as a chain of blocks. Each block contains some sort of data linking it to all previous blocks. The chain contains the entire transaction history of the system. The blocks adjust their difficulty based on the number of people processing them to maintain a 10-minute completion interval per block. All transactions are somehow tied into whatever the current block is, and once it's done processing, tada, it's in the log/block chain/database. The calculations are seriously complicated, so that no one person or network of computers could re-process the entire chain to make fraudulent transactions.

So assuming that's all accurate, if I set it on solo and start working on a block and stop working on it before it's solved, that doesn't benefit the system in any way, right? In fact, that really brings up tons more questions about soloing. How could that possibly work? I take it I'm not just grabbing a fresh block and saying "I'll handle this one" and then taking like months to finish it, and whatever poor guy had his transaction included in that block isn't gonna have it go through until it's done. That wouldn't work real well. And why don't transactions take 10 minutes to complete? I heard they're close to instant. But if your transaction has to be put in a block, completed, and have that block added to the chain before the transaction is official, wouldn't that take 10 minutes or less, randomly? And if there's only ever one block being processed at a time every 10 minutes and completion results in no more than 50 coins, everyone is working on the same block, right? Cuz I thought pools and soloers get their own block. If the resulting payout is in fact 50/block, do pools and soloers just take turns or what? Or does the first pool to solve it get 50 and nuts to the others who were working on it?

I think your confusion is that you think the next block is decided first, and then people work on it. It's the other way around: every miner decides for himself what "block candidate" to work on. He decides which transactions, among the floating transactions he knows, will be included in it. Since everyone should know all transactions, the sender doesn't rely on any single miner for his transaction to be included. The miner constructs the header to work on based on the Merkle root of the transactions, the hash of the previous block, and so on, and starts hashing with different nonces until he finds a hash satisfying the difficulty requirement. Only when he finds it does he broadcast it as the next valid block. If someone else beat him to finding a valid hash, the next block would be different (but would still include more or less the same transactions).

Since it's random, there's no "progress" towards finding a block which the miners need to synchronize about.
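
As a sketch of that search loop (not real Bitcoin serialization: the header layout, the 8-byte nonce, and the toy target are all simplified here for illustration):

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin hashes block headers with double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(prev_hash: bytes, merkle_root: bytes, target: int) -> tuple[int, bytes]:
    """Try nonces until the header hash falls below the target."""
    nonce = 0
    while True:
        header = prev_hash + merkle_root + nonce.to_bytes(8, "little")
        digest = sha256d(header)
        if int.from_bytes(digest, "big") < target:
            return nonce, digest  # found a valid block; broadcast it
        nonce += 1

# Toy difficulty: any hash whose first byte is zero qualifies (~1 in 256 tries).
target = 1 << 248
nonce, digest = mine(b"\x00" * 32, b"\xab" * 32, target)
print(nonce, digest.hex())
```

If someone else broadcasts a valid block first, the miner throws this candidate away, updates the previous-block hash and the transaction set, and starts the loop over.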

Quote

So assuming that's all accurate, if I set it on solo and start working on a block and stop working on it before it's solved, that doesn't benefit the system in any way, right?

It turns out not to have benefited, but you didn't know that before you tried. It's like saying buying fire insurance is of no benefit if you never have a fire.

Quote

In fact, that really brings up tons more questions about soloing. How could that possibly work? I take it I'm not just grabbing a fresh block and saying "I'll handle this one" and then taking like months to finish it and whatever poor guy had his transaction included in that block isn't gonna have it go through until it's done. That wouldn't work real well.

No, everyone tries to generate the next block. If you do it first, you win. (Oversimplifying a bit.)

Quote

And why don't transactions take 10 minutes to complete? I heard they're close to instant. But if your transaction has to be put in a block, completed, and have that block added to the chain before the transaction is official, wouldn't that take 10 minutes or less, randomly?

It depends on your definition of 'complete'. You might consider the transaction complete as soon as it has been accepted by the network even if it hasn't gotten into a block yet on the reasonable belief that it will get into a block shortly. Strictly speaking, certainty is never possible, but then the Earth could get blown up in five minutes.

Quote

And if there's only ever one block being processed at a time every 10 minutes and completion results in no more than 50 coins, everyone is working on the same block, right? Cuz I thought pools and soloers get their own block. If the resulting payout is in fact 50/block, do pools and soloers just take turns or what? Or does the first pool to solve it get 50 and nuts to the others who were working on it?

Everyone tries to generate the next block. Each miner has their own 'skeleton' that they're trying to turn into the next block to avoid wasted effort. Whoever does so first (again, oversimplifying a bit) wins, their block becomes part of the public hash chain, and they get 50 bitcoins.

Really, no effort is wasted. Even if you fail to find a block, the statistical difficulty you added to the public hash chain makes it that much more difficult for anyone else to launch a double-spending attack. To make such an attack work, the attacker must generate blocks faster than the rest of the world combined. Every additional miner makes the attack that much harder.

I am an employee of Ripple. Follow me on Twitter @JoelKatz

Aha, I see. So the also horribly oversimplified version is basically the network broadcasts a difficulty in the form of a range of hashes that would work. Like find a hash lower than 00000000000004a5601c621798d1da9d48b203c87a31f2fb0bd53af8e6ca312b and your block wins.

Everyone starts working on a block whenever they hit go in their miner and whether they stop or start, it's the same theoretical block they're working on. Once a random value gets turned into a hash that meets the requirements, they win 50 bitcoins + any transaction fees and I assume the transactions are officially added to the block once it's established that it's a completed block.

So then the 10 minute interval is just probability based and accurate due to volume of hash tries on the network as a whole? But theoretically, two people could come up with a completed block inside 10 minutes or everyone could take longer than 10 minutes, right?
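
That 10-minute figure really is just an average over random trials, and a quick simulation makes the spread visible. Treat the whole network as making one success-or-fail attempt per second with a small fixed probability (the numbers below are invented, tuned so the mean comes out near 600 seconds):

```python
import random

random.seed(42)

def block_time(p_per_second: float) -> int:
    """Seconds until someone on the network finds a block."""
    t = 1
    while random.random() >= p_per_second:
        t += 1
    return t

p = 1 / 600  # tuned so blocks average ~10 minutes
times = [block_time(p) for _ in range(5_000)]
avg = sum(times) / len(times)
print(f"average {avg:.0f}s, fastest {min(times)}s, slowest {max(times)}s")
```

The average hovers near 600 seconds, but individual blocks range from a few seconds to the better part of an hour: some pairs of blocks land well inside 10 minutes, and some single blocks take far longer.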

Assuming that's all correct, I'm gonna go out on a limb and assume nobody created a rainbow table for this size of base nonces, right? Cuz otherwise it sounds pretty secure, since hashes are one-way and you couldn't rig a client to purposely generate a lower hash in, say, 1 try, since the resulting hash from any value is random.

But what's to stop someone from generating random hashes like everyone else and then when they find a hash that's REALLY low, they'll tell their client to sit on that value and not "turn it in" yet. Then years from now, when the hash range gets lower, they'll turn in a bunch of super low ones in a row and throw off the coin generation timing.

Quote

Aha, I see. So the also horribly oversimplified version is basically the network broadcasts a difficulty in the form of a range of hashes that would work. Like find a hash lower than 00000000000004a5601c621798d1da9d48b203c87a31f2fb0bd53af8e6ca312b and your block wins.

Everyone starts working on a block whenever they hit go in their miner and whether they stop or start, it's the same theoretical block they're working on. Once a random value gets turned into a hash that meets the requirements, they win 50 bitcoins + any transaction fees and I assume the transactions are officially added to the block once it's established that it's a completed block.

So then the 10 minute interval is just probability based and accurate due to volume of hash tries on the network as a whole? But theoretically, two people could come up with a completed block inside 10 minutes or everyone could take longer than 10 minutes, right?

Right. But every block references the last block. Once a block is found and broadcast, people will start referencing it in their new blocks.
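
That chaining can be modeled in a few lines (invented field names, single SHA-256 instead of Bitcoin's double hash) to show why referencing the previous block makes history tamper-evident:

```python
import hashlib

def block_hash(block: dict) -> str:
    """Hash of a toy block; 'prev' chains it to its predecessor."""
    return hashlib.sha256(f"{block['prev']}|{block['data']}".encode()).hexdigest()

# Build a tiny chain: each block commits to the hash of the one before it.
chain = [{"prev": "0" * 64, "data": "genesis"}]
for data in ["tx batch A", "tx batch B"]:
    chain.append({"prev": block_hash(chain[-1]), "data": data})

def valid(chain: list) -> bool:
    """Every block must reference the actual hash of its predecessor."""
    return all(b["prev"] == block_hash(a) for a, b in zip(chain, chain[1:]))

ok_before = valid(chain)
chain[1]["data"] = "fraudulent tx batch"  # rewrite history...
ok_after = valid(chain)
print(ok_before, ok_after)  # ...and every later link breaks
```

Changing any old block changes its hash, so every block built on top of it stops linking up, which is why a withheld or stale block can't just be slotted in later.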

Quote

But what's to stop someone from generating random hashes like everyone else and then when they find a hash that's REALLY low, they'll tell their client to sit on that value and not "turn it in" yet. Then years from now, when the hash range gets lower, they'll turn in a bunch of super low ones in a row and throw off the coin generation timing.

Every block references the previous block, and the longest chain is considered the valid one. If a new block is broadcast which references a very old block, it will not be part of the longest chain and will have no influence.

Quote

But what's to stop someone from generating random hashes like everyone else and then when they find a hash that's REALLY low, they'll tell their client to sit on that value and not "turn it in" yet. Then years from now, when the hash range gets lower, they'll turn in a bunch of super low ones in a row and throw off the coin generation timing.

Because it won't be valid years from now. If they're working on block 135,023, years from now we'll be working on block 835,971. A truly awesome block 135,025 won't do us any good.

I think you're missing a very important detail -- the whole point of all this work is that it is done after all the previous blocks in the chain. The purpose of all this computing power is to pile that computing power on top of the transactions so that an attacker would have to do all those computations himself to undo a transaction.

The miner is working to create a particular block with a particular previous block and a particular set of transactions.

The genesis block, by the way, contains an excerpt from a current news story just to prove that Satoshi wasn't working on a longer chain for months already. (To prove that the very first block wasn't actually created long before it was claimed to have been. For all other blocks, they link to the previous block so it's not an issue.)

Okay, I think I've got this 95% figured out then. I thought it was just random numbers alone that people were hashing, and didn't know that actual data from the last block was involved. So then logically it has to be both: one piece of data from the last block would hash into one hash and that's that, so I take it it somehow appends a random number to the former block's data and then THAT's what it hashes. So if the last block's data is "abcdefg!", your miner tries abcdefg!1982374092183, and if that hash is below 0000000xxxxxxxxxxxxxxxxxxxxxxxxx first, then your block is the complete one for that round. If it isn't, it tries abcdefg!065034578923473247. Then, when it thinks it has a correct number, it submits it, and many many many people verify that the block contains the last block's code and the random number you came up with, and that the resulting hash qualifies, and that's that.
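
That description translates almost directly into code, here with a single SHA-256 and a deliberately easy, made-up target so it finishes quickly:

```python
import hashlib

prev_block_data = b"abcdefg!"  # stand-in for the previous block's hash
target = 1 << 244              # toy target: roughly 1 in 4096 hashes qualifies

nonce = 0
while True:
    attempt = hashlib.sha256(prev_block_data + str(nonce).encode()).digest()
    if int.from_bytes(attempt, "big") < target:
        break  # this nonce "completes" the block
    nonce += 1

print(f"winning nonce: {nonce}, hash: {attempt.hex()}")
```

Verification is the cheap part: anyone can rehash abcdefg! plus the submitted nonce once and check the result against the target, even though finding the nonce took thousands of tries.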

So that means that the pool that didn't win has to drop the block they were attempting to form and start all over again? That sucks lol. Just when I thought it couldn't get less power efficient lol.

Quote

So that means that the pool that didn't win has to drop the block they were attempting to form and start all over again? That sucks lol. Just when I thought it couldn't get less power efficient lol.

No... Or else smaller pools wouldn't survive. Deepbit solves blocks in a few minutes, for example... But I don't know the exact details of why it doesn't happen. Maybe the network assigns different jobs to different pools/users?

Quote

So that means that the pool that didn't win has to drop the block they were attempting to form and start all over again?

There's no "start over". There's no progress towards finding a block. It's random. A pool has, say, 0.01% chance of finding a block every second whether it's working on a new block or on the same block for hours.

Quote

That sucks lol. Just when I thought it couldn't get less power efficient lol.

Power efficiency has nothing to do with it. The system is designed to be stable as long as no single entity controls >50% of the hashing capacity. Whatever amount of hashing needs to be done (and hence power consumption) to ensure this is the amount that will be required. Everything else is implementation details.

Quote

So that means that the pool that didn't win has to drop the block they were attempting to form and start all over again? That sucks lol. Just when I thought it couldn't get less power efficient lol.

Quote

No... Or else smaller pools wouldn't survive. Deepbit solves blocks in a few minutes, for example... But I don't know the exact details of why it doesn't happen. Maybe the network assigns different jobs to different pools/users?

It's not a race. In a race, whether you win or lose depends on what the others do. But here, for a given difficulty, the chance of finding blocks is completely independent of how many blocks others find.

I think the question was: since you need the previous block to find the hash for the next one, isn't everyone trying to find the same block at each moment?

If the last block is #136352, everyone is trying to find block #136353. But everyone tries to find a different block #136353.

The network doesn't "assign" jobs; everyone composes his candidate himself, as long as it follows the rules. The most important variable is the receiving address of the generation transaction; everyone uses one of his own for this.
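
Concretely: even if two miners pick exactly the same floating transactions, paying the generation transaction to their own addresses already makes their candidates different blocks. A toy sketch (made-up addresses, a plain hash standing in for the Merkle root):

```python
import hashlib

shared_txs = ["alice->bob:5", "carol->dave:2"]  # transactions everyone knows

def candidate(coinbase_address: str) -> bytes:
    """Toy block candidate: generation tx to the miner's own address, then the rest."""
    txs = [f"generation->{coinbase_address}:50"] + shared_txs
    root = hashlib.sha256("|".join(txs).encode()).digest()  # stand-in Merkle root
    prev_block_hash = b"\x00" * 32
    return prev_block_hash + root

miner_a = candidate("miner_A_address")
miner_b = candidate("miner_B_address")
print(miner_a == miner_b)  # False: same transactions, different candidate blocks
```

Since the candidates differ, the nonces each miner burns through explore different hashes; no coordination or job assignment is needed to avoid duplicating work.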

Quote

I think the question was: since you need the previous block to find the hash for the next one, isn't everyone trying to find the same block at each moment?

That and a whole lot of others now. This is certainly different from what I imagined after reading awfully close to 100% of the documentation in various places, and I'm a programmer with a math tutoring background. Can anyone explain the exact mining process in overly-simplified, almost cartoonish objects?

The chain is a straight line of blocks, right?

You say if the last block is #136352, everyone is trying to find block #136353. And "find" means take data from the last block, mix it with a different random value every attempt, and then hash it. So they all start chugging away, and if the hash is below [insert low hash value based on the current difficulty rating here], then you made block 136353. Other people verify it and tada: 50 bitcoins + transaction commissions for everyone involved, proportionate to the work they did. But then the current block is incremented to #136353, which was just created, and every pool has to hash based on data from that new block, which means dumping the old block's calculations. But it's not detrimental, because the probability of finishing first remains the same as it was for the last block.

Which sort of suggests something unbelievably unwise about how this system works, not to mention a massive vulnerability, but that's more for another post.

Quote

I think the question was: since you need the previous block to find the hash for the next one, isn't everyone trying to find the same block at each moment?

That and a whole lot of others now. This is certainly different from what I imagined after reading awfully close to 100% of the documentation in various places, and I'm a programmer with a math tutoring background. Can anyone explain the exact mining process in overly-simplified, almost cartoonish objects?

The chain is a straight line of blocks, right?

You say if the last block is #136352, everyone is trying to find block #136353. And "find" means take data from the last block, mix it with a different random value every attempt, and then hash it. So they all start chugging away, and if the hash is below [insert low hash value based on the current difficulty rating here], then you made block 136353. Other people verify it and tada: 50 bitcoins + transaction commissions for everyone involved, proportionate to the work they did. But then the current block is incremented to #136353, which was just created, and every pool has to hash based on data from that new block, which means dumping the old block's calculations. But it's not detrimental, because the probability of finishing first remains the same as it was for the last block.

Sounds about right. Don't forget everyone also chooses what transactions to include, puts them in a Merkle tree and uses its root as part of the hashed block header.
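
That Merkle step can be sketched as follows. Bitcoin really does use double SHA-256 and pairs the last hash with itself when a level has an odd count; the "transactions" below are just placeholder strings:

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    """Double SHA-256, as Bitcoin uses for transactions and headers."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(tx_hashes: list[bytes]) -> bytes:
    """Pair up hashes level by level; duplicate the last one when a level is odd."""
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = [b"tx1", b"tx2", b"tx3"]
root = merkle_root([sha256d(tx) for tx in txs])
print(root.hex())
```

Changing any one transaction changes the root, which changes the header, which invalidates any nonce already found, so the chosen transaction set really is locked into the proof of work.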

Someone could just write a rigged client that would not generate random values and instead use sequential values so none are ever tried twice for any given block, giving them a ridiculously large advantage over everyone else. If you're wondering how I got to that conclusion, see my other post.

EDIT: oh, apparently they all do that despite lots of reports that it's trying "random" values.

Quote

Someone could just write a rigged client that would not generate random values and instead use sequential values so none are ever tried twice for any given block, giving them a ridiculously large advantage over everyone else. If you're wondering how I got to that conclusion, see my other post.

There's no difference between these two things. The order in which numbers are tried makes no difference. If you try 500,000 random numbers or 500,000 sequential numbers, you still have precisely the same chance of finding a block on each one, assuming you don't repeat. That something is "random" doesn't mean it can't have other properties such as not repeating. Otherwise, it would be impossible to put a deck of cards in a random order, since the same card never appears twice.
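
This is easy to check empirically: with a toy 16-bit "hash" and an invented target, the same set of nonces yields exactly the same number of winners whether scanned in order or shuffled:

```python
import hashlib
import random

random.seed(1)

def toy_hash(nonce: int) -> int:
    """16-bit toy hash so qualifying values are frequent enough to count."""
    return int.from_bytes(hashlib.sha256(str(nonce).encode()).digest()[:2], "big")

TARGET = 6554  # ~10% of the 16-bit space qualifies
N = 20_000

sequential = sum(toy_hash(n) < TARGET for n in range(N))
shuffled_order = random.sample(range(N), N)  # same nonces, random order
shuffled = sum(toy_hash(n) < TARGET for n in shuffled_order)

print(sequential, shuffled)  # identical: order can't change which nonces win
```

Sequential scanning is simply the cheapest way to guarantee no repeats; it confers no advantage beyond that.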
