
Topic: Decentralized mining protocol standard: getblocktemplate (ASIC ready!)

legendary
Activity: 2730
Merit: 1034
Needs more jiggawatts
I missed something about SatoshiDice. If they always make sure that the output of your betting transaction is one of the inputs for the transaction paying out your winnings then you can't both get the winnings and your bet back. But you can still take your coins back if you lose. So you'd win almost every bet, except when the block for your Finney attack is orphaned.

Anyway, my point was that if you can't create an automated way to make Bitcoin/mining safer by use of the transaction data while mining, then it makes no sense for it to be mandatory to push all that redundant data back and forth.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
...
So yeah - NO one is gonna make much money out of that.

Actually, if you look at the SD txns, you'll see that many people pull it off all the time.  I still have zero-confirmation payouts in my wallet from back when I was messing with SD months and months ago, because somewhere along the line someone cheated them out of a payout and it broke the whole chain of transactions so that none could confirm.

Basically once you see if you lost or not, you try to get a replacement transaction mined that displaces your losing bet.  With enough hash power or control over txns of enough hash power, it'd be trivial.
List this many ...
legendary
Activity: 2730
Merit: 1034
Needs more jiggawatts
Basically once you see if you lost or not, you try to get a replacement transaction mined that displaces your losing bet.  With enough hash power or control over txns of enough hash power, it'd be trivial.

You can first mine a block containing a transaction that displaces your bet. After you successfully create the block, hold it back. Now you place some bets and get a winning payout from SD. Finally you release the block to the bitcoin network, getting your bet back as well. (= a Finney attack)

Sometimes your block will be orphaned and your bet will be lost. But in most cases you get the winnings (if any) AND your bet. An orphaned block also means losing 50 BTC (soon 25), and holding a block back increases the chance of that happening. But if each of your blocks contains bets for hundreds of bitcoins then the profits should far outweigh those occasional losses.

I don't see any way for GBT to fix this. Services just have to stop accepting payments at zero confirmations.
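To put the "profits should far outweigh those occasional losses" trade-off in rough numbers, here is a minimal expected-value sketch. Every figure in it (orphan risk, bets reclaimed per block) is an illustrative assumption, not a measurement:
Code:
# Rough expected-value sketch of the Finney-attack trade-off described above.
# All numbers are illustrative assumptions, not measurements.

block_reward   = 25.0    # BTC forfeited if the withheld block ends up orphaned
orphan_risk    = 0.10    # assumed chance the withheld block loses the race
reclaimed_bets = 100.0   # assumed BTC of losing bets clawed back per attack

# Successful attack: losing bets are refunded on top of any normal winnings.
# Failed attack: the block reward is lost and the bets stand as placed.
expected_gain = (1 - orphan_risk) * reclaimed_bets - orphan_risk * block_reward
print("expected extra gain per withheld block: %.1f BTC" % expected_gain)
With these assumptions the attack nets roughly 87.5 BTC per withheld block, which is the sense in which the occasional orphan is cheap by comparison.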
legendary
Activity: 1223
Merit: 1006
I think it is more for the 0-1 confirmation double spends.  I don't think that there is currently much risk of a 6 confirmation double spend from any single pool.  However, any pool could attack something like satoshidice or other 0 conf service with minor luck.

Less than 6 confirmations should involve only very small sums. I think in that case it would be good enough to detect after the fact. So a mining pool may be able to help someone pull off a 1 bitcoin "heist". But it would have huge consequences for the pool afterwards.

In the case of 0 conf it may be necessary to have some miners detecting it as it happens. I'm not sure if there is any evidence otherwise. But what if you spread 2 conflicting transactions to different parts of the bitcoin network. When a pool creates a block with one of those transactions it may look like a Finney attack, while the pool is really just including the transaction it sees. Anyone can do this to make a pool look evil, so detecting something like that doesn't actually mean anything.

Is SatoshiDice really 0 conf? So if you have some hashpower you can easily defraud them of large sums using a Finney attack?

No, you can make bets and try to undo them and spend the money some other way ... after you learn if they lose.
So yeah - NO one is gonna make much money out of that.

Actually, if you look at the SD txns, you'll see that many people pull it off all the time.  I still have zero-confirmation payouts in my wallet from back when I was messing with SD months and months ago, because somewhere along the line someone cheated them out of a payout and it broke the whole chain of transactions so that none could confirm.

Basically once you see if you lost or not, you try to get a replacement transaction mined that displaces your losing bet.  With enough hash power or control over txns of enough hash power, it'd be trivial.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
I think it is more for the 0-1 confirmation double spends.  I don't think that there is currently much risk of a 6 confirmation double spend from any single pool.  However, any pool could attack something like satoshidice or other 0 conf service with minor luck.

Less than 6 confirmations should involve only very small sums. I think in that case it would be good enough to detect after the fact. So a mining pool may be able to help someone pull off a 1 bitcoin "heist". But it would have huge consequences for the pool afterwards.

In the case of 0 conf it may be necessary to have some miners detecting it as it happens. I'm not sure if there is any evidence otherwise. But what if you spread 2 conflicting transactions to different parts of the bitcoin network. When a pool creates a block with one of those transactions it may look like a Finney attack, while the pool is really just including the transaction it sees. Anyone can do this to make a pool look evil, so detecting something like that doesn't actually mean anything.

Is SatoshiDice really 0 conf? So if you have some hashpower you can easily defraud them of large sums using a Finney attack?

No, you can make bets and try to undo them and spend the money some other way ... after you learn if they lose.
So yeah - NO one is gonna make much money out of that.
legendary
Activity: 2730
Merit: 1034
Needs more jiggawatts
I think it is more for the 0-1 confirmation double spends.  I don't think that there is currently much risk of a 6 confirmation double spend from any single pool.  However, any pool could attack something like satoshidice or other 0 conf service with minor luck.

Less than 6 confirmations should involve only very small sums. I think in that case it would be good enough to detect after the fact. So a mining pool may be able to help someone pull off a 1 bitcoin "heist". But it would have huge consequences for the pool afterwards.

In the case of 0 conf it may be necessary to have some miners detecting it as it happens. I'm not sure if there is any evidence otherwise. But what if you spread 2 conflicting transactions to different parts of the bitcoin network. When a pool creates a block with one of those transactions it may look like a Finney attack, while the pool is really just including the transaction it sees. Anyone can do this to make a pool look evil, so detecting something like that doesn't actually mean anything.

Is SatoshiDice really 0 conf? So if you have some hashpower you can easily defraud them of large sums using a Finney attack?
legendary
Activity: 1223
Merit: 1006
What harm can a pool op do when miners don't see the transactions until after a block is created? What exactly would a "dirty job" be?
Double spending, for example.

I think it is more for the 0-1 confirmation double spends.  I don't think that there is currently much risk of a 6 confirmation double spend from any single pool.  However, any pool could attack something like satoshidice or other 0 conf service with minor luck.
legendary
Activity: 2730
Merit: 1034
Needs more jiggawatts
What harm can a pool op do when miners don't see the transactions until after a block is created? What exactly would a "dirty job" be?
Double spending, for example.

I always thought about defenses against this in the direction of deciding which fork to build on. So if you have your own bitcoind or public service that you trust, you can refuse to help a pool build more than 2 blocks on a fork parallel to the "main chain" as defined by your trusted source. This should make 6-confirmation double spends practically impossible if a majority of miners followed this scheme, I think?

To implement this you only need a trusted source that you can query for the current height and the block hash at height X. It should work even with getwork as you only need the "prevblockhash" from the block you are mining. A complication is getting the prevblockhash from a block your trusted source doesn't know about (yet).
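As a rough sketch of that check (getblockcount and getblockhash are standard bitcoind RPCs; the rpc() helper and the two-block tolerance are placeholders for illustration):
Code:
# Sketch: refuse work whose prevblockhash deviates from a trusted node's
# view of the chain by more than a small tolerance. rpc() stands in for
# any JSON-RPC client pointed at the trusted bitcoind.

MAX_FORK_DEPTH = 2  # assumed tolerance before refusing the work

def acceptable_work(prev_block_hash, rpc):
    """Return True if prev_block_hash is on (or very near) the trusted
    node's best chain."""
    tip_height = rpc("getblockcount")
    for height in range(tip_height, tip_height - MAX_FORK_DEPTH - 1, -1):
        if rpc("getblockhash", height) == prev_block_hash:
            return True
    # Unknown hash: either a deeper fork than tolerated, or a block the
    # trusted source simply hasn't seen yet -- the complication noted above.
    return False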

I guess you could also do this by remembering transactions from before a chain reorganization and refuse to include transactions on the new fork that send those coins somewhere else. You'd probably have to force those transactions into the next few blocks.

You could try to do both. Don't be part of a 51% attack, but if one does happen try to get the orphaned transactions into the new fork as quickly as possible. But chances are that when the chain reorg takes place there is a double spend in the new fork already, if the attacker did his job properly.

You could also use a trusted source to see that a transaction you are currently including in a block you are mining is actually a double spend of a transaction from a different fork. But in this case it seems easier to just refuse to deviate much from the main chain.
legendary
Activity: 2576
Merit: 1186
What harm can a pool op do when miners don't see the transactions until after a block is created? What exactly would a "dirty job" be?
Double spending, for example.
legendary
Activity: 2730
Merit: 1034
Needs more jiggawatts
I think GBT would run pretty smoothly if it was allowed to skip the transactions.

What harm can a pool op do when miners don't see the transactions until after a block is created? What exactly would a "dirty job" be?

Filter out transactions? I think miners seeing this a little later is good enough.

Including transactions that should be filtered out? That assumes we should be blacklisting someone. But who/what... stolen coins?
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Pushed a fix for Eloipool and deployed on Eligius:
Code:
2012-11-05 15:57:27 BFGMiner              NOTICE  Stratum from pool 3 detected new block
2012-11-05 15:57:38 newBlockNotification  INFO    Received new block notification
2012-11-05 15:57:40 merkleMaker           INFO    New block: 0000000000000052ff0915fef8c1144e4a16fe380cf416c991e96a230871512c (height: 206605; bits: 1a0513c5)
2012-11-05 15:57:40 BFGMiner              NOTICE  LONGPOLL from pool 0 caught up to new block
2012-11-05 15:57:45 JSONRPCServer         INFO    Longpoll woke up 7704 clients in 5.137 seconds
Now if only slush would share his secret to getting blocks sooner :p
Ah, so you are saying that Stratum is magically predetermining when blocks are going to arrive to cover for why you see it later.
Well, with that suggestion, who would want to use any other protocol :P

Also interesting that you have stated that, for the sake of the poor performance of LP in GBT, you are sending out no transactions with the LP ... yet another poor implementation feature (using LP the same way getwork uses it is also a poor implementation)
I don't know how many times I've thought and said that Tycho must have been drunk the night he thought up LP ...

Also the implemented GBT LP means a % increase in transaction confirmation times when compared to Stratum LP ... and getwork LP ...

So the only supposed performance improvement over Getwork so far is that you can hash a 100TH/s device for 30 seconds but with getwork it will only support a 2TH/s device for 30 seconds ...
(though getwork wins hands down for the amount of data transferred)

Who designed this ... they should be shot.
legendary
Activity: 2576
Merit: 1186
Pushed a fix for Eloipool and deployed on Eligius:
Code:
2012-11-05 15:57:27 BFGMiner              NOTICE  Stratum from pool 3 detected new block
2012-11-05 15:57:38 newBlockNotification  INFO    Received new block notification
2012-11-05 15:57:40 merkleMaker           INFO    New block: 0000000000000052ff0915fef8c1144e4a16fe380cf416c991e96a230871512c (height: 206605; bits: 1a0513c5)
2012-11-05 15:57:40 BFGMiner              NOTICE  LONGPOLL from pool 0 caught up to new block
2012-11-05 15:57:45 JSONRPCServer         INFO    Longpoll woke up 7704 clients in 5.137 seconds
Now if only slush would share his secret to getting blocks sooner :p
legendary
Activity: 2576
Merit: 1186
Almost implemented the first working version of GBT within cgminer.

This sort of behaviour tells the story well though:

Code:
 [2012-11-05 22:28:08] Accepted 23c0ba9a Diff 7/1 GPU 2 pool 2
 [2012-11-05 22:28:08] Accepted 163ec800 Diff 11/1 GPU 0 pool 2
 [2012-11-05 22:28:13] Stratum from pool 3 detected new block
 [2012-11-05 22:28:13] Accepted 61bdf4a7 Diff 2/1 GPU 3 pool 2
 [2012-11-05 22:28:13] Stale share detected, submitting as user requested
 [2012-11-05 22:28:14] Accepted 00feebca Diff 257/1 GPU 1 pool 2
 [2012-11-05 22:28:19] Rejected 70b76eb2 Diff 2/1 GPU 1 pool 2 (stale-prevblk)
 [2012-11-05 22:28:19] Rejected 47b723ef Diff 3/1 GPU 3 pool 2 (stale-prevblk)
 [2012-11-05 22:28:20] Rejected 9644eef7 Diff 1/1 GPU 2 pool 2 (stale-prevblk)
 [2012-11-05 22:28:21] Rejected 4cafa99d Diff 3/1 GPU 1 pool 2 (stale-prevblk)
 [2012-11-05 22:28:22] Rejected 093dc4cd Diff 27/1 GPU 0 pool 2 (stale-prevblk)
 [2012-11-05 22:28:23] GBT LONGPOLL from pool 2 requested work restart
 [2012-11-05 22:28:24] Accepted 375a5178 Diff 4/1 GPU 1 pool 2
 [2012-11-05 22:28:25] Accepted 047206f4 Diff 57/1 GPU 3 pool 2
 [2012-11-05 22:28:25] GBT LONGPOLL from pool 2 requested work restart
 [2012-11-05 22:28:30] Accepted 72bee5f8 Diff 2/1 GPU 0 pool 2
 [2012-11-05 22:28:31] Accepted 8e63856d Diff 1/1 GPU 2 pool 2

Now pool 3 here is slush on stratum (ping time 335ms), while pool 2 is EMC on GBT (ping time 225ms).

Note how stratum picks up the block change and notifies me 10 seconds before GBT does. However, this is not the GBT pool simply learning of the block change later, because it starts rejecting my shares as being from the previous block before I even got the longpoll from the GBT pool. Then of course there's a second longpoll with the transactions, which is not an unusual practice.
Confirming this analysis as accurate... for now. It shouldn't be this much of a difference though, so I suspect there's something going on server-side I need to look into.
Analysis of block 000000000000017eeb9f83b6037b12c26c41abf776c3602b619b0b583ec74b7e:
Code:
1143.46298 Stratum notification of block from slush
1151.00000 Eligius bitcoind finishes processing new block
1152.00000 Eligius eloipool finishes processing new block
1155.90269 Begin receiving GBT longpoll reply from EclipseMC
1156.62722 Begin receiving GBT longpoll reply from Eligius
1162.99977 Finish receiving GBT longpoll reply from Eligius: 822 kB
1167.77308 Finish receiving GBT longpoll reply from EclipseMC: 804 kB
So somehow, slush's pool is getting blocks 8 seconds earlier than EclipseMC and Eligius. That's half the time difference alone.
Now, Eloipool isn't supposed to be including transactions in the first longpoll response (that's what the second one is for). The 804/822 kB responses show that in practice it is including them. That's a bug for me to fix :)
There's also a 4 second delay between eloipool processing the block and my beginning to receive the longpoll. The entire process of queuing the LP data to clients is taking 5.5 seconds on Eligius during new blocks. When there isn't a new block, the same thing takes 2.7 seconds. Presumably fixing the size of the longpoll response should reduce this somewhat, but this is an area where Stratum actually does make a difference: the server would only need to encode the JSON response once and queue it at all clients identically.
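For comparison, a minimal sketch of the "encode once, send identical bytes to everyone" approach (the method name and job fields are made up for illustration; error handling is simplified):
Code:
import json

# Sketch: serialize a new-block notification once and push the same byte
# string to every connected client, instead of re-encoding it per client.

def broadcast_new_block(clients, job):
    # 'job' is whatever minimal data the protocol needs for the new work;
    # the field names here are illustrative only.
    payload = (json.dumps({"method": "notify", "params": job}) + "\n").encode()
    for sock in clients:
        try:
            sock.sendall(payload)  # identical pre-encoded bytes for each client
        except OSError:
            pass  # drop dead connections; real code would clean these up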

legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
I guess this could be improved by having the client include something in its capabilities list to tell the server it wishes to mine "blindly" without knowing about transactions. If both client and server have this capability, then the server sends a merkle branch instead of a list of transactions.

What do you think, Luke-Jr and others? It could get bandwidth usage down for users who don't care to see the transactions.
This specific method is worse than Stratum's new alternative, since the pool would now know which miners it can safely give dirty jobs to.
I also don't think it's a good idea to encourage further pool centralization by making it any easier for miners to be negligent.
i.e. pools won't want to use GBT since it forces them to always send a TRUCKLOAD of extra data per work item even if the miners don't want it.

So why would any pool use it over Stratum?

... and where are these miners that even show ANYTHING of this TRUCKLOAD ...

Stratum gives the miner the option to allow the dump truck to back up to the house and dump a few hundred K of data that not a single miner reports anything about ... or the miner can simply not ask for it.
GBT sends the dump truck every time, contents that not a single miner reports anything about - the miner just uses it to build the merkle tree, which is far more data than what Stratum sends (Stratum only needs to send one side of the tree)

Yes it is a TRUCKLOAD ... and the API stats in cgminer will report this once I get in there and add it after GBT goes live in cgminer ...

You'll be able to load balance 3 pools - getwork LP, Stratum and GBT - and look at how different the numbers are :D :D :D :D

Yeah, my earlier post proving that no doubt got ignored already - but anyone using cgminer will be able to see it soon enough themselves.
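As a rough illustration of the data volumes being argued about here, using the ~800 kB longpoll figures reported earlier in the thread and an assumed ~1 kB Stratum notify (new-block traffic only, ignoring periodic template refreshes):
Code:
# Back-of-envelope bandwidth comparison. The GBT figure comes from the
# Eligius/EclipseMC longpoll measurements above; the Stratum figure is an
# assumption (prevhash + coinbase parts + merkle branch).

gbt_longpoll_kb   = 804.0   # kB per new-block longpoll (measured above)
stratum_notify_kb = 1.0     # kB per new-block notify (assumed)
blocks_per_day    = 144     # average blocks per day

print("GBT new-block traffic:     ~%.1f MB/day" % (gbt_longpoll_kb * blocks_per_day / 1024))
print("Stratum new-block traffic: ~%.1f MB/day" % (stratum_notify_kb * blocks_per_day / 1024))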

This is really just as bad as that idiot subSTRATA.
You must accept a TRUCKLOAD of txns - coz nobody will be safe without them.
Even if no program ever tells you anything about them ...
... and even if anyone can write a program to report exactly what ANY pool is doing ...
... though I guess that may be a bit far beyond your skill set to even understand how to do that.

---

Meanwhile ... there was an accusation thrown at me last week that I ignored but I guess I should clear it up.

Yes, my description of Luke-Jr's hiding of information said he was hiding it for longer than he really did.
Even wizkid mentioned it to me in an IRC discussion ... which I somehow missed when I wrote the post.
(Heh Mobius, if you read this ... just keep swimming ... :D)
Yes, no one needs to believe me, but I did actually completely miss that in the IRC discussion :P

But the reality is that he has been hiding information since 1-Jun about how his pool has been "not processing transactions". I also quoted the information that he did indeed post in his thread (I almost never go near his thread, though once when I posted there I deleted the posts after he had quoted them so I'd never have to go back ... one is linked to below)
Also, regarding the dice reject code, I'd be curious if wizkid would be so kind as to show us how it actually rejects SD transactions ...

Firstly, yes, Jr did post in his thread about the 32-txn limit back when he imposed it on his pool - a limit that lasted over 5 months and was still in place when wizkid took over:
https://bitcointalksearch.org/topic/m.968819

However, the actual post I thought about was this one:
https://bitcointalksearch.org/topic/m.934320
Where Jr says:
Quote
...
And it's not a matter of "ignoring" transactions, it's a matter of not processing them.
...
Eligius continues to employ aggressive anti-spam checks on feeless transactions, to avoid wasting charity on spammers. Because of the nature of spam detection, it is necessary to keep the algorithms used confidential
...
These algorithms have never been disclosed - which is really what I was referring to.
Yes, I made a mistake :P
The fact is that it "doesn't process" transactions based on his definition of the word "feeless" (coz his definition is not fee=0)

However, from the point of view of giving out high-level, means-next-to-nothing information ... yes, he said
Quote
FWIW, all pools have been experiencing higher stales and orphaned blocks due to the excessive transaction volume lately resulting from SatoshiDice abusing the blockchain (there are much cleaner ways to do the same thing). After our second set of 3 orphans in a row, I'm a bit on the annoyed end. For now, I am blocking transactions to 1dice* addresses and limiting our blocks to 32 transactions until we've caught up on the extra credit or at least have a viable alternative solution. I really hate to do this, as Eligius has traditionally been one of the most accepting mining pools, so any suggestions on other possibilities would be most welcome.
Which leans on the fact that Eligius was worse off than every other pool (none of the others changed to 32 txns per block, and none of the others had as many orphans as Eligius) - so yeah, blaming SD for his crappy software and using it as an excuse to worsen BTC transaction confirmation times for over 5 months really shows something about him ...
legendary
Activity: 2576
Merit: 1186
Almost implemented the first working version of GBT within cgminer.

This sort of behaviour tells the story well though:

Code:
 [2012-11-05 22:28:08] Accepted 23c0ba9a Diff 7/1 GPU 2 pool 2
 [2012-11-05 22:28:08] Accepted 163ec800 Diff 11/1 GPU 0 pool 2
 [2012-11-05 22:28:13] Stratum from pool 3 detected new block
 [2012-11-05 22:28:13] Accepted 61bdf4a7 Diff 2/1 GPU 3 pool 2
 [2012-11-05 22:28:13] Stale share detected, submitting as user requested
 [2012-11-05 22:28:14] Accepted 00feebca Diff 257/1 GPU 1 pool 2
 [2012-11-05 22:28:19] Rejected 70b76eb2 Diff 2/1 GPU 1 pool 2 (stale-prevblk)
 [2012-11-05 22:28:19] Rejected 47b723ef Diff 3/1 GPU 3 pool 2 (stale-prevblk)
 [2012-11-05 22:28:20] Rejected 9644eef7 Diff 1/1 GPU 2 pool 2 (stale-prevblk)
 [2012-11-05 22:28:21] Rejected 4cafa99d Diff 3/1 GPU 1 pool 2 (stale-prevblk)
 [2012-11-05 22:28:22] Rejected 093dc4cd Diff 27/1 GPU 0 pool 2 (stale-prevblk)
 [2012-11-05 22:28:23] GBT LONGPOLL from pool 2 requested work restart
 [2012-11-05 22:28:24] Accepted 375a5178 Diff 4/1 GPU 1 pool 2
 [2012-11-05 22:28:25] Accepted 047206f4 Diff 57/1 GPU 3 pool 2
 [2012-11-05 22:28:25] GBT LONGPOLL from pool 2 requested work restart
 [2012-11-05 22:28:30] Accepted 72bee5f8 Diff 2/1 GPU 0 pool 2
 [2012-11-05 22:28:31] Accepted 8e63856d Diff 1/1 GPU 2 pool 2

Now pool 3 here is slush on stratum (ping time 335ms), while pool 2 is EMC on GBT (ping time 225ms).

Note how stratum picks up the block change and notifies me 10 seconds before GBT does. However, this is not the GBT pool simply learning of the block change later, because it starts rejecting my shares as being from the previous block before I even got the longpoll from the GBT pool. Then of course there's a second longpoll with the transactions, which is not an unusual practice.
Confirming this analysis as accurate... for now. It shouldn't be this much of a difference though, so I suspect there's something going on server-side I need to look into.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Almost implemented the first working version of GBT within cgminer.

This sort of behaviour tells the story well though:

Code:
 [2012-11-05 22:28:08] Accepted 23c0ba9a Diff 7/1 GPU 2 pool 2
 [2012-11-05 22:28:08] Accepted 163ec800 Diff 11/1 GPU 0 pool 2
 [2012-11-05 22:28:13] Stratum from pool 3 detected new block
 [2012-11-05 22:28:13] Accepted 61bdf4a7 Diff 2/1 GPU 3 pool 2
 [2012-11-05 22:28:13] Stale share detected, submitting as user requested
 [2012-11-05 22:28:14] Accepted 00feebca Diff 257/1 GPU 1 pool 2
 [2012-11-05 22:28:19] Rejected 70b76eb2 Diff 2/1 GPU 1 pool 2 (stale-prevblk)
 [2012-11-05 22:28:19] Rejected 47b723ef Diff 3/1 GPU 3 pool 2 (stale-prevblk)
 [2012-11-05 22:28:20] Rejected 9644eef7 Diff 1/1 GPU 2 pool 2 (stale-prevblk)
 [2012-11-05 22:28:21] Rejected 4cafa99d Diff 3/1 GPU 1 pool 2 (stale-prevblk)
 [2012-11-05 22:28:22] Rejected 093dc4cd Diff 27/1 GPU 0 pool 2 (stale-prevblk)
 [2012-11-05 22:28:23] GBT LONGPOLL from pool 2 requested work restart
 [2012-11-05 22:28:24] Accepted 375a5178 Diff 4/1 GPU 1 pool 2
 [2012-11-05 22:28:25] Accepted 047206f4 Diff 57/1 GPU 3 pool 2
 [2012-11-05 22:28:25] GBT LONGPOLL from pool 2 requested work restart
 [2012-11-05 22:28:30] Accepted 72bee5f8 Diff 2/1 GPU 0 pool 2
 [2012-11-05 22:28:31] Accepted 8e63856d Diff 1/1 GPU 2 pool 2

Now pool 3 here is slush on stratum (ping time 335ms), while pool 2 is EMC on GBT (ping time 225ms).

Note how stratum picks up the block change and notifies me 10 seconds before GBT does. However, this is not the GBT pool simply learning of the block change later, because it starts rejecting my shares as being from the previous block before I even got the longpoll from the GBT pool. Then of course there's a second longpoll with the transactions, which is not an unusual practice.

-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/

I apologize, I missed that in all the noise.

I support your idea. Not sure if "suggested merkle root" is a typo. I'd suggest that on the wire this is a merkle branch that, when combined with your generated coinbase tx, gives you the merkle root for the block. You request work once, then submit block header + coinbase several times. The server will know which txes / merkle tree this belongs to by looking at your work ID.

I believe you are right that the majority will not care to see or change the transactions, so it is a good optimization for the common case. Also, until pools support changing the transactions, it will be a good optimization in every case.


Thank you very much. Yes it was the concept rather than the details, and branch is a better term. I'm thinking the mining software can support both modes and be started in confirm transaction mode only if desired by the user.

EDIT: or alternatively, randomly inject requests for full transactions.
legendary
Activity: 2730
Merit: 1034
Needs more jiggawatts
This specific method is worse than Stratum's new alternative, since the pool would now know which miners it can safely give dirty jobs to.

Ok, I didn't think of that. But really, what is the worst that can happen? Mining 5 txes instead of 500? You can see that after the fact, and I don't think that's a disaster.

Or maybe I am missing something that "dirty" jobs could contain?
legendary
Activity: 2576
Merit: 1186
I guess this could be improved by having the client include something in its capabilities list to tell the server it wishes to mine "blindly" without knowing about transactions. If both client and server have this capability, then the server sends a merkle branch instead of a list of transactions.

What do you think, Luke-Jr and others? It could get bandwidth usage down for users who don't care to see the transactions.
This specific method is worse than Stratum's new alternative, since the pool would now know which miners it can safely give dirty jobs to.
I also don't think it's a good idea to encourage further pool centralization by making it any easier for miners to be negligent.
legendary
Activity: 2730
Merit: 1034
Needs more jiggawatts

I apologize, I missed that in all the noise.

I support your idea. Not sure if "suggested merkle root" is a typo. I'd suggest that on the wire this is a merkle branch that, when combined with your generated coinbase tx, gives you the merkle root for the block. You request work once, then submit block header + coinbase several times. The server will know which txes / merkle tree this belongs to by looking at your work ID.

I believe you are right that the majority will not care to see or change the transactions, so it is a good optimization for the common case. Also, until pools support changing the transactions, it will be a good optimization in every case.
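For reference, combining the coinbase tx with a merkle branch to recover the block's merkle root is just repeated double-SHA256. A minimal sketch (byte order and the exact wire encoding are protocol details, so treat this as illustrative):
Code:
import hashlib

def dsha256(data):
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root_from_branch(coinbase_tx, branch):
    """Fold a merkle branch (list of 32-byte hashes) into the coinbase
    transaction's hash to recover the block's merkle root. The coinbase
    is transaction index 0, so it is always the left-hand input."""
    h = dsha256(coinbase_tx)
    for sibling in branch:
        h = dsha256(h + sibling)
    return h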