
Topic: Handle much larger MH/s rigs: simply increase the nonce size

legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Well, I guess I'm going to have to spend some time looking at these other options, because I see all sorts of issues with having the miner generate blocks (which is why I call it a hack): having to pass merkle trees around, running a memory pool (something even bitcoind still doesn't get perfectly right), inventing some protocol for reusing data already passed back and forth (and thus keeping track of that as well), increasing the network load (not decreasing it) by passing all this extra information, passing txns back and forth, and handling LPs and orphans. Every miner program is now going to have to deal with all of this, simply to get the miner to generate the coinbase txn - i.e. to do the work of the pool/bitcoind rather than the pool/bitcoind doing it - and also rather than fixing the actual problem: that the nonce will be too small when ASIC truly lands (next year, possibly?)

Again, increasing the nonce size is quite simply elegant and also the actual correct solution based on the design.

This argument about a hard fork seems to be a search for a way around the nonce solution, based on the assumption that hard forks cannot be done ... before the problem occurs ... which is exactly why I bring this up now and not later, when it has become a problem.

This is of course normal in any programming environment: on occasion we see things in advance, yet don't bite the bullet and make the required change - one some may consider difficult - until there's no time left to do it and everyone can clearly see the problem because it has already presented itself. So instead we build a complex workaround that is open to all sorts of issues - the biggest one in this case being that every miner program will have to implement code from bitcoind at whatever level is required (non-trivial), and more on top, to deal with passing this information around.

As for hard vs soft ... well, in April there were multiple >3-block forks due to the issues of updating and multiple release candidates - so really no worse than a hard fork, IMO, in terms of the number of people regularly hashing on bad forks back then.
That was of course caused by someone putting a poisoned transaction into the network to trigger exactly the problem that happened, and as a result it was extremely similar to a hard fork.

Edit: heh, let's see what eleuthria's protocol has to say when he defines it ...

Edit2: while you're at it - kill that retarded design of LP: keeping an idle socket connection open, sometimes for more than an hour ... wow, deepbit/tycho must have had a bad hangover when they came up with that idea.
legendary
Activity: 1750
Merit: 1007
I'm working with a few others on a draft to revise the way the mining protocol works.  The current outlook is very good, in that it should be able to support 256 TH/s -per device- while almost eliminating all network traffic (the total "getwork" data downloaded by the miner is ~1 KB -total- between each longpoll).  Additionally, the 256 TH/s per-device limit can be readily increased in *2^8 increments.

More information coming soon.  The protocol design will require pools to be redesigned if they want to adapt to the changing landscape that ASICs may bring (I still don't think we'll see BFL's claimed specs).  However, this protocol would "future proof" the pooled mining design.

For a miner to utilize the protocol, they would either need mining software with direct support, or a local proxy that interprets the new protocol and translates it for older miners.  In the coming weeks I will hopefully be able to post a complete spec for mining software developers to consider implementing.  Hopefully a proof-of-concept pool server will be available within the next 2 months.

This will require -no- change to Bitcoin's current protocols.  It is purely a change in the way pools interact with miners.
legendary
Activity: 1072
Merit: 1181
Again, the solution others are saying is to move block generation and txn selection to the miner software - that seems indeed like a hack to me.
A hack to solve a problem with a very clear and specific solution.

It is only a hack when considered from the viewpoint of the current infrastructure. There is no reason why you'd leave the hashing of the merkle root on the server instead of the client, especially as it is exactly the same operation (double SHA256).

Quote
The issue is - a fork.
Well, the equivalent of a fork was done in April (and yeah wasn't done very well) and that was for a much lesser reason.

As far as I know, there has not been a single hard fork in Bitcoin's history. The changes for BIP16 and BIP30 were "soft forks", which only made backward-compatible changes (i.e. only made some existing rules more strict). Even the much more severe bug fixes in July-August 2010 (see the CVEs) were in fact only soft forks. Soft forks are safe as soon as a majority of mining power enforces them.

Changing the serialization format of blocks or transactions, or introducing a new version for these, however does require a hard fork. Other changes that require a hard fork are changing the maximum block size, changing the precision of transaction amounts, or changing the mining subsidy function. All these need a much much higher level of consensus, as these require basically the entire Bitcoin network to upgrade (not just a majority, and not just miners). Everyone using an old client after the switch will get stuck in a sidechain with old miners (even if that is just 1% of the hashing power). If we ever do a hard fork (and we may need to), it will have to be planned, implemented and agreed upon years in advance.

Quote
Hmm, who's in control here ...

You are. Bitcoin is based on consensus, but you can only reach consensus as long as you can convince enough people there is a problem, and I personally still see this as a minor implementation inconvenience rather than a problem that will limit our growth.
sr. member
Activity: 389
Merit: 250
There's already an "extraNonce" field used in the coinbase transaction; incrementing it changes the merkle root and gives you a new batch of hashes to work on. In a recently generated coinbase, this value was 4294967295.
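That mechanism is easy to demonstrate in a few lines of Python - a sketch using dummy transaction data rather than real coinbase serialization, so the byte layouts here are purely illustrative:

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids: list) -> bytes:
    """Bitcoin-style merkle root: pair up hashes, duplicating the last on odd levels."""
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def fake_coinbase(extranonce: int) -> bytes:
    """Stand-in for a coinbase tx carrying a 4-byte extranonce in its script."""
    return b"coinbase-script" + extranonce.to_bytes(4, "little")

other_txids = [dsha256(b"tx1"), dsha256(b"tx2"), dsha256(b"tx3")]

root_a = merkle_root([dsha256(fake_coinbase(0))] + other_txids)
root_b = merkle_root([dsha256(fake_coinbase(1))] + other_txids)
assert root_a != root_b  # each extranonce value yields a distinct merkle root
```

Each increment costs only a handful of hashes on the server side, but buys the miner another full 32-bit nonce range to search.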

Maybe what we need is to update the way work is fetched to allow more efficient processing. The other block header fields - version, prev hash, timestamp (with the exception of rollntime), and target - don't change much, so why not package getworks to include one copy of those fields plus 4-100 merkle roots (depending on client speed)?

As far as generating merkle roots en masse: my laptop, with very average performance, could do ~1.8 MH/s back when that mattered, which is about 4 million sha256() rounds per second. I'm not intimately familiar with how the merkle roots are generated (call it n*log(n) hashes for n transactions, as a rough upper bound?), but if we call it 400 transactions we get a bit south of 4000 hashes per merkle root, and with heavy rounding and lots of assumptions every step of the way, that's about one thousand merkle roots generated per second on an economy two-core laptop.
Checking slush's pool shows 430 getworks/s handling 1.2 GH/s of mining power, which could be handled twice over on much less hardware than a decent server (granted, the server also has to do things like handle the miners connecting and run whatever backend is required). If you want to make sure you have enough capacity, I'm sure you could use a GPU for this and easily gain another order of magnitude over a CPU.

For a dedicated miner running 1 TH/s you would need to supply ~250 merkle roots/s, but if you're running 1 TH/s you could probably afford to mine solo (at least at current difficulty, and probably for a good while). Even on a pool, a condensed request like the one above would stay under 10 KByte/s (56 kbit dial-up is only slightly slower).
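The arithmetic behind those figures is just the hash rate divided by the 2^32 nonce space (the rates below are assumptions for illustration, not measurements):

```python
NONCE_SPACE = 2 ** 32  # hashes covered by one 32-bit nonce range

def roots_per_second(hashrate: float) -> float:
    """New merkle roots needed per second to keep a miner of this speed busy."""
    return hashrate / NONCE_SPACE

# 1 TH/s needs ~233 fresh roots per second (the post rounds this up to 250).
assert round(roots_per_second(1e12)) == 233
# At 32 bytes per root, that is ~7.3 KB/s of data - comfortably under 10 KByte/s.
assert round(roots_per_second(1e12) * 32 / 1024, 1) == 7.3
```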

Will pools be affected by ever-climbing hash rates? Sure. Will it matter in the long run? I doubt it. Will it require a fork or protocol change? Almost certainly not (the logistics of changing something this low-level in the blockchain would be a nightmare).
Maybe you could ask one of the large pool operators - slush or giga both come to mind offhand (and I know giga is already thinking about upgrading to ASIC and the changes that requires).
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Meanwhile ... why does the nonce exist?
From my understanding this is the reason, however, it will be too small if the network grows due to ASIC.

Because of the nonce, we only need to recalculate the merkle root once every 4 billion hashes, instead of for every hash. In the current infrastructure, that merkle root is calculated on the server (typically) while the hashes are calculated by the miner. This means that there is an interaction between server and miner every 4 billion hashes. But the actual calculation per merkle root is nothing compared to the 4 billion hashes (see my post above) the miner already does. If the requesting of work becomes the bottleneck, the work generation can simply be moved to the miner.

No, this is not an issue. No, there is no need to increase the nonce size. Yes, 64 bit nonces would have things slightly easier for us, but all is required is a slightly more complex mining infrastructure and this inconvenience is nothing compared to doing a hard fork.

i.e. the correct solution based on the bitcoin spec is indeed to increase the nonce size - and as I mentioned later, if it's increased it may as well be to 3 x 32 bits ... or even 4 for a nice round number ... though I doubt 3 would run out for at least many ... decades? Smiley

Again, the solution others are saying is to move block generation and txn selection to the miner software - that seems indeed like a hack to me.
A hack to solve a problem with a very clear and specific solution.

The issue is - a fork.
Well, the equivalent of a fork was done in April (and yeah wasn't done very well) and that was for a much lesser reason.

These ASICs don't exist yet, and seriously, the only real argument anyone seems to have against doing it properly is that BFL have announced their ASICs with a time frame.
Their last related announcement, last September on a similar subject, turned out to be an over-spec claim made from a simulation that didn't even exist, for a product they delivered more than 3 months late.

Hmm, who's in control here ...
legendary
Activity: 1072
Merit: 1181
Meanwhile ... why does the nonce exist?
From my understanding this is the reason, however, it will be too small if the network grows due to ASIC.

Because of the nonce, we only need to recalculate the merkle root once every 4 billion hashes, instead of for every hash. In the current infrastructure, that merkle root is calculated on the server (typically) while the hashes are calculated by the miner. This means that there is an interaction between server and miner every 4 billion hashes. But the actual calculation per merkle root is nothing compared to the 4 billion hashes (see my post above) the miner already does. If the requesting of work becomes the bottleneck, the work generation can simply be moved to the miner.

No, this is not an issue. No, there is no need to increase the nonce size. Yes, 64 bit nonces would have things slightly easier for us, but all is required is a slightly more complex mining infrastructure and this inconvenience is nothing compared to doing a hard fork.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Well ... 2-minute block times would also reduce block sizes almost 5 times on average ...
but that idea got chucked out when I brought it up last year Smiley
https://bitcointalksearch.org/topic/suggested-major-change-to-bitcoin-51504

Meanwhile ... why does the nonce exist?
From my understanding this is the reason, however, it will be too small if the network grows due to ASIC.
legendary
Activity: 1072
Merit: 1181
Let us look at this from a theoretical point of view, rather than from what the current infrastructure provides. The first pool that is still operational started in December 2010 - two years ago. By the time we could pull off a block format change we'd be at least two years out anyway, and at that point I'm sure the Bitcoin infrastructure will be very different from now.

Assume blocks reach their maximum size: 1000000 bytes, all the time. The smallest typical transactions are 227 bytes (1 input from a compressed pubkey, 2 outputs to addresses). That means a maximum of 4405 transactions.

In a merkle tree with 4405 elements, the leaves are 13 levels deep. That means that to generate a new piece of work (worth 4 billion double-SHA256 operations of hashing), you increment the extranonce in the first transaction (the coinbase) and hash your way back up to the merkle root. This requires 13 double-SHA256 operations. If this is offloaded to the GPUs/FPGAs/ASICs/QuantumCPUs/... that are *already* doing precisely that hashing operation on the block header anyway, it adds 0.0000003% overhead. The only thing they'd require is an occasional update of the list of transactions (at 4405 transactions per 10 minutes, that's about 7 per second). A phone connection with a modem suffices for that kind of traffic.

In short: nothing to worry about. The only problem is with the current infrastructure, which will evolve.
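The 13-hashes-per-work-unit figure can be sanity-checked with a small sketch; the helper functions below are illustrative toys, not bitcoind's actual API:

```python
import hashlib
import math

def dsha256(b: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def root_and_branch(leaves: list):
    """Merkle root plus the branch (sibling path) for leaf 0, the coinbase."""
    branch, level = [], list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        branch.append(level[1])  # sibling of the leftmost node at this level
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0], branch

def root_from_branch(coinbase_hash: bytes, branch: list) -> bytes:
    """Re-derive the root after changing only the coinbase: one hash per level."""
    h = coinbase_hash
    for sibling in branch:  # the coinbase is leftmost, so always the left child
        h = dsha256(h + sibling)
    return h

leaves = [dsha256(bytes([i])) for i in range(8)]
root, branch = root_and_branch(leaves)
assert root_from_branch(leaves[0], branch) == root
assert len(branch) == 3  # a depth-3 tree for 8 leaves
# A near-full block of ~4405 transactions sits 13 levels deep,
# hence 13 double-SHA256s per new work unit:
assert math.ceil(math.log2(4405)) == 13
```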
sr. member
Activity: 455
Merit: 250
You Don't Bitcoin 'till You Mint Coin
to the OP:
I think the idea is very reasonable.
For it to work, we need a transition period where blocks with a 32-bit nonce still work alongside blocks with a nonce of, say, 64 bits or whatever. The 32-bit nonce could be phased out after 4 to 8 years - plenty of time for hardware to phase out naturally anyway. Seems doable. Hope you write a detailed BIP.
kjj
legendary
Activity: 1302
Merit: 1026
You could always add a new block version number to exist alongside the current block version.  There is no need for a hard fork, nor to break everything in one day.

Doing that requires a hard fork, as it means some blocks will be valid to new nodes but not to old. The first block mined in the new system will kick out every old node permanently on a sidechain without new-version blocks.

Ahh, duh.  My bad, it would require a fork.

But not a fork that would break everything.  Non-upgraded miners could keep making the blocks they know how to make, while their control nodes would accept blocks from the network at the new version.
legendary
Activity: 1072
Merit: 1181
You could always add a new block version number to exist alongside the current block version.  There is no need for a hard fork, nor to break everything in one day.

Doing that requires a hard fork, as it means some blocks will be valid to new nodes but not to old. The first block mined in the new system will kick out every old node permanently on a sidechain without new-version blocks.
kjj
legendary
Activity: 1302
Merit: 1026
You could always add a new block version number to exist alongside the current block version.  There is no need for a hard fork, nor to break everything in one day.
legendary
Activity: 2730
Merit: 1034
Needs more jiggawatts
rollntime will do just fine. If you don't want to roll even 1 second into the future and you need 3 nonce ranges per second, just get 3 sets of rollable work data. Problem solved. Besides, being a few seconds in the future (or the past) is OK.

The other option besides rollntime is generating work locally and using getmemorypool (BIP22 as Pieter Wuille already mentioned) instead of getwork.

There's no need to change the format of bitcoin blocks.
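At the byte level, "rolling ntime" just means bumping the 32-bit timestamp inside the 80-byte header. The field values below are dummies, though the layout (version, prev hash, merkle root, time, bits, nonce) is the real one:

```python
import struct

def make_header(version: int, prev_hash: bytes, merkle_root: bytes,
                ntime: int, nbits: int, nonce: int) -> bytes:
    """Serialize an 80-byte Bitcoin block header (little-endian integer fields)."""
    return struct.pack("<I32s32sIII", version, prev_hash, merkle_root,
                       ntime, nbits, nonce)

BASE_TIME = 1_350_000_000  # arbitrary dummy timestamp

base = make_header(2, b"\x00" * 32, b"\x11" * 32, BASE_TIME, 0x1A05DB8B, 0)

# Each 1-second roll of ntime is a distinct header, i.e. a fresh 2^32 nonce range.
rolled = [make_header(2, b"\x00" * 32, b"\x11" * 32, BASE_TIME + i, 0x1A05DB8B, 0)
          for i in range(3)]

assert len(base) == 80        # the standard Bitcoin header size
assert base == rolled[0]
assert len(set(rolled)) == 3  # 3 seconds of rollntime = 3 nonce ranges
```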
legendary
Activity: 1260
Merit: 1000
What is poolserverj doing that allows it to scale?  You mean the local work generation?

I don't think PSJ can scale all that well in its current form, but I could be wrong.  When I investigated it (and ultimately rejected it as a back end), it had some serious limitations.  It may have improved since then, but my understanding was that the developer had abandoned it and it has not progressed in a very long time.

legendary
Activity: 1526
Merit: 1134
If you look at what poolserverj is doing, it's clear that centralized pools can scale up more or less indefinitely. No protocol changes are needed and thus none will occur.
legendary
Activity: 1072
Merit: 1181
"A normal CPU can generate work for several TH/s easily when implemented efficiently"
However, if we are talking more than an order of magnitude - then this statement is very questionable.

That is one single CPU. If work generation becomes too heavy for one CPU, do it on two. If that becomes too much, do it on a GPU. By the time a GPU can't generate work for the ASIC cluster behind it, it will be economically viable to move work generation to an ASIC as well.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
...
However, you still haven't convinced me there is a problem. The current getwork()-based mining requires new work every 4 billion hashes, yes. But when combined with local work generation, or even direct mining on top of getmemorypool(), there is no performance problem at all. A normal CPU can generate work for several TH/s easily when implemented efficiently. I believe a few pools already use this.

Unless it becomes clear that there is an inherent problem with the current system that will limit mining operation in the future (as opposed to implementation issues because of the current state of operations), I see no reason at all for a hard fork.
Well the ASIC 'discussions' at the moment are suggesting 1TH/s devices ... these being the devices that are 10x what most people would normally use.

Looking at current technology, we have ~200 - ~800 MH/s devices in GPUs and FPGAs (of course there are lower as well, and a few slightly higher), and looking around pools it is common to find users with ~2 - ~5 GH/s who have spent a few thousand dollars on hardware.

gigavps received 4 FPGA rigs in the past couple of days that hash at around 25 GH/s each - an order of magnitude up in both performance and cost.

So if this ASIC performance change is even close to what is being suggested - an order of magnitude is expected - then people like gigavps will be running multiple single devices in the hundreds of GH/s.

Now this already puts a question mark over the statement:
"A normal CPU can generate work for several TH/s easily when implemented efficiently"

However, if we are talking more than an order of magnitude - then this statement is very questionable.

The point of all this is not that some people will be running faster hardware (yeah, it may be an issue for them); it's that the whole network will be (at least) an order of magnitude faster at hashing, since those who don't take up the new hardware will be gone - with difficulty in the tens of millions instead of millions, the cost of mining with current hardware is prohibitive compared to the return.

The other solutions to this say that bitcoind code should be moved into the miner - decisions about how to construct a block header and what information to put into it.
i.e. a performance issue is solved by moving code out of bitcoind and into the end mining software.
Pools already do this anyway (though not in the miner) to make their own decisions about which txns to include and how to construct the coinbase, but the idea that the miner code itself should take this on is a very poor design solution ... almost a hack.

On top of all this is the increase in network requirements on the current code base.
An order of magnitude higher is, I think, really not an option.
legendary
Activity: 2128
Merit: 1073
kano, here's a way to strengthen your argument.

I believe you are a programmer and an actual co-developer of a mining program. Please run a test showing how many block headers per second can be generated on a contemporary CPU when the additional "nonce" space is actually put in the "coinbase" field.

ASIC mining chips will probably stay the same as long as the position of the 32-bit nonce field doesn't change within the block header. It makes sense to implement only the innermost loop in hardware; any outer loops should be implemented in the software driving the hashing hardware.
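For what it's worth, a rough interpreted-Python stand-in for the benchmark 2112 asks for might look like the sketch below. All data here is dummy, a real C implementation would be far faster, and the actual rate depends entirely on the machine:

```python
import hashlib
import time

def dsha256(b: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

# Fake depth-13 merkle branch, matching a near-full block of ~4405 transactions.
BRANCH = [dsha256(bytes([i])) for i in range(13)]

def headers_per_second(duration: float = 0.2) -> float:
    """Roll an extranonce, rehash the merkle path, build a header; count the rate."""
    n = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        root = dsha256(b"dummy-coinbase" + n.to_bytes(4, "little"))
        for sibling in BRANCH:  # 13 double-SHA256s up to the merkle root
            root = dsha256(root + sibling)
        # Dummy 80-byte header: version + prev hash + root + time/bits/nonce.
        header = b"\x02\x00\x00\x00" + b"\x00" * 32 + root + b"\x00" * 12
        assert len(header) == 80
        n += 1
    return n / duration

rate = headers_per_second()
assert rate > 1000  # even interpreted Python manages thousands of headers/s
```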
hero member
Activity: 700
Merit: 507
...
Adding another field is unacceptable since it will break ASIC miners.
Name one ASIC miner ...
BFL SC, OpenASIC initiative, Vlad's something, Reclaimer.
Not to mention old Artforz's 350nm ASICs.
Yeah, I meant one that exists.

If bitcoin decisions are made based on unsubstantiated announcements by companies like BFL - man, are we in deep shit investing in BTC.

Might as well announce an ASIC that can contain up to three nonces and is modular enough ... it would have the same backing as those of BFL and the other companies.

Sorry, turning down good ideas because some small FPGA assembler (haven't seen an ASIC yet!) announces some wonder-hasher with unmeetable estimates is not a way to discuss the future of the whole project. What if someone had decided to change the bitcoin algorithm because GPU miners were invented? Should we end up like SolidCoin, where important decisions are made based on childish ideologies?
legendary
Activity: 1072
Merit: 1181
Bitcoin is certainly a lot more than just mining, but that doesn't mean the mining business is not part of it. While it is profitable, economies of scale will always lead to research and development of more efficient (and specialized) hardware. You may argue against the potential for centralization this brings, but from an economic point of view it is inevitable. A hard fork that would render their investments void is a problem, and it would undermine trust in the Bitcoin system.

Yes, cryptographic primitives get broken and better ones are being developed all the time. I've already said that a security flaw is a very good reason for a hard fork - presumably, few people with an interest in Bitcoin will object against a fix for a fatal security flaw.

However, you still haven't convinced me there is a problem. The current getwork()-based mining requires new work every 4 billion hashes, yes. But when combined with local work generation, or even direct mining on top of getmemorypool(), there is no performance problem at all. A normal CPU can generate work for several TH/s easily when implemented efficiently. I believe a few pools already use this.

Unless it becomes clear that there is an inherent problem with the current system that will limit mining operation in the future (as opposed to implementation issues because of the current state of operations), I see no reason at all for a hard fork.