
Topic: Handle much larger MH/s rigs : simply increase the nonce size - page 4. (Read 10078 times)

legendary
Activity: 905
Merit: 1012
Right now you can use part of the version field as nonce and nothing bad will happen AFAIK.
No, just.... no.
legendary
Activity: 1260
Merit: 1000
The BFL units have made provisions for a larger nonce range in our firmware already, so if/when this happens we should be able to accommodate it without any problems.  The rate we chew through the nonce has already caused some concern internally, so it's been a design feature almost from the beginning.  Personally, I would like to see a 64-bit field for the nonce; that would give us a lot of expansion room, and it would take a truly monster machine to chew through that in a reasonable time frame.  I don't see monster machines like that happening anytime soon.

Maaku: The nonce range and difficulty are two entirely separate issues.  One is a problem for the hardware, the other for the software.  Both need to be addressed (and the software side has been addressed with variable difficulty and GBT/Stratum) - but the hardware issue still remains as Kano has outlined, and I think his proposals are the best long-term solution so far; everything else is just a hack that is prone to problems and to being overrun as hardware becomes faster.
donator
Activity: 994
Merit: 1000
In this regard, any miner has the power to change the protocol semantics.
Obviously, having consent from all parties involved is better.
If fields are not being used right now, they can obviously be hijacked for another purpose. That shouldn't break the protocol, it's a reinterpretation of the protocol. However, it's bad practice because you never know how many independent projects use the same trick. Then you run into issues when you want to merge functionality.
hero member
Activity: 555
Merit: 654
Messing with the version field (even resizing it) means that old clients will see a bucket full of fuck ...

I can't follow your line of reasoning.

If you force Block.nVersion >= 2, old clients will see "garbage" and will do nothing (or do the same as when a 0.6.x version sees a version-2 block: just process it).

Then you can keep the 2 less significant bytes for the version, and use the 2 most significant bytes as nonce. Any miner has the power to do it, and nobody can stop them from doing it.

When Bitcoin version 0.7.1 is out, the dev team will need to change the code to reflect this change, splitting the nVersion field in two.
If they try to force the 2 MSBytes of nVersion to zero, they will force a hard fork: there will already be a block in the chain with nVersion >= 65536, and old clients will accept new blocks with nVersion >= 65536, so they must follow the community and accept the new semantics.

In this regard, any miner has the power to change the protocol semantics.

Obviously, having consent from all parties involved is better.

best regards,
 Sergio.
sr. member
Activity: 389
Merit: 250
No need to change the protocol. Let it be an engineering problem.
Well, it's not a change in the protocol implementation. It's a change in the protocol semantics.

Right now you can use part of the version field as nonce and nothing bad will happen AFAIK.
Even that is still a hard fork. What you have to worry about is "How will an old client see this?" When the answer is "Like something broke" you have a hard fork.

Messing with the version field (even resizing it) means that old clients will see a bucket full of fuck and have no idea what's going on. In particular, mixing it with a fairly random field like the nonce means old clients start seeing random version numbers on blocks. Even leaving the version number the same and shifting the field over to make room in the future (making the two spare bytes meaningless until the switch) still left-shifts the version bytes, so old clients suddenly see version 65536. If you instead use the first two bytes, there really isn't much change until the abrupt changeover, when the majority agrees to switch and starts using those extra two bytes for something they weren't before.
hero member
Activity: 555
Merit: 654
No need to change the protocol. Let it be an engineering problem.

Well, it's not a change in the protocol implementation. It's a change in the protocol semantics.

Right now you can use part of the version field as nonce and nothing bad will happen AFAIK.
sr. member
Activity: 389
Merit: 250
No need to change the protocol. Let it be an engineering problem.

AFAIK a simple solution is to implement in the mining rig the capability to do merkle tree reorganization. Then if you have 10 transactions to play with, you could create 10! = 3,628,800 permutations of the same block. If you have fewer than 10 transactions, the mining software could create placeholder transactions with unique hashes.
I've seen blocks come out after a minute with dozens of transactions. As long as the transaction rate stays relatively high I don't see this being a huge issue. Maybe the first set of blocks has fewer permutations to it, but not a huge issue.

As for having mining hardware do the merkle tree computations: sticking a simple microprocessor in near the upstream communication link would be trivial for anyone making these things, and hopefully it's something they've already thought of (at least the ability to adapt to small changes like the version number being incremented). The old system of fetching and reporting on difficulty-1 shares is pretty dead when ASICs come out; that's no shock. If I remember correctly, Eclipse was considering switching to difficulty-10 shares a few weeks ago. It may also be the case that the old getwork standard just won't work anymore and something closer to the hardware has to manage the merkle root.
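The merkle-reordering idea can be sketched in Python. This is a simplified illustration: real Bitcoin txids are 32-byte little-endian double-SHA256 hashes and the coinbase must stay first (so n transactions really give (n-1)! orderings); this toy example ignores both points.

```python
import hashlib
from itertools import permutations

def dsha256(b):
    # Bitcoin's double SHA-256
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids):
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:              # Bitcoin duplicates the last hash on odd levels
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# 4 dummy txids -> every ordering yields a distinct merkle root,
# i.e. 4! = 24 distinct work units from the same transaction set.
txids = [dsha256(bytes([i])) for i in range(4)]
roots = {merkle_root(list(p)) for p in permutations(txids)}
assert len(roots) == 24
```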
donator
Activity: 994
Merit: 1000
No need to change the protocol. Let it be an engineering problem.

AFAIK a simple solution is to implement in the mining rig the capability to do merkle tree reorganization. Then if you have 10 transactions to play with, you could create 10! = 3,628,800 permutations of the same block. If you have fewer than 10 transactions, the mining software could create placeholder transactions with unique hashes.
hero member
Activity: 555
Merit: 654
There is a possible FIX to add more nonce space while maintaining backwards compatibility:


We can take 16 bits from the block version field to be used as more nonce space:

4    version    uint32_t    Block version information, based upon the software version creating this block

split it as:

2    version    uint16_t    Block version information, based upon the software version creating this block
2    extra_nonce uint16_t    More nonce space


I think it will not break compatibility. Old nodes will only tell the users to upgrade.

What do you think?
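A minimal sketch of the proposed split, assuming the low 16 bits keep the version and the high 16 bits carry the extra nonce (the field names come from the post; the exact bit layout is an assumption):

```python
def pack_version(version, extra_nonce):
    # Pack a 16-bit version and a 16-bit extra_nonce into one uint32 nVersion.
    assert 0 <= version < 2**16 and 0 <= extra_nonce < 2**16
    return (extra_nonce << 16) | version

def unpack_version(nversion):
    # New clients split the field; old clients keep reading the whole uint32.
    return nversion & 0xFFFF, nversion >> 16

nv = pack_version(2, 0xABCD)
assert unpack_version(nv) == (2, 0xABCD)
# To an old client, any nonzero extra_nonce just looks like nVersion >= 65536,
# i.e. "some future version" it doesn't understand yet.
assert nv >= 65536
```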
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
The devices are simple - hard-wired into silicon - they do the hash process and cycle the nonce.

Okey doke.  I thought they had some firmware that knew how to talk over USB (or ethernet or whatever), too.

Of course - you couldn't talk to them without that - should I respond to what you are really saying, or just what you wrote? :P
Yes I do know what you are doing :P

Anyway: it still is simply a ~2-stage hash of 80 bytes, with 4 of the bytes cycling from 0 to 0xffffffff and the 80 bytes being fed into the ~double-SHA256 - be it 1, 2, 4, 8 or 256+ ~double-SHA256 streams.

Yes you could put a merkle processor into the silicon and a coinbase construction processor - and it would then have to match what the pools handle ... and with crap like GBT and Stratum spinning around and not being able to decide what to do, what will be next?
That variable target is not worth risking hundreds of thousands of dollars to implement when it could change tomorrow - backward compatibility takes a back seat in bitcoin ...
legendary
Activity: 1526
Merit: 1134
Yes, they do.

If USB1 is too slow to send new work to a 1TH ASIC fast enough then there's a simple solution - don't use USB1. If you're capable of running a rig of that speed, you're capable of shoveling it work fast enough. This really isn't a problem the Bitcoin core devs need to solve; the ball is in the ASIC developers' court.
legendary
Activity: 1652
Merit: 2301
Chief Scientist
The devices are simple - hard-wired into silicon - they do the hash process and cycle the nonce.

Okey doke.  I thought they had some firmware that knew how to talk over USB (or ethernet or whatever), too.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
The devices just sit then spinning a nonce range.
Nothing more, nothing less.

Implementing the bitcoin protocol for dealing with merkle trees and changing the coinbase is not something an ASIC company would consider doing unless they were looking to spend a lot of money every time they had to rewrite it (when their current hardware becomes doorstops)

The devices are simple - hard-wired into silicon - they do the hash process and cycle the nonce.
legendary
Activity: 1652
Merit: 2301
Chief Scientist
I thought the consensus was that the mining devices just need a little extra software onboard to increment extranonce and recompute the merkle root.

I don't know nuthin about hardware/firmware design, or the miner<->pool communication protocols, but it seems to me that should be pretty easy to accomplish (the device will need to know the full coinbase transaction, a pointer to where the extranonce is in that transaction, and a list of transaction hashes so it can recompute the merkle root).
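What's described here amounts to patching an extranonce into the coinbase and re-folding the merkle branch. A hedged Python sketch (the 4-byte extranonce width and the fixed offset are illustrative assumptions, not a standard; the coinbase is assumed to be the leftmost leaf, as in Bitcoin):

```python
import hashlib
import struct

def dsha256(b):
    # Bitcoin's double SHA-256
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def roll_merkle_root(coinbase, extranonce_offset, extranonce, branch):
    # Patch a 4-byte little-endian extranonce into the coinbase bytes.
    cb = (coinbase[:extranonce_offset]
          + struct.pack('<I', extranonce)
          + coinbase[extranonce_offset + 4:])
    h = dsha256(cb)
    # Fold the merkle branch: the coinbase sits leftmost, so each step
    # hashes the running value with the next sibling on the right.
    for sibling in branch:
        h = dsha256(h + sibling)
    return h

cb = b"\x00" * 16
# Different extranonces yield different merkle roots, i.e. fresh work units.
assert roll_merkle_root(cb, 4, 1, []) != roll_merkle_root(cb, 4, 2, [])
```

The device only needs the coinbase bytes, the offset, and the sibling hashes; it never needs the full transaction list.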
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
So meanwhile - 3 months later - and no one has actually really done anything with any sort of future consideration about this.

Once these 1TH/s mining devices turn up, they will be mining 232 difficulty-1 shares a second - or ~4.3 ms a share.
THEY CAN'T MINE HIGHER DIFFICULTY SHARES SINCE THE NONCE SIZE IS ONLY 32 BITS

Now I'm not sure who thought that's OK, but sending 60 bytes of work to a USB device over serial at 115200 baud takes ... ~5ms - so with a device that was able to hash a nonce range at 1TH/s we are already well beyond a problem into:
More than half your mining time (if it was a single device) would be spent doing ... nothing.
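Those figures check out under the usual assumptions (a difficulty-1 share is 2^32 hashes; serial is 8N1 framing, so ~10 bits on the wire per byte):

```python
# Back-of-the-envelope check of the numbers above.
hashes_per_share = 2**32                     # one difficulty-1 share / getwork
shares_per_sec = 1e12 / hashes_per_share     # a 1 TH/s rig
ms_per_share = 1000 / shares_per_sec
bits_on_wire = 60 * 10                       # 60-byte work item, 8N1 framing
serial_ms = bits_on_wire / 115200 * 1000

assert round(shares_per_sec) == 233          # ~232-233 shares per second
assert 4.2 < ms_per_share < 4.4              # ~4.3 ms per nonce range
assert 5.1 < serial_ms < 5.3                 # ~5.2 ms just to send the work
```

So delivering one work item over 115200-baud serial takes longer than exhausting it: the device idles more than half the time.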

Of course 1TH/s rigs are only a few months away ........ though they probably don't hash a nonce range that fast ...
Give it a year (if BTC hasn't died due to people ignoring this sort of stuff) the devices will simply be stunted due to the poor nonce implementation in BTC (yeah you all remember MSDOS ...)

Even implementing this is quite straightforward: allow for 2 types of blocks, each with a different version number, the second one available from a future date, with a nonce size of 128 bits instead of 32.
Give it a 3 to 6 month time frame (well, we already wasted 3 months doing nothing about it)

The latest bunch of implementations are not even related to a solution to this problem.
No one seems to care about it so I guess we'll hit a brick wall some time in the not too distant future and then the bitcoin devs will suddenly have to hack the shit out of bitcoin and implement a hard fork in a short time frame and screw it up like they've done in the past with soft forks.

Or ... as I mentioned before ... they could show a little forward planning ... but 3 months thrown away so far ...
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Yeah, with a 64-bit nonce, if the device is big enough it can power through the equivalent of 4 billion of today's getworks a second ... but with a single getwork under the change I'm suggesting
... though as I mentioned, a 128-bit nonce would be best, to future-proof it for ... a very long time

Then high difficulty shares will also mean that fewer shares will be returned.

The overall result of managing the two properly (bigger nonce and higher difficulty shares) will mean that BTC can handle VERY large network growth for a VERY long time
sr. member
Activity: 389
Merit: 250
I do not understand - the higher-than-1-difficulty shares should handle all these problems, shouldn't they?
That may be part of the solution, but the other problem is that a 4GH/s device can run through an entire getwork (4 billion hashes) in about one second (a SC Jalapeno at current specs goes through a whole getwork in ~1.15 seconds). So even very small miners will need to request more work. Higher-difficulty shares just lower the number of found shares. The really fast machines like the 1TH/s device power through ~233 getworks per second, so more resources will be required to support these in the future.

Actually, come to think of it, higher-difficulty shares would just increase variance.
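The getwork-exhaustion arithmetic behind this, assuming one getwork covers the full 32-bit nonce range:

```python
def seconds_per_getwork(hashrate_hs):
    # One getwork is exhausted after the full 32-bit nonce range: 2^32 hashes.
    return 2**32 / hashrate_hs

# A 4 GH/s device burns a getwork in ~1.07 s ...
assert 1.0 < seconds_per_getwork(4e9) < 1.1
# ... and a 1 TH/s device needs ~233 getworks every second.
assert round(1 / seconds_per_getwork(1e12)) == 233
```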
hero member
Activity: 531
Merit: 505
I do not understand - the higher-than-1-difficulty shares should handle all these problems, shouldn't they?
legendary
Activity: 1072
Merit: 1181
As for hard vs soft forks ... well, in April there were multiple >3-block forks due to the issues of updating and multiple release candidates - so really no worse than a hard fork, IMO, in the number of people hashing on bad forks back then.
That was of course caused by someone putting a poisoned transaction into the network to cause exactly the problem that happened, and as a result it was extremely similar to a hard fork.

A hard forking change is one that causes an infinite split between old and new nodes, and is in no way comparable to any change we've ever done.
legendary
Activity: 1750
Merit: 1007
Edit: heh, let's see what eleuthria's protocol has to say when he defines it ...

Edit2: while you're at it - kill that retarded design of LP - keeping an idle socket connection open, sometimes for more than an hour ... wow, deepbit/tycho must have had a bad hangover when they came up with that idea

The new protocol will be based on a single TCP socket connection between the miner and the pool (or the proxy software and the pool).  All data is asynchronous.  Only one package of work will need to be sent from the pool to the miner between updates.  Updates would be either:  Traditional longpolls, or a list of new transactions.  It eliminates the current mess of a protocol where miners open new connections for work requests/submissions, and then hold a separate one open just to get notice of a new block.  Everything will use a single persistent connection.

I'm hoping to have this more formalized before I publish any kind of draft protocol for public comment/changes.  I didn't quite expect as much progress as we've had in the last few hours.  It's been an exciting couple of hours to say the least!
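The single-connection, asynchronous design described above can be sketched as newline-delimited JSON messages over one persistent socket, with the receive side tolerating partial reads (the method names here are purely illustrative; the actual protocol was still being drafted at the time of this post):

```python
import json

def encode_msg(method, params):
    # One message per line over the persistent TCP connection.
    return (json.dumps({"method": method, "params": params}) + "\n").encode()

def decode_stream(buf):
    # Asynchronous reads can split or merge messages, so split on newlines
    # and return the complete messages plus any trailing partial data.
    *lines, rest = buf.split(b"\n")
    return [json.loads(l) for l in lines if l], rest

# A complete notification followed by a partial one, as a single TCP read:
buf = encode_msg("work.notify", ["job1"]) + b'{"method":"work.set_diff'
msgs, rest = decode_stream(buf)
assert msgs[0]["method"] == "work.notify"
assert rest.startswith(b'{"method"')   # kept until the rest arrives
```

The design choice this illustrates: with framing on one long-lived socket, the pool can push new work or new transactions at any moment, replacing both polling getwork and the separate long-poll connection.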