
Topic: Mining protocol bandwidth comparison: GBT, Stratum, and getwork (page 3)

sr. member
Activity: 369
Merit: 250
((...snip...))
With your code commits you reinforce centralization by marginalizing anyone who can't or doesn't want to use GCC and has implementation ideas differing from the ones you espouse.

The marginalization of Windows users and developers (like Casascius) is a very glaring example: lots more people would contribute to the project if the core developers would actually download Microsoft Visual Studio Express, purge the code base of GNU-isms, and make sure that all future changes developed on Linux don't break Visual Studio builds.

+1

... and that would make signed binaries much easier too:

Bitcoin Signed Binaries (posted May 29, 2011, and STILL not fixed, because I can't be arsed to hack such things against a flaky MinGW GCC build)

Edited to add:

((...snip...)) it's not like MSVS is free software - so you're asking free individuals to become less free (by agreeing to MSVS's terms) so that you can avoid using free software? I don't get it.

Visual Studio Express 2012 ---> With Visual Studio Express tools, you can build the next great app for Windows 8, Windows Phone, and the web. Best thing about them? They’re entirely free.

^I'm pretty sure you don't agree with their definition of "free" though, so whatever LOL
legendary
Activity: 2128
Merit: 1073
We have a very suspicious community that will continually check the pools' actions.
No disrespect intended for your thoughtful response, but this made me chuckle. Deepbit was (is?) frequently attempting to mine forks of its own blocks. It's only Luke's paranoia code that caused anyone to actually notice it. Look at what happened with the BIP16 enforcement— discussed and announced months in advance, yet there were popular pools that didn't update for it— some (e.g. 50BTC) for as long as a month, and they kept hundreds of GH/s mining during that time. A lot of malicious activity is indistinguishable from the normal sloppiness of Bitcoin services, and what does get caught takes hours (or weeks, or months...) for people to widely notice, much less respond to... an attack event can be over and done with far faster than that.

The only way to have any confidence that the centralized pools aren't single points of failure is making them so they aren't— so that most malicious activity causes miners to fail over to other pools that aren't misbehaving... but most of those checks can't be implemented without actually telling miners what work they're contributing to. I'll leave it to Luke to point to the checks he currently implements and sees being implemented in the future.

Quote from: 2112
With your code commits you reinforce centralization by marginalizing anyone who can't or doesn't want to use GCC and has implementation ideas differing from the ones you espouse.
So many possible responses… (1) You must have me confused with someone else— I write portable software. (2) Show me this person who "can't" use GCC. (3) Not officially supporting VC is more about being conservative and making the software testable, especially since we have a significant shortage of testing resources on Windows (and Microsoft's standards conformance is lackluster, to say the least)— that a compiler bug caused a miscompilation which ate someone's coins, while the compiler the tester used was fine, is little consolation. (4) Though you do seem to have me confused for your employee: while I'm happy for you to use things I write, I'm certainly not writing them to your specifications without a sizable paycheck. (5) If you'd like to submit some clean portability fixes to the reference client, I'll gladly review them. (6) Still, having heterogeneous software is the direct opposite of people blindly copying code with so little attention that they can't be bothered to do a little work in their own environment, so changes that carry risk or complexity may not be worth it if they're only for the sake of helping forks and blind copying. (7) Finally, why have you made this entirely offtopic reply here? The thread is about protocols between pools and miners, and has basically nothing to do with the reference client not providing you with MSVC project files.

I'm quoting in full just in case. In my experience, the various attempts at disparaging Microsoft software stem from a lack of understanding of its capabilities and of the requirements it fulfills in many markets. Close to half of the bitcoind code is just a re-implementation of a database (but not-really-a-database), of which a multitude is available on Windows, together with a plenitude of available APIs. Probably close to half of the bitcoin-qt code is just a re-implementation of the data-bound controls that are taught in the first semester of Visual Basic, C#, or C++ programming.

I actually admire Gavin Andresen for his willingness to simply admit that he's not familiar with the various available database/financial APIs, and that learning them would distract him too much from the short-term goals he sees for Bitcoin.

Anyone can also note that various moral and ideological opponents of Microsoft suddenly change their stance when talking about Apple. And Apple's OSX/iOS is an even more prison-like system than Microsoft Windows is.

But let's get back to the supposed superior security of GBT and the value that such security provides for Bitcoin. Let's look at the recent successful double-spends against SatoshiDICE losing bets. If GBT were really security-oriented, we would have heard about this from many simultaneous GBT users. Instead, we heard about it voluntarily from the cheater himself. The list of would-be double-spenders would be freely circulating here on this forum.

I'm having trouble believing anyone who simultaneously claims to have a long-term financial-security outlook yet is oblivious to the attempts to abuse the protocol happening right now.

Combined with the above is the fact that you seem to pop up in every thread discussing alternate Bitcoin implementations and protocols and start disparaging their authors.

I don't know what your and Luke's goals are, but I'm getting more and more convinced that they aren't the ones you are openly defending here. There is a possibility that you are completely incorruptible, like Felix Dzerzhinsky, and that you will show up in your armoured train anywhere you observe some counter-revolutionary activity trying to corrupt the ideals of the Bitcoin revolution.

Edit: I may be suffering from a case of mixed revolutions. Substitute Maximilien de Robespierre and the guillotine in the above paragraph.
hero member
Activity: 686
Merit: 564
No disrespect intended for your thoughtful response, but this made me chuckle.  Deepbit was (is?) frequently attempting to mine forks of its own blocks. It's only Luke's paranoia code that caused anyone to actually notice it.
Actually, I and a number of other people noticed that Deepbit was broken entirely by accident, because it broke MPBM's stale-work detection code every time Deepbit started mining two different forks.
legendary
Activity: 2128
Merit: 1073
This is why I, and most other Bitcoin developers, work to improve standardization and make Bitcoin more friendly to multiple competing implementations maintained by different people. Remember that GBT is the standard developed by free cooperation of many different people with different projects, whereas stratum was the protocol developed behind closed doors by a single individual.
The Stratum family of protocols was not developed behind closed doors. Its development started right here, with a lively discussion:

https://bitcointalksearch.org/topic/stratum-overlay-network-protocol-over-bitcoin-55842

From the very beginning it was ignored by the core development team because of its key features:

1) it allows a clean break from the polling-only (RPC) behaviour deeply embedded in the architecture of Satoshi's client;

2) it shows the way forward that opens up after abandoning the legacy of long-poll (one of the most horrible hacks in the Bitcoin milieu) and correctly using the two-way transport ability of a single TCP/IP socket.

Those things must have been a complete cultural shock to the core development team: a protocol that not only acknowledges the essential asynchronicity of the Bitcoin network, but exploits it to reduce network resource usage.

I'm going to assume that developers still living in a world where everything has to be polled for will spare no means in trying to disparage anyone from outside their group.

Bitcoin is essentially asynchronous, and anyone who tries to hide that fact is going to be working on a very bad software project. Completely asynchronous designs like 0MQ or FIX are currently too advanced for the average Bitcoin implementer. Stratum sits somewhere in between, and is a way for the average Bitcoin implementer to learn modern software architecture.
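
As an illustration of that single-socket, two-way style, here is a minimal sketch of a stratum client in Python. The pool host, port and worker credentials are placeholders, and error handling and share submission are omitted:

Code:
import json
import socket

# One TCP connection carries both our requests and the pool's
# unsolicited push notifications: no polling, no long-poll hack.
sock = socket.create_connection(("pool.example.com", 3333))  # placeholder pool
f = sock.makefile("rw")

def send(msg):
    f.write(json.dumps(msg) + "\n")
    f.flush()

send({"id": 1, "method": "mining.subscribe", "params": []})
send({"id": 2, "method": "mining.authorize", "params": ["worker", "pass"]})

# Read line-delimited JSON forever; mining.notify arrives whenever the
# pool has new work, without the client ever asking for it.
for line in f:
    msg = json.loads(line)
    if msg.get("method") == "mining.notify":
        job_id, prevhash = msg["params"][0], msg["params"][1]
        print("new job", job_id, "on top of", prevhash)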

legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Lots of good discussion, and I love the flip-flop point-counterpoint going on here.

Third, there has still not been any valid explanation of how sending the miner all the transactions in their entirety actually improves the security of the network. Unless someone is running mining software and a local bitcoin node, comparing the values from each, deciding which transactions overlap and which valid transactions the pool has chosen to filter out, and then determining that the data covers enough of the transaction list to be a true set of transactions, having the transactions does not serve any purpose. Pool-validity software that does this should be developed, and people who care to confirm the pool is behaving should use it. Luke's software does not remotely do this to verify the transactions. It simply grabs them and, if they don't match the template, claims the pool is doing something untoward. The transactions could be anything. It also completely ignores the fact that the vast majority of miners run their mining software on machines that aren't actually running bitcoin nodes with which to check the transactions. While the main bitcoin devs might wish this to be the case, it's not remotely the reality and is not going to become it.

Anyone care to comment on this? We can argue about transaction times and network usage, but those are all trivial compared to this. The above issue has to be addressed first, and then the little stuff can be worked out (in whichever protocol and proper implementation of said protocol).
Or it could be admitted by the GBT zealots that anyone who can program can write a program to do exactly this, without even needing the miner to do it.
It's just a few simple lines of code to hook a module into bitcoind and see what transactions are available and what transactions were put into each block.

Not only that, but doing this would easily point out all these imaginary evil pools that gmaxwell and his boss Luke-Jr say exist, and make it blatantly simple to show that a pool is doing these nefarious things; anyone would then have the simple proof to bring out the light of truth and salvation about these evil pools ... and then everyone mining on them would shout praise be to Luke for being the BTC saviour - everyone go crucify the saviour ... :P

However, there is also a complete and major fallacy in the idea that either of these methods will fully resolve any such security problem.
They both have the same problem of not knowing the list of transactions that any particular bitcoind contains ... ... ...
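
For what it's worth, the standalone check kano describes really is short. Below is a minimal sketch, assuming a local bitcoind with JSON-RPC enabled on 127.0.0.1:8332 and placeholder credentials; a real checker would track the mempool continuously, since a transaction can legitimately arrive between the snapshot and the block:

Code:
import json
from base64 import b64encode
from urllib.request import Request, urlopen

RPC_URL = "http://127.0.0.1:8332"              # local bitcoind (placeholder)
AUTH = b64encode(b"rpcuser:rpcpass").decode()  # placeholder credentials

def rpc(method, *params):
    # Minimal JSON-RPC call against the local node.
    body = json.dumps({"id": 0, "method": method,
                       "params": list(params)}).encode()
    req = Request(RPC_URL, body, {"Authorization": "Basic " + AUTH,
                                  "Content-Type": "application/json"})
    return json.loads(urlopen(req).read())["result"]

# Snapshot what our node would put in a block...
mempool = set(rpc("getrawmempool"))

# ...then, when the pool finds a block, see what it actually contained.
tip = rpc("getblockhash", rpc("getblockcount"))
block_txs = set(rpc("getblock", tip)["tx"][1:])  # skip the coinbase

unknown = block_txs - mempool
print(len(unknown), "transactions the pool included that we never saw")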
legendary
Activity: 952
Merit: 1000
Lots of good discussion, and I love the flip-flop point-counterpoint going on here.

Third, there has still not been any valid explanation of how sending the miner all the transactions in their entirety actually improves the security of the network. Unless someone is running mining software and a local bitcoin node, comparing the values from each, deciding which transactions overlap and which valid transactions the pool has chosen to filter out, and then determining that the data covers enough of the transaction list to be a true set of transactions, having the transactions does not serve any purpose. Pool-validity software that does this should be developed, and people who care to confirm the pool is behaving should use it. Luke's software does not remotely do this to verify the transactions. It simply grabs them and, if they don't match the template, claims the pool is doing something untoward. The transactions could be anything. It also completely ignores the fact that the vast majority of miners run their mining software on machines that aren't actually running bitcoin nodes with which to check the transactions. While the main bitcoin devs might wish this to be the case, it's not remotely the reality and is not going to become it.

Anyone care to comment on this? We can argue about transaction times and network usage, but those are all trivial compared to this. The above issue has to be addressed first, and then the little stuff can be worked out (in whichever protocol and proper implementation of said protocol).
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
The "Bytes Received" counter is cgminer is simply what CURL tells us it has (sent and) received.

If you think there is some problem with that - then feel free to point out what it is - or go bug the CURL developers and tell them their code is wrong.
No, you failed to read cURL's documentation, which describes what it actually does. cURL does not have a counter for bytes sent/received on the network.
...
No, you failed (as usual, you said nothing yet again) to state that your issue is that the data may be compressed, and thus the bytes on the network may (or may not) be fewer.

However, if you bothered to read the curl documentation yourself, it says:
Quote
CURLINFO_SIZE_UPLOAD
Pass a pointer to a double to receive the total amount of bytes that were uploaded.

CURLINFO_SIZE_DOWNLOAD
Pass a pointer to a double to receive the total amount of bytes that were downloaded. The amount is only for the latest transfer and will be reset again for each new transfer.

So ... yes, I can add another counter for network bytes, which MAY be different depending upon the pool and the compression options available.
I'll add it soon.

The current counter is still correct.

You're still hiding things.
Typical and common.
You know this depends on a number of settings, but don't want people to know that they may well still be sending and receiving 50x the data with GBT vs Stratum depending on those settings.
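
For the curious, the counters in question are per-transfer totals of the decoded body, so they can legitimately differ from bytes on the wire when compression is in play. A minimal sketch using pycurl against a hypothetical pool URL:

Code:
import pycurl
from io import BytesIO

buf = BytesIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, "http://pool.example.com:8332/")  # hypothetical pool
c.setopt(pycurl.WRITEFUNCTION, buf.write)
c.setopt(pycurl.ENCODING, "gzip")  # request compression, as a miner might
c.perform()

# SIZE_DOWNLOAD counts the decoded body of the latest transfer only;
# with compression the bytes on the wire can be far fewer.
print("decoded bytes:", c.getinfo(pycurl.SIZE_DOWNLOAD))
print("uploaded bytes:", c.getinfo(pycurl.SIZE_UPLOAD))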
legendary
Activity: 2576
Merit: 1186
Note that while kano is correct that updating the template less often delays the first confirmation of transactions sent in that period, it's nowhere near as relevant as he claims. The difference between 30 seconds and 120 seconds* is 90 seconds, a minute and a half. That's the extent of the confirmation delays as well. The fact that blocks can take that much time just to propagate across the network makes it even less relevant. If "oh no, I sent my transaction 90 seconds too late!" is more of a concern to you than "oh no, someone double-spent to me and I'm scammed!", then I don't know what to say.

* No, he isn't right about stratum vs GBT update times being 30/120 - that's really entirely pool-dependent (and none I know of are as high as 120 in practice). I'm just using it as a worst-case example.
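
The arithmetic above is easy to check: a transaction arriving at a uniformly random moment within a refresh window of T seconds waits T/2 on average, and T at worst, before it can enter a template. A quick sketch using the example values:

Code:
# Example refresh intervals from the discussion above (seconds).
for T in (30, 120):
    print("refresh %3ds: average extra wait %5.1fs, worst case %ds"
          % (T, T / 2.0, T))
# Worst-case difference: 120 - 30 = 90 seconds, small against the
# ~600-second average block interval.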
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
.... you are increasing transaction confirm times by 4 times with GBT versus Stratum.....

... blah blah blah ...

-wk

P.S. - The OP was actually about bandwidth usage, not confirmation times... FWIW, I'm at 0.49 shares per 2KB with stock-settings bfgminer on Eligius using GBT with my four x6500s (~1.5GH/s share-acceptance rate), after about 10 hours... which is actually better than Luke-Jr's result, for whatever reason. Seems the stock setting is a scantime of 60 seconds and an expiry of 120 seconds... which IIRC is the same as cgminer.
No it's not the same - YRI

Again read what you copied from me: "increasing" - yeah tricky word that.

As ckolivas pointed out above, yes I'm referring to how long you think work should be worked on before getting new work with possibly new transactions.

How long you hold work determines exactly that: your effect on transaction confirm times.
Stratum on the original stratum pools, and GBT in cgminer, use the figure of 30s to decide when new work should be sent/received.
BarbieMiner GBT uses 120s (not 80s), thus his GBT increases it to 4x what cgminer does.
legendary
Activity: 2576
Merit: 1186
The "Bytes Received" counter is cgminer is simply what CURL tells us it has (sent and) received.

If you think there is some problem with that - then feel free to point out what it is - or go bug the CURL developers and tell them their code is wrong.
No, you failed to read cURL's documentation, which describes what it actually does. cURL does not have a counter for bytes sent/received on the network.

Simply saying it is wrong, oddly enough, doesn't mean much to me, coz I have had to deal with your blatant lies so often that I put no credence in anything you say without clear proof.

Even though I regularly provide the details of my arguments to you, you usually provide none at all, so I certainly have no interest in your statements without any proof, since most of your arguments are clearly FUD.
You're projecting.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
...
And lastly, kano suggests that LJR jimmied the test to make his protocol look better.
Kano is, as usual, a liar. Ironically, he added a "Bytes Received" counter to cgminer git that significantly misreports bandwidth usage. I reported this to him, but apparently all he cares about is making GBT look bad.

Edit: When I posted the OP, I was actually expecting a "haha, I told you so" response from Kano. But seriously, he is extremely agreement-resistant.
Of course I am agreement-resistant when you fudge the results.
I don't care if the results make you look good or bad.
Anyone who posts results that are rigged should expect a negative response from all but their lapdogs.

The "Bytes Received" counter is cgminer is simply what CURL tells us it has (sent and) received.

If you think there is some problem with that - then feel free to point out what it is - or go bug the CURL developers and tell them their code is wrong.

Simply saying it is wrong, oddly enough, doesn't mean much to me, coz I have had to deal with your blatant lies so often that I put no credence in anything you say without clear proof.

Even though I regularly provide the details of my arguments to you, you usually provide none at all, so I certainly have no interest in your statements without any proof, since most of your arguments are clearly FUD.
legendary
Activity: 2576
Merit: 1186
With your code commits you reinforce centralization by marginalizing anyone who can't or doesn't want to use GCC and has implementation ideas differing from the ones you espouse.

The marginalization of Windows users and developers (like Casascius) is a very glaring example: lots more people would contribute to the project if the core developers would actually download Microsoft Visual Studio Express, purge the code base of GNU-isms, and make sure that all future changes developed on Linux don't break Visual Studio builds.
Um, why should GCC users be responsible for MSVS compatibility? The reason the codebase(s) aren't compatible is that there are no developers who want to deal with MSVS. Also, it's not like MSVS is free software - so you're asking free individuals to become less free (by agreeing to MSVS's terms) so that you can avoid using free software? I don't get it.

But there is another avenue to such surreptitious centralization: a core development team that is full of "not invented here" and "our way or highway".
This is why I, and most other Bitcoin developers, work to improve standardization and make Bitcoin more friendly to multiple competing implementations maintained by different people. Remember that GBT is the standard developed by free cooperation of many different people with different projects, whereas stratum was the protocol developed behind closed doors by a single individual.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
...
The thread is about protocols between pools and miners,
...
Pity that seems to have not been of importance to either you or your boss Luke-Jr
staff
Activity: 4284
Merit: 8808
We have a very suspicious community that will continually check the pools' actions.
No disrespect intended for your thoughtful response, but this made me chuckle. Deepbit was (is?) frequently attempting to mine forks of its own blocks. It's only Luke's paranoia code that caused anyone to actually notice it. Look at what happened with the BIP16 enforcement— discussed and announced months in advance, yet there were popular pools that didn't update for it— some (e.g. 50BTC) for as long as a month, and they kept hundreds of GH/s mining during that time. A lot of malicious activity is indistinguishable from the normal sloppiness of Bitcoin services, and what does get caught takes hours (or weeks, or months...) for people to widely notice, much less respond to... an attack event can be over and done with far faster than that.

The only way to have any confidence that the centralized pools aren't single points of failure is making them so they aren't— so that most malicious activity causes miners to fail over to other pools that aren't misbehaving... but most of those checks can't be implemented without actually telling miners what work they're contributing to. I'll leave it to Luke to point to the checks he currently implements and sees being implemented in the future.

Quote from: 2112
With your code commits you reinforce centralization by marginalizing anyone who can't or doesn't want to use GCC and has implementation ideas differing from the ones you espouse.
So many possible responses… (1) You must have me confused with someone else— I write portable software. (2) Show me this person who "can't" use GCC. (3) Not officially supporting VC is more about being conservative and making the software testable, especially since we have a significant shortage of testing resources on Windows (and Microsoft's standards conformance is lackluster, to say the least)— that a compiler bug caused a miscompilation which ate someone's coins, while the compiler the tester used was fine, is little consolation. (4) Though you do seem to have me confused for your employee: while I'm happy for you to use things I write, I'm certainly not writing them to your specifications without a sizable paycheck. (5) If you'd like to submit some clean portability fixes to the reference client, I'll gladly review them. (6) Still, having heterogeneous software is the direct opposite of people blindly copying code with so little attention that they can't be bothered to do a little work in their own environment, so changes that carry risk or complexity may not be worth it if they're only for the sake of helping forks and blind copying. (7) Finally, why have you made this entirely offtopic reply here? The thread is about protocols between pools and miners, and has basically nothing to do with the reference client not providing you with MSVC project files.
legendary
Activity: 2128
Merit: 1073
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
Firstly, the example data is contrived, and demonstrates nicely the confusion between technology and application. The test data does not demonstrate the bandwidth difference intrinsic to stratum, getwork and GBT, but the application of those technologies within Luke-Jr's software, under the constraints of his particular test. The intrinsic bandwidth requirements of each protocol can easily be calculated, and I implore users who care about bandwidth to test it for themselves - with cgminer, either using traditional tools or using the cgminer API to get the results, as Kano suggested.

Second, as to what Kano means by "transaction times": I believe he is referring to how often a block template (for GBT) or a set of merkle branches (for stratum), based on the currently queued transactions, is received by the mining software. If said template is sent every 30 seconds on average by stratum, but received every 120 seconds on average by GBT, there are potentially 90 seconds' more worth of transactions that never make it into the next block solve, the way it's implemented in Luke's software. In cgminer I receive the template every 30 seconds with GBT, to match stratum. Only when the protocols are "equal" in terms of their likelihood of propagating transactions (since this is what bitcoin is about) should the bandwidth be compared. Pretending the bandwidth doesn't matter when one implementation can use over 100MB per hour is just naive.
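
As a rough illustration of that calculation, a sketch with illustrative (not measured) message sizes: a stratum mining.notify is typically a few hundred bytes, while a full GBT template scales with the mempool and can plausibly run to hundreds of kilobytes:

Code:
def hourly_mb(msg_bytes, refresh_s):
    # One connection's download per hour for a given refresh interval.
    return msg_bytes * (3600.0 / refresh_s) / 1e6

print("stratum, ~400 B every 30 s :", hourly_mb(400, 30), "MB/h")
print("GBT, ~200 kB every 30 s    :", hourly_mb(200000, 30), "MB/h")
print("GBT, ~200 kB every 120 s   :", hourly_mb(200000, 120), "MB/h")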

Third, there has still not been any valid explanation of how sending the miner all the transactions in their entirety actually improves the security of the network. Unless someone is running mining software and a local bitcoin node, comparing the values from each, deciding which transactions overlap and which valid transactions the pool has chosen to filter out, and then determining that the data covers enough of the transaction list to be a true set of transactions, having the transactions does not serve any purpose. Pool-validity software that does this should be developed, and people who care to confirm the pool is behaving should use it. Luke's software does not remotely do this to verify the transactions. It simply grabs them and, if they don't match the template, claims the pool is doing something untoward. The transactions could be anything. It also completely ignores the fact that the vast majority of miners run their mining software on machines that aren't actually running bitcoin nodes with which to check the transactions. While the main bitcoin devs might wish this to be the case, it's not remotely the reality and is not going to become it.

We have a very suspicious community that will continually check the pools' actions. Yes, history has shown pools may do something untoward - though it's usually about scamming miners with get-rich-quick schemes, not about harming the bitcoin network. If security were to be forced upon miners by bitcoin itself, then a valid non-pooled mining solution should exist within the client itself, one that does not require people to have at least 1% of the network to be feasible. Solo mining is pointless for any sole entity any more. Unless the bitcoin devs decide that something resembling p2pool makes its way into bitcoind, allowing miners of any size to mine, pooled mining is here to stay. If or until that is the case, pooled mining is a proven solution that the miners have embraced. P2pool is great in principle, but the reality is that it is far from the simple plug-in-the-values-and-mine scenario that miners use pools for. Pretending mining should be dissociated from bitcoin development just makes it even more the job of the pools to find solutions to mining problems as they arise. GBT failed to provide a solution early enough and efficient enough in the aspects miners care about, and stratum came around much more miner-focussed. You can pretend that GBT is somehow superior, but there isn't a single aspect that makes it attractive to miners or pool ops, and people are voting with their feet as predicted.
staff
Activity: 4284
Merit: 8808
The whole post above is well argued. But I wanted to highlight just this fragment.
When you "mine" using getwork or unsecured-stratum you aren't a miner from the perspective of Bitcoin. You're just more or less blindly selling computing time to an untrusted third party (the pool) who is using it to mine.
This is a standard dialectic argument, used when a software vendor tries to disparage a software-as-a-service (SaaS) vendor.

Luke runs a service, though it's not much of a money-making service (his pool doesn't take a cut of the subsidy). I don't— and I don't earn a cent from working on Bitcoin (except for a few donations here and there— of which I've had none in months). I don't think this has squat to do with motivations, but if you're looking to disparage _someone_ based on motivations, you're looking at the wrong parties.

Bitcoin is a distributed and decentralized system. You can make any distributed system non-distributed by just having some centralized parties run it. If that is what you want, Paypal is a _ton_ more efficient— all this trustless distributed support has costs, you know— and more carefully audited and trustworthy than basically _any_ centralized Bitcoin service. I happen to think that Bitcoin is worth having, but only worth having if it offers something that more efficient options can't; without decentralization, bitcoin is just a scheme for digicoin pump-and-dumpers to make a buck. So looking out for Bitcoin's security is my only motivation on this subject. People using GBT or verified stratum for mining against a centralized pool aren't even running any software I wrote, and they're certainly not paying me for it.

I'm quite happy that people provide services for Bitcoin, even though services carry centralization risk. But as a community we should demand that things be run in a way that minimally undermines Bitcoin, and we should reject the race to the bottom that would eventually leave us with something no better than a really byzantine implementation of paypal.

Quote
Do I want to delve into programming, compiling and reinstalling every time I need to change the rules in my mining operation?
And yet this isn't a question anyone is forced to ask themselves, even P2Pool users. And at the same time, people on centralized mining setups still need to maintain their software (e.g. it was pretty hilarious to watch big pools lose half their hashrate when phoenix just stopped submitting blocks at some difficulties).
legendary
Activity: 2576
Merit: 1186
So let me get this straight, this whole debate over GBT vs stratum boils down to the issue of "network security"?
Not the entire issue; that's just one important difference. GBT is also more flexible in terms of things like miners being able to manipulate the blocks they mine, and pools being able to set up their own failover/load balancing. There's also the GBT Full Node Optimization, which I wasn't able to test since no miner implements it yet; it would enable miners to use their local bitcoind to cut down even more on bandwidth use without sacrificing network security.

I'm all for network security, but I usually leave the details for people smarter than me. However, I have to ask the same questions as Kano: Does having every miner on a pool verify the transactions actually increase security?
It decentralizes block creation. With every miner checking their blocks, a pool would be noticed trying to perform any attack, and miners could switch to another pool or (with BIP 23 Level 2 support) make blocks that don't perform the attack.
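
A minimal sketch of what "every miner checking their blocks" can look like in practice, reusing the rpc() helper from the sketch further up the thread and assuming tmpl is the decoded result of a BIP 22 getblocktemplate call from the pool (field names per BIP 22; note that a legitimate pool may still include transactions our node hasn't seen yet):

Code:
import hashlib

def txid(raw_hex):
    # txid = double SHA-256 of the raw transaction, displayed byte-reversed.
    h = hashlib.sha256(hashlib.sha256(bytes.fromhex(raw_hex)).digest()).digest()
    return h[::-1].hex()

def audit_template(tmpl, rpc):
    # 1. The pool should be building on the tip our own node sees.
    tip = rpc("getblockhash", rpc("getblockcount"))
    if tmpl["previousblockhash"] != tip:
        print("warning: pool is not building on our best block")
    # 2. Every transaction the pool included should be known to our node.
    mempool = set(rpc("getrawmempool"))
    unknown = [t for t in tmpl["transactions"]
               if txid(t["data"]) not in mempool]
    print(len(unknown), "template transactions not in our mempool")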

And also how does mining at a centralized pool using GBT = decentralized?
Centralized vs decentralized lies in where the blocks are being created/controlled. With GBT, blocks are always ultimately controlled by the miner (there are some rules on what the miner can do, but this is necessarily part of any pool, centralized or not).

However, I have to take into account 3 things: LJR is the only person I've seen advocating these changes. LJR is also the author of GBT and BFGminer, the competing solution to the problem he seems to be the only person caring about.
While I am the "BIP champion" of BIPs 22 and 23, the GBT protocol itself is the combined work of many authors over the course of 6 months in 2012. Adoption isn't that bad, either.

And lastly, kano suggests that LJR jimmied the test to make his protocol look better.
Kano is, as usual, a liar. Ironically, he added a "Bytes Received" counter to cgminer git that significantly misreports bandwidth usage. I reported this to him, but apparently all he cares about is making GBT look bad.

Edit: When I posted the OP, I was actually expecting a "haha, I told you so" response from Kano. But seriously, he is extremely agreement-resistant.
legendary
Activity: 2128
Merit: 1073
The whole post above is well argued. But I wanted to highlight just this fragment.
When you "mine" using getwork or unsecured-stratum you aren't a miner from the perspective of Bitcoin. You're just more or less blindly selling computing time to an untrusted third party (the pool) who is using it to mine.
This is a standard dialectic argument, used when a software vendor tries to disparage a software-as-a-service (SaaS) vendor.

In this case we have Luke-Jr & gmaxwell as the conventional software vendors. They hawk their complex software and the updates to it. Although the software is open-sourced, it is so complex and obfuscated that only a very few will be able to sensibly maintain it. The service component (PoW pooling) is pushed to the background.

slush, ckolivas and kano together form a SaaS coalition. They hawk first of all their service; the software part is the simplest possible, required only to relay the work that needs to be performed.

The "who's better" decision shouldn't be made on the bandwidth cost alone. The most rational discriminator is each miner's administrative overhead costs and skills. Simply speaking you the question you want to ask yourself is: Do I want to delve into programming, compiling and reinstalling every time I need to change rules in my mining operation?
staff
Activity: 4284
Merit: 8808
So let me get this straight, this whole debate over GBT vs stratum boils down to the issue of "network security"? I'm all for network security, but I usually leave the details for people smarter than me. However, I have to ask the same questions as Kano: Does having every miner on a pool verify the transactions actually increase security? And also how does mining at a centralized pool using GBT = decentralized?

I suspect you've already reached a number of conclusions and that I'm just wasting my time responding, since you're already ignoring the points upthread, but in case I'm incorrect:

GBT was originally created by ForrestV (under the original name, getmempool) so that p2pool could exist. Today _all_ current pool software uses it to talk to bitcoind. Luke took up the task of writing the formal specification for it and made a number of important improvements to close some denial-of-service risks and to make it more useful for general pool software; because some of the updates weren't completely compatible, and because the old name was confusing, it was renamed. GBT is part of the bitcoin ecosystem: developed cooperatively in public, reviewed and used by many people, part of the reference software, with a clear, publicly agreed specification. It's not just "Luke's thing".

Quote
It seems to me that if these "security" issues are actually a non-issue, then stratum + cgminer = the best solution. I say cgminer because LJR admitted to having to edit his program just to use the same stratum implementation as cgminer. We get the same amount of "security" as the old GW, but with far less network usage and far greater scalability. On the other hand, if these issues are actually legitimate, then it would appear that GBT is the way to go, as it does use a little less network usage.
When you "mine" using getwork or unsecured-stratum you aren't a miner from the perspective of Bitcoin. You're just more or less blindly selling computing time to an untrusted third party (the pool) who is using it to mine.  If you were to call yourself a miner in that model we might as well say that AMD is a miner with 90% of the hashpower, declare bitcoin a failure, and go home. Smiley

The security of bitcoin really depends on the assumption that mining— real mining, not just people selling cpu cycles— will be _well_ distributed, and that the people running it will have a long-term investment in bitcoin (e.g. mining hardware) that would be damaged by doing anything hostile, like helping someone double-spend. If it's not well distributed, there is a lot of risk that attackers can take over big chunks of it by hacking systems or holding guns to people's heads. If the miners are not invested in Bitcoin, they could get short-term returns by mining maliciously. These risks exist even if they don't give the attacker a majority of the hash power; "a lot" is enough to steal from people, especially if combined with network-isolation attacks (see the bitcoin paper on attack success probability as a function of hashrate). People hype "51%", but >50% is just where the attack success probability becomes 100% (assuming infinite attack duration). Especially with the subsidy down to 25 BTC, a high-value attack doesn't have to have high odds of success to be very profitable.
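
The hashrate-vs-success relationship referred to here is the catch-up probability from section 11 of the bitcoin paper; a small sketch of it, where q is the attacker's share of the hashpower and z the number of confirmations:

Code:
from math import exp, factorial

def attacker_success(q, z):
    # Probability an attacker with hashpower share q ever catches up
    # from z blocks behind (Satoshi's whitepaper, section 11).
    p = 1.0 - q
    lam = z * q / p
    return 1.0 - sum((lam ** k * exp(-lam) / factorial(k))
                     * (1.0 - (q / p) ** (z - k))
                     for k in range(z + 1))

# Well below 50% of the hashpower, success odds are already material:
for q in (0.10, 0.30, 0.45):
    print(q, [round(attacker_success(q, z), 4) for z in (1, 2, 6)])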

If these assumptions— that actual mining (validation) is distributed and that miners are economically motivated to be honest— are incorrect, then bitcoins— and mining hardware— should be worthless.

So, big centralized pools are a terrible thing for bitcoin. They violate these assumptions, and even if the poolops themselves are honest people (well, lots of people thought pirate was honest...), they are still an easy target for attack, electronically or at gunpoint. But pools are easy and convenient, and people are short-sighted, hoping that someone else does the work of keeping bitcoin secure while they do the easiest thing.

For a while the core developers have been strongly encouraging people to mine using p2pool, which solves these (and other) issues. Luke's pool was already popular with the more-geeky crowd, and it lost a number of big miners to P2pool. Rather than denying the reality of the above serious concerns, Luke looked for a way of combining the advantages: make mining as easy to use and as flexible in payout schemes as fully centralized pools, but not a danger to the Bitcoin ecosystem. He realized that the getmempool call that p2pool and other poolservers were already using could be used by miners directly, allowing miners to "trust but verify" and substantially limiting the attacks a malicious party controlling a pool could perform.

(Luke has also done some clever work in BFGminer to decrease the exposure even with getwork: for example, it watches the previous block in the block header and prevents pools from asking you to mine a fork against blocks you previously mined ... but getwork simply doesn't carry enough information to do much better validity checks than that.)
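
A rough sketch of the kind of previous-block watching described in that parenthetical, assuming data_hex is the hex "data" field of a getwork response; getwork's per-word byte order, and legitimate reorgs that a real implementation must distinguish from attacks, are glossed over here:

Code:
recent_parents = []   # parent-hash fields of work the pool has given us

def parent_of(data_hex):
    return data_hex[8:72]   # header bytes 4..36 hold the parent block hash

def check_work(data_hex):
    parent = parent_of(data_hex)
    if parent in recent_parents and parent != recent_parents[-1]:
        # The pool went back to an older tip: it is asking us to orphan
        # work we already built past (possibly a block we ourselves found).
        raise RuntimeError("pool appears to be mining a fork")
    if not recent_parents or parent != recent_parents[-1]:
        recent_parents.append(parent)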

Con is a brilliant coder, by everyone's acknowledgement, but he's also a little bit coin-operated. Luke can be difficult to deal with at times— but he's thinking long term, and he's concerned with the future viability of Bitcoin for everyone. When you delegate _Bitcoin's_ security to either Con or Luke, you might be delegating it to people smarter than you either way— but of the two, only in Luke's case are you delegating it to someone who has actually demonstrated that he has thought much about it, or _cares_ much about it.

People could debate whether a centralized pool using GBT and users with BFGminer is "decentralized" or not, or GBT mining's merit relative to p2pool. But debates over terms are boring, and a comparison to p2pool is somewhat apples-to-oranges (I say p2pool is more decentralized, especially since the poolops can still cheat miners out of payments in the non-p2pool options, and Luke argues with me). What matters is that properly used, GBT-based mining can close many of the risks centralized pools pose to Bitcoin. And fortunately slush was convinced to add extensions to stratum that pick up many of the same advantages.

Yes, getting the extra data to validate the work takes some more bandwidth, but all the bandwidths involved in mining (or bitcoin, for that matter) are small. Because GBT-style mining allowed massive bandwidth reductions for high-speed miners, many people had simply expected miners to switch to it and never notice the increases, since they were paid for ten times over by the improvement; but then slush showed up with stratum, which had the efficiency but not (initially) the security improvements of GBT... In any case, the bandwidths involved are so small that they shouldn't be relevant to almost any miner (especially since they don't increase much with hashrate). They may be relevant to mining pools— but miners are paying for their services and ought to demand the best behavior for their long-term future.