
Topic: Getting rid of pools: Proof of Collaborative Work - page 3. (Read 1861 times)

member
Activity: 168
Merit: 47
8426 2618 9F5F C7BF 22BD E814 763A 57A1 AA19 E681
Hi, I have a stupid question; I'm sure I'm missing something:


  • Finalization Block: It is an ordinary bitcoin block with some exceptions
    • 1- Its merkle root points to a  Net Merkle Tree
    • 2- It is fixed to yield a hash that is as difficult as target difficulty * 0.02
    • 3- It has a new field which is a pointer to (the hash of) a non-empty Shared Coinbase Transaction
    • 4- The Shared CoinBase Transaction's sum of difficulty scores is greater than or equal to 0.95
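For reference, the quoted structure could be sketched roughly as follows; all field names and types here are my own guesses for illustration, not taken from the proposal.
Code:
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FinalizationBlock:
    """Rough sketch of the structure quoted above; field names are
    illustrative assumptions, not the proposal's wire format."""
    version: int
    prev_block_hash: bytes
    net_merkle_root: bytes        # 1- Merkle root points to a Net Merkle Tree
    timestamp: int
    bits: int                     # 2- tuned so the hash meets ~0.02 of the target difficulty
    nonce: int
    shared_coinbase_hash: bytes   # 3- new field: hash of a non-empty Shared Coinbase Transaction

@dataclass
class SharedCoinbaseTx:
    """4- The difficulty scores of its items must sum to >= 0.95."""
    items: List[Tuple[bytes, float]]   # (payout script, difficulty score) pairs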

I cannot see any reward for the finalization block.
Where is the incentive to mine a finalization block?
legendary
Activity: 988
Merit: 1108
I am not much of a probability theory expert, but for now I'm persuaded by @tromp's calculations:

NOTE that you overlooked my fix where ln(n) should instead be ln(1/mindiff).
legendary
Activity: 1456
Merit: 1174
Always remember the cause!
I am not much of a probability theory expert, but for now I'm persuaded by @tromp's calculations:
For any minimum relative difficulty* set for every share, mindiff, the average number of shares per (Finalized) block, n, would satisfy this formula: n*ln(n)*mindiff = 1

* minimum relative difficulty is the ratio by which PoCW reduces the calculated network difficulty and is the least difficulty allowed for submitted shares.

It yields n * ln(n) = 1/mindiff and suggests a better-than-linear (sub-linear) relationship between the two: a decrease in mindiff (more utilization) causes less-than-proportional growth in the average number of shares.

Notes:
1- @tromp's assumption about shares being not overly lucky reinforces this formula even more (i.e. the average weight can be a bit higher, hence n is always a bit smaller).

2- The exact sum of the n shares in this protocol is 0.93 instead of 1 (there is one fixed 2% share for the finalized block itself), so the formula should be modified so that n satisfies n*ln(n)*mindiff = 0.95.

I just did some trivial calculations using the latter formula:
For mindiff set to 10^-4, we will have n < 1,320
For 10^-5, we have n < 10,300
For 10^-6, we have n < 83,800

This is so encouraging: setting the minimum share difficulty to one million times easier than the network difficulty, we would need only 83,800 shares per block on average, instead of the 1,320 for the current 0.0001 (far fewer than a proportional 100x increase would imply). Note that the difficulty is already reduced by a factor of 10 as a result of the decreased block time of one minute, so we are talking about 10 million times more utilization compared to the currently proposed 100 thousand times.
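As a sanity check, these figures can be reproduced by solving n*ln(n)*mindiff = 0.95 numerically. The little script below is only a verification aid of mine (the 0.95 target follows the modified formula above; the solver itself is not part of the proposal):
Code:
import math

def shares_per_block(mindiff, target_weight=0.95):
    """Solve n * ln(n) * mindiff = target_weight for n with Newton's method."""
    rhs = target_weight / mindiff
    n = max(rhs / math.log(rhs), 2.0)          # rough starting guess
    for _ in range(50):
        n -= (n * math.log(n) - rhs) / (math.log(n) + 1.0)
    return n

for md in (1e-4, 1e-5, 1e-6):
    print(md, round(shares_per_block(md)))
# prints roughly 1320, 10280 and 83800 -- in line with the figures above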

And yet we don't have to decrease mindiff (currently set to 10^-4) that aggressively; we would prefer moderate adjustments instead, which is an even more promising situation.

Based on these assessments, it is fair to assert that Proof of Collaborative Work is scalable and can achieve its design goal despite continual growth in network hashrate and difficulty, at the cost of only a sub-linear increase in demand for computing and networking resources (and no increase in other resources). The design goal is keeping the difficulty of shares low enough to let average and small miners participate directly in the network without being hurt by phenomena such as mining variance or their inherent proximity disadvantage, fixing one of the known flaws of PoW: pooling pressure.

I guess we might start thinking about a self-adjustment algorithm for mindiff (the minimum difficulty figure for issued shares).
No rush for this though; the core proposal is open to change and it is just a long-term consideration.

This hypothetical algorithm should have features such as:

- Not being too dynamic. I think the adjustment shouldn't happen frequently; I suggest once every year.

- Not being linear. The increase in network hashrate comes from both new investment by miners and improved efficiency of mining hardware. Both factors, especially the latter, suggest that we don't have to artificially keep very small facilities competitive by subsidizing them. We are not Robin Hood and we shouldn't be.

So, our algorithm should dampen the impact of difficulty increases instead of fully compensating for them.
It would help the network upgrade smoothly.
An adjustment of 30% to 50%, in response to a 100% increase in target difficulty, seems more reasonable to me than an exact proportional compensation for the new difficulty.
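Just to illustrate the shape of such a dampened response, the adjustment could scale mindiff by a fractional power of the difficulty ratio. The exponent below is only a placeholder of mine, not a proposed value:
Code:
def adjust_mindiff(mindiff, old_difficulty, new_difficulty, alpha=0.6):
    """Sub-proportional ("dampened") mindiff adjustment.

    alpha = 1.0 would fully compensate a difficulty increase (exact
    proportionality); alpha < 1.0 compensates it only partially.
    The value 0.6 is an illustrative assumption.
    """
    ratio = new_difficulty / old_difficulty    # 2.0 for a 100% increase
    return mindiff * ratio ** (-alpha)

# A doubling of the target difficulty lowers mindiff by about 34%
# instead of the full 50% that proportional scaling would give.
print(adjust_mindiff(1e-4, 1.0, 2.0))          # ~6.6e-05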




legendary
Activity: 988
Merit: 1108
Solid. Will include this formula and the proof in the white paper, if you don't mind.

I don't mind, as long as you consider the edits I made to fix some errors.
legendary
Activity: 1456
Merit: 1174
Always remember the cause!
On further reflection, if you randomly accumulate shares of weight (fraction of required difficulty) >= 10^-4 until their sum weight exceeds 1, then the expected number of shares is 5000.

I calculated wrong.

n shares expect to accumulate about n*ln(n)*10^-4 in weight, so we expect
a little under 1400 shares to accumulate unit weight...
Interesting, I'd appreciate it if you would share the logic behind the formula.

Consider a uniformly random real x in the interval [10^-4, 1].
Its expected inverse is the integral of 1/x dx from 10^-4 to 1, which equals ln 1 - ln(10^-4) = 4 ln 10.

Now if we scale this up by 10^4*T, where T is the target threshold, and assume that shares are not lucky enough to go below T, then the n hashes will be uniformly distributed in the interval [T, 10^4*T], and we get the formula above.


Solid. Will include this formula and the proof in the white paper, if you don't mind.
legendary
Activity: 988
Merit: 1108
On further reflection, if you randomly accumulate shares of weight (fraction of required difficulty) >= 10^-4 until their sum weight exceeds 1, then the expected number of shares is 5000.

I calculated wrong. Again. Edited for correctness:

n shares expect to accumulate about n * ln(10^4) * 10^-4 in weight, so we expect
a little under 1100 shares to accumulate unit weight...
Interesting, I'd appreciate it if you would share the logic behind the formula.

Consider a uniformly random real x in the interval [10^-4, 1].
Its expected inverse is the integral of 1/x dx from 10^-4 to 1, which equals ln 1 - ln(10^-4) = ln(10^4).

Now if we scale this up by 10^4*T, where T is the target threshold, and assume that shares are not lucky enough to go below T, then the n hashes will be uniformly distributed in the interval [T, 10^4*T], and we get the formula above.
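A quick Monte Carlo run reproduces this figure. The snippet below just draws weights under the stated assumption (hashes uniform in [T, 10^4*T], i.e. weights of the form 10^-4/u with u uniform in [10^-4, 1]) and is purely a verification aid:
Code:
import math, random

def average_shares(mindiff=1e-4, trials=2000, target=1.0):
    """Average number of shares needed for their weights to exceed `target`,
    with each weight = mindiff/u and u uniform in [mindiff, 1]."""
    total = 0
    for _ in range(trials):
        weight_sum, n = 0.0, 0
        while weight_sum < target:
            weight_sum += mindiff / random.uniform(mindiff, 1.0)
            n += 1
        total += n
    return total / trials

analytic = 1.0 / (1e-4 * math.log(1e4))   # ~1086, "a little under 1100"
print(round(average_shares()), round(analytic))
# the simulation comes out slightly higher because the final share
# overshoots the target; the analytic figure ignores that overshoot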

legendary
Activity: 1456
Merit: 1174
Always remember the cause!
Interesting, you put a lot of thought into this proposal.  I would support it and see how it goes.  The goal is really hard to reach.  The idea would be to increase difficulty to scale up operations.  Pool mining can be damaging, but one guy with a huge operation can be worse if no one can pool together.
Thanks for the support.

As for your argument about hardware centralization being more dangerous without pools:

It is a bit tricky. This proposal is not an anti-pool or pool-resistant protocol; instead, it is a fix for pooling pressure.

In other words, it does not prevent people from coming together and starting a pool; it just removes the obligation to join pools (and the bigger-pool-is-better implications), which is the current situation for almost any PoW coin.

EDIT:
It is also interesting to consider the situation with Bitmain. No doubt this company has access to the biggest mining farms ever, and yet Bitmain runs Antpool and insists on having more and more people point their miners at its pool. Why? Because it is always better to have more power, to be safe against variance, and to have smooth luck statistics.

So, I would say that after this fix there would be not only no pressure toward pooling, but also no incentive.
legendary
Activity: 1456
Merit: 1174
Always remember the cause!
On further reflection, if you randomly accumulate shares of weight (fraction of required difficulty) >= 10^-4 until their sum weight exceeds 1, then the expected number of shares is 5000.

I calculated wrong.

n shares expect to accumulate about n*ln(n)*10^-4 in weight, so we expect
a little under 1400 shares to accumulate unit weight...
Interesting, I'd appreciate it if you would share the logic behind the formula. It would be very helpful. To be honest I have not done much on it, and my initial assumption of about 4,650 shares is very naive. I was just sure that the average number of shares per block won't be any higher than that.

Thank you so much for your contribution. Smiley
legendary
Activity: 2294
Merit: 1182
Now the money is free, and so the people will be
Interesting, you put a lot of thought into this proposal.  I would support it and see how it goes.  The goal is really hard to reach.  The idea would be to increase difficulty to scale up operations.  Pool mining can be damaging, but one guy with a huge operation can be worse if no one can pool together.
legendary
Activity: 988
Merit: 1108
On further reflection, if you randomly accumulate shares of weight (fraction of required difficulty) >= 10^-4 until their sum weight exceeds 1, then the expected number of shares is 5000.

I calculated wrong.

n shares expect to accumulate about n*ln(n)*10^-4 in weight, so we expect
a little under 1400 shares to accumulate unit weight...
legendary
Activity: 1456
Merit: 1174
Always remember the cause!
SHA256 and other hash functions are NP-Complete problems: their solutions consume negligible time and resources to verify; it is basic "computer science"  Wink

Hash functions are not decision problems, so they cannot be NP-complete.
I could create a decision problem out of a hash function though.
Something relevant for mining would look like:

The set of pairs (p,y) where
  p is a bitstring of length between 0 and 256,
  y is a 256 bit number,
  and there exists a 256-bit x with prefix p such that SHA256(x) < y

Such a problem is in NP.
But it would still not be NP-complete, since there is no way to reduce other NP problems to this one.


Yes, it was my mistake to call it NP-complete; it is in NP. In the context of this discussion, when we refer to hash functions, the PoW problem (like the one you have suggested, a conditional hash-generation problem) is what we usually mean, but I should have been more precise.

This was posted in a chaotic atmosphere, but the point still stands: verifying shares (as opposed to the Prepared Block, or its counterpart in traditional PoW, the block) is a trivial job by definition, because it requires only verifying an answer to an NP problem.
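To make that concrete: checking a claimed share amounts to a single hash-and-compare, roughly as below (names are illustrative, not from the proposal):
Code:
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin-style double SHA256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_share(header: bytes, share_target: int) -> bool:
    """Verifying a submitted share is one constant-time check: hash the
    fixed-size header once and compare it against the (reduced) share
    target. This is the 'verify the certificate' step of an NP problem;
    no search is repeated."""
    return int.from_bytes(sha256d(header), "big") < share_target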
legendary
Activity: 988
Merit: 1108
SHA256 and other hash functions are NP-Complete problems: their solutions consume negligible time and resources to verify; it is basic "computer science"  Wink

Hash functions are not decision problems, so they cannot be NP-complete.
I could create a decision problem out of a hash function though.
Something relevant for mining would look like:

The set of pairs (p,y) where
  p is a bitstring of length between 0 and 256,
  y is a 256 bit number,
  and there exists a 256-bit x with prefix p such that SHA256(x) < y

Such a problem is in NP.
But it would still not be NP-complete, since there is no way to reduce other NP problems to this one.
legendary
Activity: 1456
Merit: 1174
Always remember the cause!
@anonymint

keep cool and remain focused, ...

Unfortunately, your last post brought nothing of substance to the table; instead you are continuing your holy war (against what?) with inappropriate language, as usual.

Please take a break, think for a while, and either leave this discussion (as you repeatedly promise to) or improve your attitude.

will be back  Wink
legendary
Activity: 1456
Merit: 1174
Always remember the cause!
  • Verification process involves:
    • Checking both the hash of the finalized block and all of its Shared Coinbase Transaction items to satisfy network difficulty target cumulatively
This is a serious problem with your proposal. The proof of work is not self-contained within the header.
It requires the verifier to obtain up to 10000 additional pieces of data that must all be verified, which is too much overhead in latency, bandwidth, and verification time.
The Shared Coinbase Transaction is typically 32 kB of data (an average of 4,500 items).

On further reflection, if you randomly accumulate shares of weight (fraction of required difficulty) >= 10^-4 until their sum weight exceeds 1, then the expected number of shares is 5000.
To be exact, it is 0.93 that must be exceeded.
Quote

BUT, the expected highest weight among these shares is close to 0.5 !
(if you throw 5000 darts at a unit interval, you expect the smallest hit near 1/5000)

Yes. To be more exact, since the shares are randomly distributed in the range from 0.0001 up to 0.93, the median would be 0.465.
Yet that is not the highest difficulty, just the median.
Quote

So rather than summarize a sum weight of 1 with an expected 5000 shares,
it appears way more efficient to just summarize a sum weight of roughly 0.5 with the SINGLE best share.
But now you're essentially back to the standard way of doing things. In the time it takes bitcoin to find a single share of weight >=1, the total accumulated weight of all shares is around 2.

All the overhead of share communication and accumulation is essentially wasted.

As you mentioned, that is closer to what traditional Bitcoin is doing and what I'm trying to fix. It is not collaborative and, as has been shown both theoretically and experimentally, it is vulnerable to centralization. The same old winner-takes-all philosophy leaves no space for collaboration.

As for the 'overhead' issues, this has been discussed before. Shares are not like conventional blocks; they take negligible CPU time to validate and negligible network bandwidth to propagate.

EDIT:
I have to take back my above calculations: some blocks may have as few as 2 shares and some as many as 9,301 shares to satisfy the difficulty, which yields an average of around 4,650 shares over a large number of rounds. The highest share weight is 0.93 and the lowest will be 0.0001; those are the only figures I have calculated, or tried to calculate, so far.
legendary
Activity: 988
Merit: 1108
  • Verification process involves:
    • Checking both the hash of the finalized block and all of its Shared Coinbase Transaction items to satisfy network difficulty target cumulatively
This is a serious problem with your proposal. The proof of work is not self-contained within the header.
It requires the verifier to obtain up to 10000 additional pieces of data that must all be verified, which is too much overhead in latency, bandwidth, and verification time.
The Shared Coinbase Transaction is typically 32 kB of data (an average of 4,500 items).

On further reflection, if you randomly accumulate shares of weight (fraction of required difficulty) >= 10^-4 until their sum weight exceeds 1, then the expected number of shares is 5000.

BUT, the expected highest weight among these shares is close to 0.5 !
(if you throw 5000 darts at a unit interval, you expect the smallest hit near 1/5000)

So rather than summarize a sum weight of 1 with an expected 5000 shares,
it appears way more efficient to just summarize a sum weight of roughly 0.5 with the SINGLE best share.
But now you're essentially back to the standard way of doing things. In the time it takes bitcoin to find a single share of weight >=1, the total accumulated weight of all shares is around 2.

All the overhead of share communication and accumulation is essentially wasted.
legendary
Activity: 1456
Merit: 1174
Always remember the cause!
@anunymint
I appreciate the fact that you have spent considerable time on this subject; it is good evidence that makes me even more convinced that:
1- You have good faith, and as for the trolling parts of your writings, you just can't help it and I should be an order of magnitude more tolerant with you  Smiley
2- You are smart and have been around for a long time, a good choice for chewing on a complicated proposal like PoCW. Again, more tolerant, as tolerant as possible and more ... I should repeat that and keep it in mind  Wink

I was nearly certain I had already mentioned it up-thread, but couldn’t quickly find it to quote it. So let me recapitulate that your PoCW design proposes to put 10,000 (and I claim eventually 100,000 and 1 million) times more proof-of-work hashes in the blockchain that have to be validated.
Nope. It is about 10,000 and will remain in that neighborhood for a long, long time; reaching 100,000 would take a century or so! I have already described this: I have no plan, and there won't be such a plan, to increase this number linearly with the network hashrate.
This proposal, with its current parameters, is about making solo mining 100,000 times more convenient right now; it is a good improvement regardless of what happens in the next few years and how we should deal with it.
Quote
This is going to make objectively syncing a full node (especially for a new user who just downloaded the entire blockchain since the genesis block) incredibly slow unless they have very high capitalization ASICs. And the asymmetry between commodity single CPUs and mining farms of ASICs will widen, so perhaps eventually it becomes impractical for a CPU to even sync from genesis. So while you claim to improve some facets, your design makes other facets worse. {*}
Unlike what you suggest, ASICs won't be helpful in syncing the blockchain. Full nodes are not ASICs and never utilise ASICs to validate a hash; they just compute the hash with their own CPU!
SHA256 and other hash functions are NP-Complete problems: their solutions consume negligible time and resources to verify; it is basic "computer science"  Wink


3. I expect your design makes the proximity issue worse. I had already explained to you a hypothesis that the Schelling points in your design will cause network fragmentation. Yet you ignore that point I made. As you continue to ignore every point I make and twist my statements piece-meal.

Good objection.
Schelling points (the transition points from Preparation to Contribution and from Finalization to the next round) have 7% of the block value cumulatively (5% for the first point and 2% for the second). It is low enough, and even that is not totally at stake:

For the first 5% part, the hot zone (the miner and its neighboring peers) is highly incentivized to share it ASAP, because it is not finalized and is practically worth nothing unless it gets enough support to find its way to finalization. Note that the neighbors are incentivized too: if they want to join the dominating current, they need their own shares to be finalized as soon as possible, and that requires the Prepared Block to be populated even though it is not theirs.

For the second Schelling point, the finalized-block-found event with its 2% of the block reward, hesitating to relay the information carries a very high risk of being orphaned by competitors (for the lucky miner) and of mining orphaned shares/finalized blocks (for the peers).

I understand you have a feeling that more complicated scenarios could be feasible, but I don't think so, and until somebody actually presents such a scenario we had better not be afraid of it.

I'm aware that you are obsessed with some kind of network partition because you think selfish mining is a serious vulnerability and/or propagation delay is overwhelming.
The network won't be divided, neither intentionally nor as a result of propagation delay, and if you are not satisfied with my assessment of propagation delay you should recall my secret weapon: incentivizing nodes to share their findings as fast as possible, to the extent that they will give it high priority. They will dedicate more resources (both hardware and software) to the job.

2- Although this proposal is ready for an alpha version implementation and consequent deployment phases, it is too young to be thoroughly understood...

Correct! Now if you would just internalize that thought, and understand that your point also applies to your reckless (presumptuous) overconfidence and enthusiasm.

Here you have 'trimmed' my sentence to do exactly what you repeatedly accuse me of. I'm not talking about other people not being smart enough to understand me and/or my proposal.
I'm talking about the limitations of pure imagination and of discussing the consequences of a proposal, any proposal, before it has been implemented and adopted.
Why would you tear my sentence apart, the very sentence you then continued quoting? Isn't that an act of ... let's get over such things, whatever.
Quote
...for its other impacts and applications, the ones it is not primarily designed for. As some premature intuitions, I can list:

  • It seems to be a great infrastructure for sharding, the most important on-chain scalability solution.
    The current situation with pools makes sharding almost impossible: when +50% of the mining power is centralized in the palms of a few pools (5 for Bitcoin, 3 for Ethereum), the problem is not just security and vulnerability to cartel attacks, as is usually assumed; more importantly, it is a prohibiting factor for implementing sharding (and many other crucial and urgent improvements).
    If my intuition proves correct, it would have a disruptive impact on the current trend that prioritizes off-chain over on-chain scalability solutions.
  • This protocol can probably offer a better chance for signaling and autonomous governance solutions

In the context of the discussion of OmniLedger, I already explained that it can’t provide unbounded membership for sharding, because one invariant of proof-of-work is that membership in mining is bounded by invariants of physics. When you dig more into the formalization of your design and testing, then you’re going to realize this invariant is inviolable. But for you now you think you can violate the laws of physics and convert the Internet into a mesh network. Let me link you to something I wrote recently about that nonsense which explains why mesh networking will never work:

https://www.corbettreport.com/interview-1356-ray-vahey-presents-bitchute/#comment-50338
https://web.archive.org/web/20130401040049/http://forum.bittorrent.org/viewtopic.php?id=28
https://www.corbettreport.com/interview-1356-ray-vahey-presents-bitchute/#comment-50556
I'll check your writings about sharding later; thanks for sharing. But as I have mentioned here, these are my initial intuitions, provided to show the importance and beauty of the proposal and the opportunities involved. I just want to point out how pointless it would be to simply fight it instead of helping to improve and implement it.
A thorough analysis of the details in the design would convince a non-biased reader that this proposal is well thought out and not so immature as to invite a slam-dunk, trivial rejection; on the contrary, considering the above features and promises, and the importance of pooling pressure as one of the critical flaws of Bitcoin, it deserves a fair and extensive discussion.

https://www.google.com/search?q=site%3Atrilema.com+self-important
https://www.quora.com/Do-millennials-feel-more-entitled-than-previous-generations/answer/Matthew-Laine-1
https://medium.com/@shelby_78386/if-you-want-the-country-to-be-less-polarized-then-stop-writing-talking-and-thinking-about-b3dcd33c11f1


Now you are just fighting (for what?) ...
You are accusing me of having this or that personality, of being over-confident, ... whatever. Instead, I suggest you provide more illuminating points and objections and make me reconsider parts of the proposal, rather than repeating just one or two objections while playing out your Game of Thrones scenes.

Well, that was a hell of a post to reply to. I'll come back to it later.
Cheers
legendary
Activity: 1456
Merit: 1174
Always remember the cause!

I don’t think my argument is weak. I think my analyses of your design is 100% spot on correct. And I encourage you to go implement your design and find out how correct I am! Please do!

You continue to not mention the point I made about incremental validation overhead and accumulated propagation delay and its effect on orphan rate, especially when you have effectively decreased the block period to 15 seconds for the Finality phase Schelling point and 30 seconds for the Prepared block Schelling point.

And you continue to not relate that I also pointed out that as the transaction fees go to $50,000, with Lightning Networks Mt. Gox hubs dominating settlements in the 1 MB blocks (or pick any size you want which does not spam the network with low transaction fees, because the miners will never agree and unlimited block sizes drive the orphan rate up and break security), then active UTXO will shrink because most people can’t afford to transact on-chain. Thus the MRU UTXO will be cached in L3 SRAM. And the block will have huge transactions and not many transactions. Thus your entire thesis about being I/O bound on transaction validation will also be incorrect.

You can’t seem to pull all my points together holistically. Instead you want to try to cut a few of them down piece-meal out-of-context of all the points together.
I'll remain silent about the trolling part; I'm realising you can't help it, and it is just the unintentional behavior of a polemicist when things get too intense.

Let's take a look at the technical part of your reply:
1- There is no incremental overhead; I've never mentioned any incremental increase/decrease (enforced by the protocol or by scheduled forks) in the proposed parameters, including the relative difficulty of contribution shares. I have to confess, though, that I'm investigating this possibility.
I will keep you informed about the outcome, which in any case will not be a simple linear increase with network hashpower.

2- Also, propagation delay won't accumulate even if we were to increase (incrementally or suddenly) the driving factors behind the number of contribution shares, because the validation cost is and remains negligible for nodes. Remember? The client software is I/O bound and contribution-share validation is CPU bound (I'll come to your new objection about it later).

3- I am neither 100% against your analysis of Lightning nor in favor of it; I'm not that much of a believer in LN as a scaling solution, but it won't help your position in this debate:

Your arguments:
You are speculating that transactions will go off-chain in the future, that the main chain will be busy processing huge transactions produced by flush operations in LN nodes, and that at the same time network nodes will manage to keep the UTXO set (its most recently used part) in SDRAM, which saves them from frequent disk access; so they will no longer be (relatively) I/O bound, and the processing overhead of contribution shares will begin to look more important and will eventually become a bottleneck. Right?

Answer:
  • You are speculating TOO much here; my perception of LN and off-chain solutions differs moderately
  • Having the MRU UTXO in an SDRAM cache won't help that much; the task would still stall on RAM access and would still hit the disk for page faults and, most importantly, for writing to the UTXO set after the block has been verified
  • Also, a relative improvement in a node's performance when validating full blocks is not a disaster; the number of blocks is the same as always

4- As for your expectation that I not cut your objections down into pieces: that is like asking me to troll against a troller. On the contrary, I prefer to get more specific and resolve issues one by one, whereas you want to keep the discussion at the ideological level, being optimistic or pessimistic about this or that trend or technology and so on ... I think that, in the context of making assessments about a proposed protocol, my approach is more practical and useful.

Firstly, doubling or tripling the number of shares doesn't create a significant problem in terms of share validation costs; it is still a CPU-bound process, and in the worst case some very low-profile nodes may need to spend $200 or so on a better processor.

You’re ignoring that I argued that your thesis on transaction validation bounded validation delay will also change and the validation of the shares will become incrementally more relative. And that taken together with the 15 second effective Schelling points around which orphaning can form. You’re not grokking my holistic analyses.

There will be no order-of-magnitude (i.e. tens or hundreds of times) increase in the network hash power in the foreseeable future, and I did you the favor of not simply rejecting this assumption; instead I tried to address more probable scenarios, like a 2 or 3 times increase in the next 2-3 years or so.
Although it is good to see the big picture and take cumulative effects into consideration, it won't help if you don't have a good understanding of each factor and its importance.

You are saying something like:
           look! there are so many factors to be considered, isn't this terrifying?
No! It is not terrifying as long as we are able to isolate each factor and understand it deeply, instead of being terrified, or terrifying people, by it.
legendary
Activity: 1456
Merit: 1174
Always remember the cause!
Why is the thread being derailed by some comments about me? What does anything I did or did not do have to do with the discussion of the technological facts of PoCW.

Really nothing, besides the need to stop you from trolling.

Please define trolling and then show that I did it. Specific links or quotes please.
No need to go that far; this post of yours is 90%+ nothing other than trolling.
he's been claiming for 5-6 years that he's working on a "blockchain breakthrough"

I challenge you to quote from the past where I extensively made such a claim 5 or 6 years ago.

EDIT: {and a long story about what you have been about in the last 5-6 years}
In any case, I welcome your ridicule. It motivates me. Please do not stop. And please do report me to @theymos so this account can be banned so I stop wasting time posting on BCT.


Like this. Please ... just put an end to this, if you may. You did something inappropriate, and some objections were made about it. Let it go.

What did I do that was inappropriate “5-6 years ago” that was related to “claiming […] he's working on a ‘blockchain breakthrough’”?  Specific links or quotes please.

If you can’t specifically show that the SPECIFIC “5-6 years ago” allegation is true, then you are the one who is trolling by stating the lie, “You did something inappropriate”.

I politely asked you to end this, but you love twisting it further ... it is trolling in the specific context we are in ... I was not the one who said things about the last 5-6 years of your history, FYI.
My technological comments stand on their own merits regardless what ever is done to cut my personal reputation down.


Absolutely not. You questioned the overhead of the validation process for miners in my proposal and I answered it solidly:

I said my “My technological comments stand on their own merits regardless what ever is done to cut my personal reputation down”.

That does not mean I claim “My technological comments” are unarguable. Only that “my personal reputation” has nothing to do with the discussion of the technology.

This is another, and the most important, form of trolling you commit repeatedly. Your argument here is void and makes no sense:
Once an objection has been made, has proven to be irrelevant or false, and the proposal has addressed the asserted issues, it should be dropped, not maintained. The way you are putting it, every issue remains open forever and can be used as a toy by trollers making false claims whenever they wish.
There is no overhead because there is no I/O involved; the submitted contribution shares carry exactly the same Merkle root that has already been evaluated (once, when the Prepared Block was evaluated by the miner who then decided to contribute to it).

I already refuted that line of logic in that the ratios over time have to either increase, or the capitalization of the miner within your current 10,000 factor will place them in the crosshairs of having to kowtow to the oligarchy.
And I punted on the entire concept, because I stated mining is becoming ever more centralized so it’s pointless and futile to try to make a protocol for small miners.

Another example of trolling: after you have been clearly informed about the negligible cost of validating shares, instead of closing the case and moving on, you just deny everything by bringing forward a very weak argument to keep the issue open no matter what. You can't help it; you need issues to remain open forever so you can use them to ruin the topic.

In this case, you are saying that future increases in network hash power should be compensated for by increasing the number of shares, and that this will eventually be problematic. In other words, you are saying that in 2-3 years the hashrate will probably double and small miners would again experience the variance phenomenon, then devs will improve the protocol and double the number of shares via a hard fork, and this increase would somehow prove that verification of shares is a weakness!

Firstly, doubling or tripling the number of shares doesn't create a significant problem in terms of share validation costs; it is still a CPU-bound process, and in the worst case some very low-profile nodes may need to spend $200 or so on a better processor.

Secondly, alongside increases in network hash power, although the relationship is nonlinear, we will see improvements in mining devices and their efficiency.

Quote
Only a troller misrepresents to the thread what I wrote in the thread as explained above.

Now I am done with you.

Bye.


I should have stuck to my first intuition and never opened the thread. Or certainly never had posted after I read that horrendously bad OP description of the algorithm. That was indicative of the quality of the person I am interacting with unfortunately. I have learned a very important lesson on BCT. Most people suck (see also and also!). And they don’t reach their potential. The quality people are very few and far between, when it comes to getting serious work done.


See? You are offending me, my work, bitcointalk, and its members ... very aggressively, at the end of the very post in which you are asking for evidence of you being a troll! I can imagine you may reply like this:
"I never said I'm not a troll; I just wanted you to give evidence of it, so I 'maintain' my inquiry for evidence. This issue, whether I am a troll or not, is open just like all the other issues we have been arguing about."!

Quote
In a lighter social setting a wider array of people can be tolerated (especially when we do not need to rely on them in any way).
Tolerance is good, but trolling is not among the things to be tolerated, imo.
legendary
Activity: 1456
Merit: 1174
Always remember the cause!
Why is the thread being derailed by some comments about me? What does anything I did or did not do have to do with the discussion of the technological facts of PoCW.
Really nothing, besides the need to stop you from trolling.

he's been claiming for 5-6 years that he's working on a "blockchain breakthrough"

I challenge you to quote from the past where I extensively made such a claim 5 or 6 years ago.

EDIT: {and a long story about what you have been about in the last 5-6 years}
In any case, I welcome your ridicule. It motivates me. Please do not stop. And please do report me to @theymos so this account can be banned so I stop wasting time posting on BCT.

Like this. Please ... just put an end to this, if you may. You did something inappropriate, and some objections were made about it. Let it go.

Quote
My technological comments stand on their own merits regardless what ever is done to cut my personal reputation down.


Absolutely not. You questioned the overhead of the validation process for miners in my proposal and I answered it solidly: there is no overhead because there is no I/O involved; the submitted contribution shares carry exactly the same Merkle root that has already been evaluated (once, when the Prepared Block was evaluated by the miner who then decided to contribute to it).
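A rough sketch of the check being described (the field layout, names and offset are illustrative assumptions of mine, not the proposal's exact format):
Code:
import hashlib

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def validate_contribution_share(share_header: bytes,
                                prepared_merkle_root: bytes,
                                share_target: int,
                                merkle_root_offset: int = 36) -> bool:
    """Validate a contribution share with no disk I/O:
    1) its Merkle root must equal the Prepared Block's root, which the node
       has already validated once, so no transactions are re-checked;
    2) one double-SHA256 of the fixed-size header against the share target."""
    root = share_header[merkle_root_offset:merkle_root_offset + 32]
    if root != prepared_merkle_root:
        return False
    return int.from_bytes(sha256d(share_header), "big") < share_target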

Only a troller keeps repeating this question over and over, in an aggressive way full of insults and hype.

A decent contributor acting in good faith may well be doubtful about predicates like 'there is no I/O', 'the Merkle tree does not have to be re-validated', 'the shares share a common Merkle tree', ... but with less confidence in the validity of his or her position, understanding that there is a strong possibility those doubts can be removed trivially by the designer of the protocol posting a few references to the original proposal. Actually that is exactly the case here, because all three of the predicates under consideration are absolutely true by design.

When the doubts are cleared up, the discussion can move a step forward. This is no war; there is nothing to conquer other than the truth.



legendary
Activity: 1456
Merit: 1174
Always remember the cause!
Proof of everything other than Work

Annoymint doesn't like the implications of proof of work; he's been claiming for 5-6 years that he's working on a "blockchain breakthrough", but never proves he's working on anything Smiley


@Annoymint, you need to start a new Bitcointalk user called "Proof of everything other than work"

I see: being trapped by one's own narrative, a very common threat for all of us. I guess we have to do some kind of meditation or Zen to avoid it, or to heal.