
Topic: Taproot proposal - page 7. (Read 11253 times)

hero member
Activity: 667
Merit: 1529
June 07, 2021, 02:08:57 AM
Quote
It even makes me think that when time and tech converge along with bitcoin network use we *might* even be able to see something more controversial, like a block size increase, be implemented in the same way.  Again, this would be when consensus was there, and all the stakeholders were ready, etc.  If ever. Just saying.
This is not necessary. I can imagine some future Segwit version with Pedersen Commitments (without hiding amounts, to keep it small enough). Then bigger blocks won't be needed if you could deposit your coins to some new address type that allows joining many transactions together. I think if we have a transaction chain A->B->C->...->Z, there is no need to increase the block size; what is really needed is to store only the A->Z transaction and make it resistant to double-spending attempts.

I wonder if joining transactions could be done with Taproot. So far, I have no idea how to do it in a safe way, but if it could be reduced to ECDSA keys, then they could be joined with Schnorr signatures, so maybe there is some way to do so.
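The additively homomorphic property that would make such transaction joining conceivable can be sketched with toy Pedersen commitments. This is an illustration only: the modulus and generators below are made-up small-integer parameters, not a secure elliptic-curve group, and nothing here is the actual proposal being speculated about.

```python
# Toy Pedersen commitment: C(v, r) = g^v * h^r (mod p).
# Illustrative parameters only -- NOT cryptographically secure.
p = 2**61 - 1   # group modulus (a Mersenne prime, chosen arbitrarily)
g = 3           # assumed generator
h = 7           # second generator whose discrete log wrt g is (pretend) unknown

def commit(value, blinding):
    return (pow(g, value, p) * pow(h, blinding, p)) % p

c1 = commit(5, 111)    # commitment to amount 5
c2 = commit(7, 222)    # commitment to amount 7

# Multiplying commitments commits to the sum of the amounts (and blindings):
combined = (c1 * c2) % p
assert combined == commit(5 + 7, 111 + 222)
```

It is this sum-preserving property that lets a chain of commitments A->B->...->Z be collapsed into a single verifiable A->Z statement, which is the intuition behind the post above.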
copper member
Activity: 1610
Merit: 1899
Amazon Prime Member #7
June 07, 2021, 12:47:41 AM
so because taproot devs are not playing these flip flop false promise backstabbing games this year. they are not getting any drama towards taproot
Wrong. The lack of "drama" is simply because unlike 2017 there is nothing to be gained from that drama. For example there is no shitcoin like bcash to be created to make money.
When SW was being proposed, there was a community debate over whether bitcoin should be hard forked to allow for a max block size increase. SW was an alternative to increasing the maximum block size, achieved via the use of signature weights.

There are no alternative proposals to Taproot, and there is no serious technical opposition to it either, that I am aware of. I suspect the delay in miners signaling support for Taproot is due to them reviewing the code, running their own tests, and checking for security vulnerabilities.
legendary
Activity: 3430
Merit: 10505
June 06, 2021, 11:31:14 PM
We have proven that not only can the network be upgraded, but it can be done in a clear, orderly, quick fashion!
To be fair, we had proven this half a dozen times (maybe more) before the 2017 fork, when each upgrade went through smoothly and with little or no drama. BIP16, BIP34, BIP65, BIP66, and BIP112 are some of the soft forks I can think of.

so because taproot devs are not playing these flip flop false promise backstabbing games this year. they are not getting any drama towards taproot
Wrong. The lack of "drama" is simply because unlike 2017 there is nothing to be gained from that drama. For example there is no shitcoin like bcash to be created to make money.
copper member
Activity: 37
Merit: 14
June 06, 2021, 09:02:30 PM
MARA Pool has signaled for two blocks. :)
staff
Activity: 4158
Merit: 8382
June 06, 2021, 07:24:50 PM
segwit did hurt legacy users. eg: (WITNESS_SCALE_FACTOR = 4;) means legacy tx became more expensive*
That isn't true. Segwit increased capacity and as a result lowered fees for everyone; maybe not as much as you would like, but that doesn't justify spreading hateful misinformation.
legendary
Activity: 4186
Merit: 4385
June 06, 2021, 05:32:53 PM
taproot did not impose any force measure.
there were some issues in regards to segwit, such as the 'increase capacity 2.4x' and 'cheap fee' pledges*, which were obvious empty/false promises, and those 2 things were the 2 things people actually wanted at the time

taproot is different. it's not really over-promising anything
it's not pretending to offer one thing while delivering something else
it's not attempting to force itself

so there is no controversy or resistance

taproot does not really hurt legacy/native users. if people want to use taproot they first have to move funds into taproot addresses.
segwit did hurt legacy users. eg: (WITNESS_SCALE_FACTOR = 4;) means legacy tx became more expensive*

so because taproot devs are not playing these flip flop false promise backstabbing games this year. they are not getting any drama towards taproot

*to name a few

EDIT to answer gmax below....


pre-segwit-activation blocks reached a peak of 2.5k average tx per block..
post-segwit-activation blocks reached a peak of 2.7k average tx per block.. check the all-time graph
so gmax misinforms with a +10% couple-day peak, but a 0% 3-year average change, as his answer to 'cheap'
but does not acknowledge the 4x premium his friends added to the code that disadvantages 60% of users

the witness scale factor is used to make legacy transactions count 4x more than segwit ones
everyone knows that a hard, real byte of legacy data is trapped into being recognised as a more bloated 'vbyte' of 4x its actual hard-drive-stored size
it's the very reason why, while blocks can be 4mb, bitcoin code can only store 1mb of legacy per block
search github and you will find this 4x premium in many uses of WITNESS_SCALE_FACTOR
1, 2, 3, 4, 5, 6
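The 4x accounting being argued about here is easy to reproduce; a minimal sketch of BIP141-style weight/vsize arithmetic (the byte counts below are made-up, not a real transaction):

```python
WITNESS_SCALE_FACTOR = 4

def vsize(base_bytes, witness_bytes):
    # BIP141: weight = base_size * 4 + witness_size * 1
    weight = base_bytes * WITNESS_SCALE_FACTOR + witness_bytes
    return (weight + 3) // 4   # virtual bytes, rounded up

# Hypothetical sizes: a legacy tx is all base bytes, while a segwit tx
# moves its signatures into the witness part.
legacy = vsize(base_bytes=250, witness_bytes=0)    # -> 250 vbytes
segwit = vsize(base_bytes=140, witness_bytes=110)  # -> 168 vbytes
```

Both transactions occupy 250 raw bytes on disk, but the segwit one is billed for fewer vbytes because its witness bytes weigh a quarter as much; whether you call that a segwit discount or a legacy premium is exactly the argument in this thread.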

theres more but i limited myself to spending just 20 seconds debunking gmax. anyone can find these multipliers in 20 seconds.. even gmax
segwit does not discount transactions. it just avoids a premium.
and its not that segwit activation gave all bitcoin users a discount / avoided premium fee.
only those specifically spending a segwit UTXO get to avoid the premium fee

gmax. at least be honest about how your team decided to play around with the fee structure to disadvantage legacy address users, and to promote/incentivise segwit users to try converting people
legendary
Activity: 3738
Merit: 5127
Whimsical Pants
June 06, 2021, 04:56:27 PM
You know... it's funny.

I am certainly among those who are happy to see Taproot being activated both so smoothly, and quickly!

But I kind of think there is a bigger story in this taproot activation that we are not quite grasping yet.

That was the fact that this has been (please apply JJG's jinx disclaimer here as well) an absolute tour de force for bitcoin governance.  We have proven that not only can the network be upgraded, but it can be done in a clear, orderly, quick fashion!

It even makes me think that when time and tech converge along with bitcoin network use we *might* even be able to see something more controversial, like a block size increase, be implemented in the same way.  Again, this would be when consensus was there, and all the stakeholders were ready, etc.  If ever. Just saying.

In the aftermath of the SegWit activation, I think we may have become overly pessimistic about what we may or may not be able to do insofar as network upgrades/tweaks.  The reason I chose the idea of the block size increase for this statement is that I truly did not think we would have a chance at something like that as the protocol became more and more set in stone.  Maybe not?!?
staff
Activity: 4158
Merit: 8382
June 06, 2021, 03:39:09 PM
It would be neat if taproot.watch would add an indicator of how low the signaling would have to fall to block it this period.
legendary
Activity: 3696
Merit: 10155
Self-Custody is a right. Say no to"Non-custodial"
June 06, 2021, 02:38:26 PM
I hate to jinx this, but I am going to report anyhow on a seemingly near inevitability that this thingie is getting locked in this period.

Of course, getting the information from https://taproot.watch/

I understand that no one wants to count their chickens before they are hatched, and I am in that same camp.. so I would not be saying anything if I thought that there were some kind of meaningful and material risk that my celebration could be gamed in such a way to show that I am wrong (or to prove me wrong).. because no one is going to give any shits about what I say or think (except maybe my mom).

What I am trying to say is that there has to be some pretty damned extreme changes in miner behavior for this thing NOT to get locked in this difficulty period.. not out of the question.. but still odds?  I am going to say without specifying too much... "quite low".. and that is another reason for this ccccciiiiiiiittttttttteeeeee post.

As I type, we have about 97.85% of blocks having already signaled for taproot - more specifically, 1,093 out of the 1,117 that have already been mined.

We also have ONLY 899 blocks remaining in this difficulty period, and only 722 of them (which is about 80%) have to signal for taproot to reach the 90% threshold.
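The arithmetic behind those figures can be checked in a few lines (assuming the standard 2016-block difficulty period and the Speedy Trial 90% threshold; the mined/signaling counts are the ones quoted above):

```python
import math

PERIOD = 2016                          # blocks per difficulty period
THRESHOLD = math.ceil(PERIOD * 0.90)   # 1815 blocks must signal

mined = 1117        # blocks mined so far this period
signaling = 1093    # of which signaled for taproot (~97.85%)

remaining = PERIOD - mined             # blocks left in the period
still_needed = THRESHOLD - signaling   # further signaling blocks required

print(remaining, still_needed, round(100 * still_needed / remaining, 1))
# 899 722 80.3
```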

As I type, I am considering opening up my specially-dedicated champagne right now, and maybe treating myself to an especially-dedicated slurpy too.  #justsaying.. call me premature, if you must.. I don't give no cares.    :P :P    ;) ;)
legendary
Activity: 4466
Merit: 1798
Linux since 1997 RedHat 4
June 05, 2021, 01:18:59 AM
for the point about pools.
il use one example.
slushpool has 3 servers for btc. and another 3 for zcash
thats 6 different instances separated by physical space so no butting heads.
they may seem like a single pc because of the brandname "slushpool" not being plural.
but i guarantee you they have separate machines
You seem to have missed my reply about that. Read it again - the first reply in my post.

Quote
as for saying mining is 2 parts being the asic and the pc connected to a bitcoind or a pool.. thats 2013 era stuff
This is today as well as 8 years ago.
They tend to call it a 'controller' - it's a computer separate from the asics mining, sitting inside the box for some miners and outside for others, connected to the hash boards, usually running linux.

Quote
seems you're nitpicking for nitpicking's sake.
shame it's based on 8+ year old scenarios
Everything I've said in my previous post is how it currently works.
The only uncommon part is people solo mining to their own bitcoind, though people still do this for what's called 'lottery mining'; it is indeed a risky venture also.

Quote
as for the bit about 'why would pools not pre-broadcast their private tx before block..
i said most pools do broadcast.. but alas nothing is forcing them.
thus its an exploit that could be used.
previously.. much much more recently than 8+ years ago i have seen pools do just that.
and yes the amount of times nodes have had to request tx's because they are not in their mempools has been a factor in the confirmation process.
it is good practice to broadcast tx's before block has propagated. but its a known thing that it is not a 100% case that good practices are followed
Indeed nothing but logic is forcing them to.
The risk of losing a block, and thus losing a few hundred $k, does help most of them make that best decision.
It will not be common to not have a transaction before a block arrives, since the pool who found the block will have seen the transaction before even sending the work to the pool's miners; so although it is indeed possible by accident rather than by design, it won't be common.
For well-connected pools I'd suggest that it should be rare.
I'd take a guess at around 1 in 100 blocks (or better) since 21st Dec :)
legendary
Activity: 4186
Merit: 4385
June 04, 2021, 10:39:13 PM
for the point about pools.
il use one example.
slushpool has 3 servers for btc, and another 3 for zcash
thats 6 different instances separated by physical space, so no butting heads.
they may seem like a single pc because of the brandname "slushpool" not being plural.
but i guarantee you they have separate machines

talking about solo mining is 2012 era stuff
as for saying mining is 2 parts being the asic and the pc connected to a bitcoind or a pool.. thats 2013 era stuff

solomining aint a thing anymore, and asics these days dont connect via a PC; they just have a network cable direct to a router.

seems you're nitpicking for nitpicking's sake.
shame it's based on 8+ year old scenarios

as for the bit about 'why would pools not pre-broadcast their private tx before the block'..
i said most pools do broadcast.. but alas nothing is forcing them.
thus it's an exploit that could be used.
previously.. much more recently than 8+ years ago, i have seen pools do just that.
and yes, the number of times nodes have had to request tx's because they were not in their mempools has been a factor in the confirmation process.
it is good practice to broadcast tx's before the block has propagated. but it's a known thing that good practices are not followed 100% of the time


anyway..
all my points in this post and prior posts of this topic are just to show that while some are worried about chainsplits/orphans and even spv issues to the network (facepalm) and solomining issues (double facepalm) if/when taproot activates, those risks are not even a 1% risk.. not even a 0.1% nor 0.01% risk. so relax
legendary
Activity: 4466
Merit: 1798
Linux since 1997 RedHat 4
June 04, 2021, 08:17:02 PM
pools which manage multiple coin use different servers per coin. thus upgrades of one coin dont affect/delay other coin upgrades. (different men in different office.. not banging heads together)
A merged mined (mm) coin requires supporting that in the coinbase of the Bitcoin work sent to the miners.
Thus the Bitcoin work generation requires code changes when supporting any scamcoin, or when the scamcoin changes anything related to that.
Each Bitcoin work generation also requires access to the scamcoin so that the mm information in the coinbase is accurate.
Most pools do this (look for a larger coinbase sig with the letters 'mm' in it).

If instead you are talking about a scamcoin being directly mined on a pool, well that's off topic here anyway :)

Fibre network/ merchants
Alas, the public FIBRE network has not been in existence for quite a while.
Any pool or solo miner needs their bitcoind connected directly to, or at most 1 step away from, the large pools.
Each step will slow down the propagation of any blocks you find.
If you want to compete in the Bitcoin world on your own, you need a network configuration able to compete with the large pools.
legendary
Activity: 4186
Merit: 4385
June 04, 2021, 01:58:06 PM
we both know the network layout

its in tiers like:
pools
Fibre network/ merchants
user fullnodes/lite wallet servers/aws servers
spv wallets/phone apps

no top-tier fullnode in the first 3 categories/tiers will waste one of its peer slots on a small SPV user
they want to be well connected to secure nodes that have static IPs and full blockchains.

so by the time blocks have propagated hop by hop down the layers of nodes and reached an SPV wallet, most of the duff blocks have already been rejected by the nodes up the layers, and so spv wallets wont even get to see them

spv wallets are for the bottom tier of the network that come and go off/online sporadically even more so than user fullnodes.

its like torrents (analogy)
pools are the seeders.. spv are the leechers
pools wont want leechers attached to them directly.
heck, even merchants dont want leechers

SPV leechers rely on the many tiers to have been the buffer that removes unwanted junk.
much like torrent leechers wait until there are many seeds before downloading, because they want the seeders before them to have weeded out the files that contain trojans/viruses.

again, there would be a risk if an spv was a direct peer to an intentionally malicious pool. but given the way the network has self-regulated and chosen its peers with whitelisting and ban hammers over time,
an SPV wont be found connected to a pool directly. and a merchant wont then be connected to an spv (as spv dont retain block data to pass along)

so there is no pool->spv wallet->merchant scenario
a merchant will never leach from a spv for obvious reasons.
staff
Activity: 4158
Merit: 8382
June 04, 2021, 12:55:15 PM
as its the block source from multiple peers that is the extra mitigating factor that avoids following the wrong blocks for too long
I'm aware of no SPV wallet that looks to multiple sources, and it would almost certainly be a waste of time to do so--- it's cheap and easy for an attacker to outnumber the honest hosts on the network (that's part of why it's used for consensus!); if an attacker can be one of your peers he can be multiple without much more difficulty.

SPV wallets are strongly predicated on miners having validated the block and being honest; to the extent that isn't true, SPV is just not particularly secure.
legendary
Activity: 4186
Merit: 4385
June 04, 2021, 03:38:20 AM
just to clarify many above posts

pools = servers that collate the transactions, make block templates, and thus the hash to give to asics
miners = asics that never see a tx. they only see a hash (+nonce range +difficulty) and send back a new hash solution meeting a difficulty threshold.

pools arent miners. miners arent pools
pools which manage multiple coins use different servers per coin. thus upgrades of one coin dont affect/delay other coin upgrades. (different men in different offices.. not banging heads together)

miners (asics) dont send every attempt back to a pool. an asic is provided with a hash, nonce range and difficulty target. if they run out of the nonce range allocated to them, they just request another nonce range
they wont send a hash back unless it meets the threshold

..
anyways
(although most tx are prevalidated during tx relay beforehand, some blocks include new tx not pre-relayed, thus requiring nodes to request those tx during the block checks.. its actually this tx-request delay that extends block validation delay a lot of the time (exploitable))

when a blockX is propagated headers-first,
the <2min timespan of propagation covers: analysing the difficulty of the solved hash, seeing the tx list, seeing which tx are missing, fetching missing tx, validating fetched tx.. sha-ing the block, making sure it matches the hashes announced earlier, confirming the block.
(i dumbed down the entire process. dont nitpick)

pools have 3 options during this <2min window from header first to confirm/reject result of blockX:
1. ([ethically] start an empty block)
   it takes milliseconds to add the previous hash, sha the template, and send the hash to asics with a nonce range and diff,
   hoping that their workers can get a block solve in 2 minutes or less while the pool validates blockX.
   once blockX has been analysed.
   a. if faulty.
       i. return back to their own blockX at last nonce range and finish off their own blockX hash race
       ii. start new blockX from 0nonce again
       iii. if another pool propagated a blockX in that interval, start an empty blockY on the other blockX until analysed
   b. if valid
       i. continue with empty blockY
         (usually empty blocks are only spotted on the network <2mins after the previous block)
         (only roughly 280 blocks of 58k are empty (2020 stats). so its not that often, <0.5%)
       ii. start a new blockY including unspent tx *

2.([ethically] continue blockX)
   this is where a pools sees the header but needs time to fetch&analyse tx list and other checks
   so they continue their own blockX
   once propagated blockX has been analysed
   a. if faulty
       i. no fuss. they just continue on with their own blockX effort.(rejecting the propagated blockx)
   b. if valid.
       i. they create new blockY with unspents included*

*(a block template takes a few seconds to collate the new unspent list, hash it, and send the hash to asics with a nonce range)
(this is usually the case in 99.5% of pool strategies)
(some pools have a template ready with transactions in, and just remove spents (half-filled blocks))
(some pools wait until all spents of the prev block are discarded, then make the block from scratch (full block))

3. ([intent or ignorant] build on first propagated blockX no matter what)
    this is where a pool sees the header and just builds on it no matter the content validity
    this case in regards to TR activation is (after activation) the 5% pools not fully validating
    a. if faulty.
       i. they build on faulty block X with their blockY.
         if they solve blockY first. the 95% of pools would orphan blockY because they already rejected blockX
         (orphan because child is rejected by society because parent is lost)
       ii. continue with a blockZ even though 95% already orphaned blockY = pool created its own altcoin
          which 95% pools will keep orphaning off until ban hammering nodes propagating such blocks
    b. if valid
        i. pool is part of the network for now. but just doesnt know what is fully in the blockX
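The three strategies can be compressed into a sketch (the function name and return labels are hypothetical, just to make the branches above explicit):

```python
def pool_reaction(strategy, block_valid):
    """What a pool mines next when a new blockX header arrives."""
    if strategy == 1:   # start an empty block while validating in parallel
        return "full blockY on blockX" if block_valid else "resume own blockX"
    if strategy == 2:   # keep hashing our own candidate until validation ends
        return "new blockY on blockX" if block_valid else "continue own blockX"
    if strategy == 3:   # build on the header with no validation at all
        return "blockY on blockX (orphan risk)"
    raise ValueError("unknown strategy")

# rough odds from the post: ~5% of hashrate skips full validation, and a
# propagated block turns out faulty well under 0.5% of the time
orphan_risk = 0.05 * 0.005   # = 0.00025, i.e. about 0.025%
```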

..
the chance of an orphan-heavy scenario due to 3.a.i&ii is not just under 5%, but under 0.5% of that 5%
the only way to up that 0.025% chance is if the pool in question has more than 5% hashpower.. and on top of that they are intentionally making faulty blocks, and on top of that willing to build on those blocks
.. but overall its like a 0.025% risk generally
so dont be too worried about orphan drama.

but as i said, the only real risk is merchants accepting payment with 0/low confirms from a single blockchain-source peer. as having the block source from multiple peers is the extra mitigating factor that avoids following the wrong blocks for too long
staff
Activity: 4158
Merit: 8382
June 03, 2021, 06:47:43 PM
Or is issue that those patches are proprietary and what gives them their competitive advantage.
It's mostly not patches but external mining software. Some of the less good things they do are done because that's what's easy to do outside of the daemon. Not so fun to reimplement consensus rules, so why bother?

The viabtc mining code is public and you can see many such approximations.  E.g. they decide if an incoming block meets the target by deriving a number of leading zeros from the floating-point difficulty number rather than doing an accurate target test.  So if you make a block with a lot of leading zeros that still doesn't meet the target, that code will temporarily switch to it and mine children that will be ignored by every bitcoin node and wallet.   There is no meaningful computational cost in doing the test accurately-- it's just more implementation effort for an external codebase. The fact is that it's fairly easy to attack (mining hardware ordinarily returns sub-target work, so it would be easy enough to filter that for work that met the relaxed test but not the real target). Presumably they'd fix it once they noticed they lost blocks to it.
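The gap between the two tests is easy to demonstrate with toy values (real nodes decode the exact target from the block header's nBits field; the target below matches Bitcoin's genesis-era target):

```python
# Exact consensus rule: the block hash, read as a 256-bit integer,
# must not exceed the target.
TARGET = 0xFFFF << 208   # i.e. 2**224 - 2**208

def exact_check(block_hash):
    return block_hash <= TARGET

def leading_zero_check(block_hash, zero_bits=32):
    # Approximation derived from the difficulty: only count leading zero bits.
    return block_hash >> (256 - zero_bits) == 0

# A hash with 32 leading zero bits that is still above the target:
h = 2**224 - 1
assert leading_zero_check(h)   # the sloppy check accepts it...
assert not exact_check(h)      # ...but every full node rejects it
```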

Quote
I guess since the pools are operating off 2.5% margin on pps - it magnifies by a factor of 40 P&L impact of orphans.
One thing to consider is that 2.5% PPS is pretty much guaranteed to go bankrupt per the Kelly criterion with a reasonable bankroll.  Computationally bounded rationality can be a problem when your system's assumptions depend on rational self-interest. :P

One place where Bitcoin suffers is that it's very stable and you can keep running old versions. As a result, multi-cryptocurrency pools spend a lot of attention on trashfire altcoins that need constant emergency fixes, and less on Bitcoin. There have been problems with stuff like newer bitcoin needing a compiler no more than 3 years old while some large miners have much older operating systems.  I don't think there is any real fix except increasing the rate of change-- it's a point I've raised in disagreements with those who argue that Bitcoin's conservative approach is safer. To a point that's true, but there is a point where being too slow to introduce changes introduces its own risks.

Hopefully taproot marks the end of a one-time dry spell, and with renewed confidence that clear improvements can be activated without disruption, more will be written.
sr. member
Activity: 438
Merit: 291
June 03, 2021, 03:55:09 PM
gmaxwell - do you think that many pools are reliant on patched bitcoind nodes to maximise their profits (since most are PPS, it is their profit, not the miners', that is impacted by the orphans/empty-blocks trade-off)?

Those 20 pools are arguably the most critical users of bitcoin core. Should it not include every possible optimisation (even if at the expense of SPV) to ensure they run vanilla code? That would then ensure that future soft/hard forks go smoothly, as those 20 users just have to do an upgrade from version 2x -> 2y, rather than having to merge in some old patch and test that it works as expected.

Or is the issue that those patches are proprietary and what gives them their competitive advantage?

I guess since the pools are operating off a 2.5% margin on PPS, it magnifies the P&L impact of orphans by a factor of 40.
staff
Activity: 4158
Merit: 8382
June 03, 2021, 11:09:44 AM
can't stop hashing while it waits for bitcoind to do its verification
Actual validation typically takes under 50 milliseconds or so-- even faster on hardware with SHA-NI-- typically because almost all the validation is cached. During that amount of time you might as well stay on the prior block, because you have a non-zero chance of winning a race on the prior block anyway, and not including transactions will greatly reduce your income.

But the validation isn't the only, or largest, source of delay.  E.g. constructing a block template takes time.  Miners could instead always mine on already-validated tips but include a very suboptimal block (even an empty one) in the initial work they generate on a new block; that would take more work to implement than just monitoring other miners and mining on whatever header they mine on.  The header-following approach also avoids worrying about various other block-propagation DoS vectors.   But it really trashes the security assumptions of SPV pretty badly.

Eventually some miner doing that is going to get an SPV user robbed, and since all the miners are helpfully identifying themselves, they're going to get sued for damages by the robbed user and it's going to turn into a complete circus.  Years ago I talked a major exchange out of attempting to sue miners that mined txns which weren't the same ones the exchange saw first... took some real effort to get them to understand that just because they saw the other spend first, that didn't mean the miner did.

Fortunately, you can opt out of the exposure by running your own full node and keeping it up to date, or by treating high value transactions from untrusted parties as unconfirmed until they have a dozen confirmations or so.
legendary
Activity: 1568
Merit: 6660
bitcoincleanup.com / bitmixlist.org
June 03, 2021, 08:04:38 AM
The issue being how tightly is the pool work generation tied into a full bitcoin daemon that would reject both of these blocks?

Given that pretty much all the pools are using Stratum, this is a question of how often the stratum servers query bitcoind. Mining hardware can't stop hashing while it waits for bitcoind to do its verification, because it doesn't even know when or at what rate that verification is running.

If this next block is valid, the questions follows of what do all these pools do when they see this valid block built on top of an invalid block?

History shows that they just continue on when this happened once before.

If a pool submits a block immediately without doing any verification, then eventually, when a split happens and the pool's chain is outrun, its mining rewards will be invalidated. And if a pool risks verifying before submitting the block, then another pool might submit a block before them and they lose out. So there isn't really a strong benefit for a pool to push either way; naturally, most go for the short-term route of submitting the block as quickly as possible, which means there will always be cases of invalid blocks sneaking into the chain, at least temporarily.
legendary
Activity: 4466
Merit: 1798
Linux since 1997 RedHat 4
June 02, 2021, 07:37:55 PM
Alas there is a bigger issue, that the large majority of the mining hash rate doesn't verify any transactions when they first generate work after a block change.

Since this small window exists, there is the quite reasonable chance of another pool finding a block on top of an invalid block.
If this next block is valid, the question follows of what all these pools do when they see this valid block built on top of an invalid block?

History shows that they just continue on when this happened once before.
The issue being how tightly is the pool work generation tied into a full bitcoin daemon that would reject both of these blocks?
... and unless every large pool just copies code from every other large pool, there will no doubt be many different answers to this question.