I was told by gmax himself that a node that doesn't validate all signatures can still call itself a fully validating node, as long as it fully validates all of the NEW blocks and transactions that it receives. HISTORICAL blocks and the transactions within them are not validated, because they are HISTORICAL and are tens of thousands of blocks deep.
Also, I am making an optimized bitcoin core, and one of those optimizations is rejecting a tx whose contents don't match its txid. The thinking is that if the hashes don't match, there is no point in wasting time verifying the signatures.
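As a rough sketch of that ordering (my own naming, not actual Bitcoin Core code; it assumes the raw transaction bytes and the hash the tx was announced or requested under are already in hand):

    import hashlib

    def sha256d(data: bytes) -> bytes:
        # A txid is the double SHA-256 of the serialized transaction.
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def txid_matches(raw_tx: bytes, expected_txid: bytes) -> bool:
        # Cheap check: recompute the hash of the received bytes and compare
        # it to the hash the transaction was requested/announced under.
        # Only if this passes would the much more expensive signature
        # verification be run.
        return sha256d(raw_tx) == expected_txid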
Not sure why libsecp256k1's speed matters here; it is still much slower to compute than SHA256.
And how are you checking the txids if they are not provided? A tx message can be sent unsolicited with a new transaction, and it does not contain the txid. In fact, there is no network message that I could find that sends a transaction with its txid. Of course, I think it is safe to assume that if a node requested a specific transaction, it would check the hash of the data it received so that it knows whether that data is correct. But for unsolicited transactions, the only way to verify them is to check the signatures.
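To make that solicited/unsolicited distinction concrete, a rough sketch (my own simplification, not how any real node structures this): the txid is always derived from the received bytes themselves, so a solicited tx can be matched against the hash from the earlier getdata, while an unsolicited tx has nothing to match against and has to go through full validation.

    import hashlib

    def sha256d(data: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    requested = set()   # hashes this node has asked peers for via getdata

    def on_getdata_sent(txid: bytes) -> None:
        requested.add(txid)

    def on_tx_message(raw_tx: bytes) -> str:
        txid = sha256d(raw_tx)   # derived from the bytes, never sent separately
        if txid in requested:
            requested.discard(txid)
            return "solicited: the bytes match what was asked for"
        # Unsolicited: the hash by itself proves nothing, so the tx still
        # has to go through full script/signature validation.
        return "unsolicited: run full validation"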
So my point, again, is that all witness data needs to be stored permanently by a full node that RELAYS historical blocks to a bootstrapping node. If we are to lose this, then we might as well make bitcoin PoS, as that is the one weakness of PoS vs PoW. So if you are saying that we need to view bitcoin as fully SPV all the time, with PoS-level security for bootstrapping nodes, OK: with those assumptions lots and lots of space is saved.
No, when bootstrapping, the witness data for historical blocks is not required, because the bootstrapping node doesn't need to validate those historical blocks. See above.
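This is exactly what the BIP 144 inv/getdata type flags allow: a syncing node chooses whether to ask peers for blocks with or without the witness data. A small sketch using the BIP 144 constants (the helper function is just mine):

    # inv/getdata object types; bit 30 is the BIP 144 witness flag.
    MSG_TX            = 1
    MSG_BLOCK         = 2
    MSG_WITNESS_FLAG  = 1 << 30
    MSG_WITNESS_TX    = MSG_TX | MSG_WITNESS_FLAG      # 0x40000001
    MSG_WITNESS_BLOCK = MSG_BLOCK | MSG_WITNESS_FLAG   # 0x40000002

    def block_getdata_type(want_witnesses: bool) -> int:
        # A node that is not going to check historical signatures can ask
        # for the stripped blocks; a fully validating sync asks for the
        # witness serialization instead.
        return MSG_WITNESS_BLOCK if want_witnesses else MSG_BLOCK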
However, with such drastic assumptions I can save (and already have saved) lots more space without adding a giant amount of new protocol and processing.
So this controversy has at least clarified that segwit INCREASES the size of the permanently needed data for a fully validating and relaying node. Of course, for SPV nodes things are much improved, but my discussion is not about SPV nodes.
So the powers that be can call me whatever names they want. I still claim that:
N + 2*numtx + numvins > N
And as such, the claim that segwit is a way to save permanent blockchain space is invalid. Now, the cost of 2*numtx + numvins is not that big, so maybe it is worth paying for all the benefits we get.
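To put a rough number on that term, here is the arithmetic under purely illustrative counts (these are assumptions, not measurements of any real block); if the term is counted in bytes, it works out to about 1% of a 1 MB block:

    # Illustrative only: a roughly full 1 MB block.
    numtx    = 2500                      # assumed number of transactions
    numvins  = 5000                      # assumed total number of inputs
    overhead = 2 * numtx + numvins       # the extra term from the inequality
    print(overhead)                      # 10000
    print(100.0 * overhead / 1_000_000)  # 1.0 (% of a 1 MB block, if bytes)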
However, on the benefit claims: one of them is that the utxo dataset becomes a lot more manageable. This is irrelevant, as that is a local inefficiency that can be optimized without any external effects. I have it down to 4 bytes of RAM per utxo, but I could make it smaller if needed.
It just seems that a lot of unsupported (or plain wrong) claims are being made to justify the segwit softfork. And the most massive change by far is being slipped in as a minor softfork update?
If you are going to run your node continuously from now until the end of time, save all of the data relevant to the blocks and transactions that it receives, and call all of that data "permanent blockchain data", then yes, I think it does require more storage than a simple 2 MB fork.
Since when has anyone ever claimed that segwit is "a way to save permanent blockchain space"?
What I still don't understand is how things will work when a segwit tx is sent to a non-segwit node and that is then spent to another non-segwit node. How will existing wallets deal with that?
Since you keep saying stuff about sending transactions between nodes, I don't think you understand how Bitcoin transactions work. It isn't about sending between things; it is about creating outputs from inputs after proving that the transaction creator can spend those inputs. The inputs of a transaction don't affect its outputs except for the amounts.
A transaction that spends a segwit input can still create p2pkh and p2pk outputs, which are output types that current nodes and wallets already understand. Those p2pkh and p2pk outputs can be spent just like every other p2pkh and p2pk output is now; that will not change. The inputs and scriptSigs that spend from those outputs will be exactly the same as they are today. Segwit doesn't change that.
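To illustrate with the actual output script in question: this is the standard p2pkh scriptPubKey, which stays byte-for-byte the same whether or not the transaction creating it spent segwit inputs (sketch only, using the standard opcode values):

    # Standard p2pkh scriptPubKey: OP_DUP OP_HASH160 <20-byte key hash>
    # OP_EQUALVERIFY OP_CHECKSIG. Segwit does not change this script or the
    # <sig> <pubkey> scriptSig that later spends it.
    OP_DUP, OP_HASH160, OP_EQUALVERIFY, OP_CHECKSIG = 0x76, 0xa9, 0x88, 0xac

    def p2pkh_script(pubkey_hash160: bytes) -> bytes:
        assert len(pubkey_hash160) == 20
        return (bytes([OP_DUP, OP_HASH160, 0x14]) + pubkey_hash160
                + bytes([OP_EQUALVERIFY, OP_CHECKSIG]))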
Rather, segwit spends to a special script called a witness program. This script becomes a p2sh address, another output type that current wallets know about and can pay to.
Segwit wallets would instead always create p2sh addresses, because wrapping witness programs in p2sh is the only way segwit can implement them in a backwards-compatible way. Those p2sh addresses are distributed normally but can only be spent from with a witness program.
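As a concrete example of that wrapping, here is a sketch of the p2sh-wrapped version-0 key-hash case from BIP 141 (it assumes the platform's hashlib provides ripemd160):

    import hashlib

    def hash160(data: bytes) -> bytes:
        # RIPEMD-160 of SHA-256, the usual Bitcoin address hash.
        return hashlib.new("ripemd160", hashlib.sha256(data).digest()).digest()

    def p2sh_p2wpkh(pubkey_hash160: bytes):
        # The witness program (version byte 0x00, then a 20-byte key hash)
        # is used as the p2sh redeemScript...
        redeem_script = bytes([0x00, 0x14]) + pubkey_hash160
        # ...so the output that old wallets see is an ordinary p2sh script:
        # OP_HASH160 <hash160(redeemScript)> OP_EQUAL
        script_pubkey = bytes([0xa9, 0x14]) + hash160(redeem_script) + bytes([0x87])
        return redeem_script, script_pubkey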
What happens if an attacker creates segwit raw transactions and sends them to non-segwit nodes? Are there no attack vectors?
Then the attacker is just sending the owner of an address a bunch of Bitcoin. If it is a bunch of spam outputs, then it can be annoying, but that is something that people can already do today.
What about in zero-conf environments? How does a full relaying node mine a block with segwit inputs? Or do existing full nodes cease to be able to mine blocks after the segwit softfork?
Well firstly, full nodes don't mine blocks.
The data that composes a block is the same data that currently makes up a block. The header is the same. The coinbase transaction just has an extra OP_RETURN output that commits the witness root to the blockchain. The transactions are in the current format. If a block is requested by another node that wants the witness data, then the block is sent with the transactions serialized in the witness serialization format.
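For reference, that OP_RETURN output is specified in BIP 141; a sketch of how the commitment script is built:

    import hashlib

    def sha256d(data: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def witness_commitment_script(witness_merkle_root: bytes,
                                  witness_reserved_value: bytes) -> bytes:
        # BIP 141: the coinbase gets an output whose scriptPubKey is
        # OP_RETURN, a 36-byte push, the 4-byte header 0xaa21a9ed, and
        # SHA256d(witness merkle root || witness reserved value).
        commitment = sha256d(witness_merkle_root + witness_reserved_value)
        return bytes([0x6a, 0x24]) + bytes.fromhex("aa21a9ed") + commitment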
And even a simpleton like me can understand how to increase block sizes with a hardfork, so why not do that before adding massive new changes like segwit? Especially since it is more space efficient and not prone to misunderstandings.
And in the future, what is to say that simpletons will be able to understand segwit? In the future, someone would still be saying that segwit is too complicated and that we should not use it. In the future it will still be a large change, and it will still be prone to misunderstandings. Nothing will change in the future, except that instead of increasing the block size limit from 1 MB to 2 MB, they will be clamoring to increase it from 2 MB to 4 MB. The situation would literally be the same.
If that was you asking in #bitcoin-dev earlier, you need to wait around a bit for an answer on IRC-- I went to answer but the person who asked was gone. BIPs are living documents and will be periodically updated as the functionality evolves. I thought they were currently up to date but haven't checked recently; make sure to look for pull reqs against them that haven't been merged yet.
Yeah, I asked on #bitcoin-core-dev as achow101 (I go by achow101 pretty much everywhere else except here, although I am also achow101 here). I logged off of IRC because I went to sleep; I probably should have asked earlier.
I will look at the BIP pulls and see if there is anything there.
A question that still niggles me is segwit as a soft fork. I know that just dredges up the same old discussion about the pros and cons of soft vs. hard forks, but for a simpleton such as me, if the benefits of segwit are so clear, then compromising on the elegance of the implementation in order to make it a soft fork seems a strange decision.
It was originally proposed as a hard fork, but someone (luke-jr I think) pointed out that it could be done as a soft fork. Soft forks are preferred because they are backwards compatible. In this case, the backwards compatibility is that if you run non-upgraded software, you can continue as you were and have no ill effect. You just won't be able to take advantage of the new functionality provided by segwit.
Alternatively, if this were done as a hard fork, then everyone would be required to upgrade in order to deploy segwit, which would essentially force everyone to use segwit.