
Topic: Segregated witness - The solution to Scalability (short term)? - page 6. (Read 23170 times)

legendary
Activity: 1386
Merit: 1009
Sorry I am slow.  What won't be as efficient?  I guess I don't understand the problem exactly.

https://blog.bitgo.com/malevolent-malleability/  <-- I see here a list of how folks who didn't know what they were doing could make mistakes, and a conclusion: "The general consensus up to this point has been that malleability is an annoying but not critical system-wide problem. "

Anyway, if for some reason you can't figure out a way to actually check that funds have been sent and arrived at the proper address, or to not reference any TXs by number, you can always pay a miner a few extra millies to put your TX, with exactly the bit order you like, into the block just where you want it.

The whitepaper details it a bit more. In layman's terms, tx malleability allows interesting and complex attack vectors on non-confirmed transactions. Since the Lightning network is a caching layer where contracts are made between lightning nodes before confirmations appear, one has to assume all the implications of operating in a hostile environment. Fixing malleability allows decentralized and untrusted parties to cache these txs. In order to cache txs with malleability, one has to have centralized sources of trust, which basically amounts to a Coinbase/Circle model of off-the-chain txs. Centralized off-the-chain solutions do provide a valuable service to our ecosystem but have heavy inherent regulatory, human, and insurance overhead. If bitcoin is to compete with other payment systems and fulfil its true vision, it must eliminate these inefficient and corruptible sources of security.

It seems that's the main reason here, so let's drop Lightning Network

If a clearing-based solution requires so much change to bitcoin's architecture, then it must provide enough benefit to be worth the risk. Currently I don't really see a big difference between the lightning network and traditional clearing solutions, which require no changes to bitcoin at all.

BTW, I just heard that Adam Back said that you need insurance for the lightning network to work properly (https://www.reddit.com/r/bitcoinxt/comments/3wty7s/dr_adam_back_believes_that_insurance_may_be/)

Ok, if the lightning network needs insurance to work, and traditional clearing solutions also work perfectly well given insurance, then why not just use existing, mature clearing-based solutions? I thought the biggest benefit the lightning network has over traditional clearing solutions is that it requires no trust, but it seems that's not the case. If you need insurance to be trustworthy, then there must be some fundamental weakness in the design of the lightning network. I have not looked into the details of that statement, the signal-to-noise ratio there is too low, but it is very natural that LN, like any new system, has many security problems, and only time will tell whether it is a robust design.

In my not-so-deep understanding of LN, they are using a design similar to the NashX exchange's mutually-assured-destruction model to keep it trustless; however, that model does not work well under certain circumstances. That's also the reason those so-called P2P exchanges cannot gain any momentum against localbitcoins: you eventually need an authority to resolve a complex dispute, and the blockchain cannot be that authority since it lacks judgement.

Ok, everyone can go home and sleep, no work needs to be done, bitcoin is perfect, just raise the block size limit to 2MB for the time being  Cheesy

With that kind of attitude you might have just continued to use fiat instead of Bitcoin. But you've managed to comprehend Bitcoin. You can do the same with segwit or LN. It just means you must spend time on that, and refrain from making judgements until you're finished.

Mark Friedenbach: https://www.reddit.com/r/btc/comments/3woin3/to_adam_back_we_are_hereby_officially_requesting/cxzpcpw
Quote
There is absolutely no reason for lightning nodes to require insurance or even reputation. The block chain can be used to settle all disputes, with the non-cooperative party paying fees.
legendary
Activity: 1988
Merit: 1012
Beyond Imagination
Sorry I am slow.  What won't be as efficient?  I guess I don't understand the problem exactly.

https://blog.bitgo.com/malevolent-malleability/  <-- I see here a list of how folks who didn't know what they were doing could make mistakes, and a conclusion: "The general consensus up to this point has been that malleability is an annoying but not critical system-wide problem. "

Anyway, if for some reason you can't figure out a way to actually check that funds have been sent and arrived at the proper address, or to not reference any TXs by number, you can always pay a miner a few extra millies to put your TX, with exactly the bit order you like, into the block just where you want it.

The whitepaper details it a bit more. In layman's terms, tx malleability allows interesting and complex attack vectors on non-confirmed transactions. Since the Lightning network is a caching layer where contracts are made between lightning nodes before confirmations appear, one has to assume all the implications of operating in a hostile environment. Fixing malleability allows decentralized and untrusted parties to cache these txs. In order to cache txs with malleability, one has to have centralized sources of trust, which basically amounts to a Coinbase/Circle model of off-the-chain txs. Centralized off-the-chain solutions do provide a valuable service to our ecosystem but have heavy inherent regulatory, human, and insurance overhead. If bitcoin is to compete with other payment systems and fulfil its true vision, it must eliminate these inefficient and corruptible sources of security.

It seems that's the main reason here, so let's drop Lightning Network

If a clearing-based solution requires so much change to bitcoin's architecture, then it must provide enough benefit to be worth the risk. Currently I don't really see a big difference between the lightning network and traditional clearing solutions, which require no changes to bitcoin at all.

BTW, I just heard that Adam Back said that you need insurance for the lightning network to work properly (https://www.reddit.com/r/bitcoinxt/comments/3wty7s/dr_adam_back_believes_that_insurance_may_be/)

Ok, if the lightning network needs insurance to work, and traditional clearing solutions also work perfectly well given insurance, then why not just use existing, mature clearing-based solutions? I thought the biggest benefit the lightning network has over traditional clearing solutions is that it requires no trust, but it seems that's not the case. If you need insurance to be trustworthy, then there must be some fundamental weakness in the design of the lightning network. I have not looked into the details of that statement, the signal-to-noise ratio there is too low, but it is very natural that LN, like any new system, has many security problems, and only time will tell whether it is a robust design.

In my not-so-deep understanding of LN, they are using a design similar to the NashX exchange's mutually-assured-destruction model to keep it trustless; however, that model does not work well under certain circumstances. That's also the reason those so-called P2P exchanges cannot gain any momentum against localbitcoins: you eventually need an authority to resolve a complex dispute, and the blockchain cannot be that authority since it lacks judgement.

Ok, everyone can go home and sleep, no work needs to be done, bitcoin is perfect, just raise the block size limit to 2MB for the time being  Cheesy


legendary
Activity: 994
Merit: 1035
Sorry I am slow.  What won't be as efficient?  I guess I don't understand the problem exactly.

https://blog.bitgo.com/malevolent-malleability/  <-- I see here a list of how folks who didn't know what they were doing could make mistakes, and a conclusion: "The general consensus up to this point has been that malleability is an annoying but not critical system-wide problem. "

Anyway, if for some reason you can't figure out a way to actually check that funds have been sent and arrived at the proper address, or to not reference any TXs by number, you can always pay a miner a few extra millies to put your TX, with exactly the bit order you like, into the block just where you want it.

The whitepaper details it a bit more. In layman's terms, tx malleability allows interesting and complex attack vectors on non-confirmed transactions. Since the Lightning network is a caching layer where contracts are made between lightning nodes before confirmations appear, one has to assume all the implications of operating in a hostile environment. Fixing malleability allows decentralized and untrusted parties to cache these txs. In order to cache txs with malleability, one has to have centralized sources of trust, which basically amounts to a Coinbase/Circle model of off-the-chain txs. Centralized off-the-chain solutions do provide a valuable service to our ecosystem but have heavy inherent regulatory, human, and insurance overhead. If bitcoin is to compete with other payment systems and fulfil its true vision, it must eliminate these inefficient and corruptible sources of security.
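
To make the txid part of this concrete, here is a toy sketch (my own illustration, not the real Bitcoin serialization): the txid is a double-SHA256 over the entire serialized transaction, scriptSig included, so a relayer who merely re-encodes the signature produces a different txid even though the inputs and outputs are untouched.
Code:
# Toy sketch (not real Bitcoin serialization): the txid covers the scriptSig,
# so re-encoding the signature changes the txid while the payment itself is unchanged.
import hashlib

def txid(serialized_tx: bytes) -> str:
    return hashlib.sha256(hashlib.sha256(serialized_tx).digest()).hexdigest()

body      = b"inputs(outpoints)|outputs|locktime"   # what the payment actually does
scriptsig = b"<sig 3045...> <pubkey>"               # original signature encoding
mutated   = b"<sig 304500...> <pubkey>"             # same signature, re-encoded in transit

print(txid(body + scriptsig))  # the txid the sender expected
print(txid(body + mutated))    # different txid, same spend -- breaks chains of unconfirmed txs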
legendary
Activity: 1066
Merit: 1050
Khazad ai-menu!

I'm not sure that's the case, but of course I could be wrong.  Can't you build it to actually check receipt of funds?  I mean, that should in theory be the important part of making a payment anyway, rather than the payment ID.  Please let me know what I am missing here.



Lightning Network can work just fine with tx malleability but won't be as efficient and cannot scale as long as it remains. This is why solving it is one of the requirements to proceed with the project.

https://blog.bitgo.com/malevolent-malleability/
https://lightning.network/lightning-network-paper-DRAFT-0.5.pdf


Sorry I am slow.  What won't be as efficient?  I guess I don't understand the problem exactly.

https://blog.bitgo.com/malevolent-malleability/  <-- I see here a list of how folks who didn't know what they were doing could make mistakes, and a conclusion: "The general consensus up to this point has been that malleability is an annoying but not critical system-wide problem. "

Anyway, if for some reason you can't figure out a way to actually check that funds have been sent and arrived at the proper address, or to not reference any TXs by number, you can always pay a miner a few extra millies to put your TX, with exactly the bit order you like, into the block just where you want it.





legendary
Activity: 994
Merit: 1035

I'm not sure that's the case, but of course I could be wrong.  Can't you build it to actually check receipt of funds?  I mean, that should in theory be the important part of making a payment anyway, rather than the payment ID.  Please let me know what I am missing here.



Lightning Network can work just fine with tx malleability but won't be as efficient and cannot scale as long as it remains. This is why solving it is one of the requirements to proceed with the project.

https://blog.bitgo.com/malevolent-malleability/
https://lightning.network/lightning-network-paper-DRAFT-0.5.pdf
legendary
Activity: 1066
Merit: 1050
Khazad ai-menu!
1) Malleability of TXID has never been an issue or a problem
For you? Because if we're to implement full Lightning Network and possibly other sophisticated solutions, we need malleability solved.

I'm not sure that's the case, but of course I could be wrong.  Can't you build it to actually check receipt of funds?  I mean, that should in theory be the important part of making a payment anyway, rather than the payment ID.  Please let me know what I am missing here.
 

legendary
Activity: 1386
Merit: 1009
1) Malleability of TXID has never been an issue or a problem
For you? Because if we're to implement full Lightning Network and possibly other sophisticated solutions, we need malleability solved.
hero member
Activity: 644
Merit: 504
Bitcoin replaces central, not commercial, banks

It makes no sense to stop right there. You can't just split a transaction without it losing its integrity, because we need the whole transaction to calculate its txid. With SW we can do that by introducing a new consensus rule (witness merkle hash), and now we will have two hashes representing a transaction: txid and witness hash. The benefit is that only txid is used as a reference in the blockchain, and this way we solve the malleability issue introduced by scriptSig.


Thanks for your reply.  Well certainly we don't want to lose integrity!  That is important emphasis thank you.  

One can split data apart, move it around, stack it in different piles, but if one has thrown out the instructions for how to put it back.. well, one is royally fucked at that point or at least out the original data.

So a couple things come to mind:

1) Malleability of TXID has never been an issue or a problem

2) The fundamental data we need for integrity is still the fundamental data we need for integrity.  Nothing has really changed.  If you come up with a clever way to store data that helps miners be efficient, that's great.  Not that any of them would bother looking here anyway for such ideas.  Nor would they care if pull requests are "approved".  Just grab the code and run it if you like.  

Here? No, but they certainly look up to the Core devs, right or wrong.

"Just grab the code and run it" is pretty much the definition of a miner-enforced soft fork, no?
legendary
Activity: 1066
Merit: 1050
Khazad ai-menu!

It makes no sense to stop right there. You can't just split a transaction without it losing its integrity, because we need the whole transaction to calculate its txid. With SW we can do that by introducing a new consensus rule (witness merkle hash), and now we will have two hashes representing a transaction: txid and witness hash. The benefit is that only txid is used as a reference in the blockchain, and this way we solve the malleability issue introduced by scriptSig.


Thanks for your reply.  Well certainly we don't want to lose integrity!  That is important emphasis thank you.  

One can split data apart, move it around, stack it in different piles, but if one has thrown out the instructions for how to put it back.. well, one is royally fucked at that point or at least out the original data.

So a couple things come to mind:

1) Malleability of TXID has never been an issue or a problem

2) The fundamental data we need for integrity is still the fundamental data we need for integrity.  Nothing has really changed.  If you come up with a clever way to store data that helps miners be efficient, that's great.  Not that any of them would bother looking here anyway for such ideas.  Nor would they care if pull requests are "approved".  Just grab the code and run it if you like.  
    
legendary
Activity: 1386
Merit: 1009
after all, it's not like her copies of the receipts can be of any use, especially if i have to be the one to tell her that one of the receipts has an error.. thus there is no point in her handing the receipts on to anyone else either, as she would need to inform them of issues too and they would then need to check with me, which would get annoying
You are painting it as if you would need to do all this stuff manually. It's all done automatically by the software; you likely won't notice any changes.

You are arguing against a semi-full-node architecture that is better than SPV. Why would you do that? Are you against SPV as well? Do you understand the flaws in its current design? Don't you see how it could get much better with segwit? Do you think we should all run full nodes? Do you think it's realistic?
legendary
Activity: 1386
Merit: 1009
Also a good summary by StephenM347
http://bitcoin.stackexchange.com/a/41771
Quote
Segregated witness splits up transactions into different parts that can be handled separately instead of the single chunk of data as they are now.

Wait a minute.  Let's just stop right there, shall we?  Because I was under the impression that a transaction, and in fact any information at all, could already be divided up into ones and zeros and handled any way we liked.

Isn't a "single chunk of data" actually a bunch of ones and zeros that any client, miner, observer, bot, or individual can do whatever they want with - put in tables, trees, hash them, store them.. whatever?  That is, already?

What am I missing here?  How is this a thing to be discussed or debated?  Go, split your transactions up any way you like.  Nobody is stopping you.

Personally I have some code that only keeps the first 20 base58 digits of btc addresses, dropping the rest, to save space.  I didn't make a post on a forum about it or give a talk at a conference though.  You don't have to ask permission to manipulate public data.
It makes no sense to stop right there. You can't just split a transaction without it losing its integrity, because we need the whole transaction to calculate its txid. With SW we can do that by introducing a new consensus rule (witness merkle hash), and now we will have two hashes representing a transaction: txid and witness hash. The benefit is that only txid is used as a reference in the blockchain, and this way we solve the malleability issue introduced by scriptSig.
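
A minimal sketch of that two-hash idea, assuming the obvious construction (txid over the transaction stripped of witness data, and a second hash that also commits to the signatures):
Code:
# Minimal sketch of the txid / witness-hash split (assumed construction, for illustration).
import hashlib

def dhash(data: bytes) -> str:
    return hashlib.sha256(hashlib.sha256(data).digest()).hexdigest()

tx_without_witness = b"version|inputs(outpoints)|outputs|locktime"
witness            = b"signatures and pubkeys"

txid         = dhash(tx_without_witness)            # used for references in the blockchain
witness_hash = dhash(tx_without_witness + witness)  # committed via the new witness merkle tree

print(txid, witness_hash)
# Re-encoding a signature changes only witness_hash, never txid,
# so other transactions that reference this txid can no longer be broken by malleation.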
legendary
Activity: 4410
Merit: 4766
Also a good summary by StephenM347
http://bitcoin.stackexchange.com/a/41771
Quote
Segregated witness splits up transactions into different parts that can be handled separately instead of the single chunk of data as they are now. Specifically, it takes the digital signatures out of transactions and puts them in a separate merkle tree that has the same structure as the transaction merkle tree. So, if fully implemented, to check that an input legally spends its previous output, you would get the signature from the signature tree, instead of the standard scriptSig field.

These are a few of the benefits of this idea:

  • Since signature data (witness data) is stored outside the transaction (and outside the standard block), it means that that data doesn't have to be counted towards the block size. Pieter Wuille is proposing a 75% discount on space taken up by signature data, meaning that you can fit 4x as much signature data into blocks. This effectively results in a soft fork increase to the block size.
  • Signatures only prove that a transaction is authorized; they don't describe where funds are going or where they came from. So, after they're checked they can be discarded. Putting the signatures in a separate data structure makes it much easier to prune that data, which results in much less blockchain data needing to be stored on your hard drive.

However, this doesn't increase the block size outright; it just increases the amount of signature data that a block can store. Since transactions are roughly 60% made up of signature data, this is still a pretty big gain.

1. imagine you have a pair of pants (fullnode) and in the pockets (chain) you carry receipts for all your spending..
your girlfriend (segwit) loves wearing skinny jeans and hates filling her pockets up.. she now wants you to keep the transaction part of the receipt in your left pocket and the part with the shop's logo in your right pocket.. the pants still weigh the same, as the total paper equals the same amount, but she wants you to only think about the left pocket. and although you pretend the right pocket doesn't exist, you know it's still weighing you down..

the only benefit of splitting up the receipts is so that your girlfriend (segwit) can quickly grab receipts from your left pocket and read data without having to see useless logos or "thank you for shopping with us" footers. she can't tell where you shopped or if the receipt is real.. but she can see that money has been spent and naively does the accounts trusting you.

2. the girlfriend can photocopy (relay) the receipts and she can bin the logo part of the receipt.. but the guy in the pants (fullnode) will still need the right pocket of store logos (signatures), because he needs to be a seeder for anyone else who may ask for the receipts, so they can do proper checks that the receipts came from proper and real shops..

in short, splitting the receipts does not benefit fullnodes and only benefits lazy lite clients, which can easily split the chain themselves by deciding what details they don't want to save to file.

yes, some say this means the left pocket is only 40% full, but with each new receipt part goes into the left and part goes into the right.. meaning the pants still get just as heavy as they would if they just made bigger pants.. instead of messing with the pockets.

so now the guy not only has to cut up the transactions to put into each pocket, and not only give the girlfriend the 40% half-copy of the receipt so it can fit into her smaller skinny jeans.. but if she questions where the receipt comes from, she questions the guy and he then has to check the other pocket, read which store it came from and tell her if it's legit, causing more arguments (bandwidth bloat)

and also, on the shop-logo part of the receipt (signature), the guy has to write in the receipt number and timestamp (hash) so that he can link the parts together (data bloat, by having extra data on 2 parts instead of one piece of data on one part)

i'd much prefer a girlfriend who didn't ask me to re-arrange my pants; a girlfriend who simply didn't wear any pants at all and just asked me about a particular receipt when she needs to know it.. rather than asking for every receipt from the last 7 years cut up and handed to her, when 99.99% of the time she won't need to look at them, as only 0.01% of the receipts have any relevance to her.

after all, it's not like her copies of the receipts can be of any use, especially if i have to be the one to tell her that one of the receipts has an error.. thus there is no point in her handing the receipts on to anyone else either, as she would need to inform them of issues too and they would then need to check with me, which would get annoying
hero member
Activity: 718
Merit: 545
I think SegWit brings up an interesting point.

Namely - What data is essential to Bitcoin ?

What do we need to store and what can we throw away?

Which bits matter for the security and integrity of the system ?

..

I'm of the opinion that the inputs and outputs could also be put into an external disposable merkle tree (maybe the same one..).

The only bits that matter are the block headers, to show the POW, and the UTXO set. (this would probably aid anonymity as well)

It would of course require storing the UTXO set in some fashion (the root hash of that tree would be mined into the block headers), but certainly not beyond the capabilities of 'bitcoin developers'..  Wink

Those are the only bits that matter IMHO and then we could seriously talk about shrinking bitcoin 100 fold (in storage requirements).
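
For illustration, a toy sketch of what committing to the UTXO set could look like: a plain Merkle root over serialized UTXOs that a block header could carry (purely my own illustration, not a concrete proposal spec).
Code:
# Toy sketch: a Merkle root over the UTXO set, which a block header could commit to.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(a + b) for a, b in zip(level[::2], level[1::2])]
    return level[0].hex()

utxos = [b"txid1:0|1.5BTC|scriptPubKey1", b"txid2:1|0.3BTC|scriptPubKey2"]
print(merkle_root(utxos))  # a headers-plus-UTXO node could check its set against this commitment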
legendary
Activity: 1066
Merit: 1050
Khazad ai-menu!
Also a good summary by StephenM347
http://bitcoin.stackexchange.com/a/41771
Quote
Segregated witness splits up transactions into different parts that can be handled separately instead of the single chunk of data as they are now.

Wait a minute.  Let's just stop right there, shall we?  Because I was under the impression that a transaction, and in fact any information at all, could already be divided up into ones and zeros and handled any way we liked.

Isn't a "single chunk of data" actually a bunch of ones and zeros that any client, miner, observer, bot, or individual can do whatever they want with - put in tables, trees, hash them, store them.. whatever?  That is, already?

What am I missing here?  How is this a thing to be discussed or debated?  Go, split your transactions up any way you like.  Nobody is stopping you.

Personally I have some code that only keeps the first 20 base58 digits of btc addresses, dropping the rest, to save space.  I didn't make a post on a forum about it or give a talk at a conference though.  You don't have to ask permission to manipulate public data.
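
(For what it's worth, that trick really is a one-liner on your own copy of the data; the sketch below just shows that once the tail is dropped, nothing in what you kept lets you rebuild the full address.)
Code:
# One-liner sketch of the lossy trick described above: keep only the first 20
# base58 characters of an address. Saves space, but the full address is gone for good.
addr = "1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2"
short = addr[:20]
print(short)                                        # truncated form kept on disk
print(len(addr) - len(short), "characters irrecoverably dropped")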

legendary
Activity: 1386
Merit: 1009
Also a good summary by StephenM347
http://bitcoin.stackexchange.com/a/41771
Quote
Segregated witness splits up transactions into different parts that can be handled separately instead of the single chunk of data as they are now. Specifically, it takes the digital signatures out of transactions and puts them in a separate merkle tree that has the same structure as the transaction merkle tree. So, if fully implemented, to check that an input legally spends its previous output, you would get the signature from the signature tree, instead of the standard scriptSig field.

These are a few of the benefits of this idea:

  • Since signature data (witness data) is stored outside the transaction (and outside the standard block), it means that that data doesn't have to be counted towards the block size. Pieter Wuille is proposing a 75% discount on space taken up by signature data, meaning that you can fit 4x as much signature data into blocks. This effectively results in a soft fork increase to the block size.
  • Completely solves malleability issues. Using transactions with the signature data outside the transaction means that TXIDs don't hash the signature data, which means that they're not malleable (assuming you're using the standard SIGHASH flag). Technically, the signatures are still malleable, it's just that modifying them doesn't invalidate chains of transactions because the signatures don't sign the modifiable parts.
  • Allows for a slow upgrade. Software has to opt in to using Segregated Witness after it has been fully deployed to the network, but in the meantime (and afterward) transactions can still be made as usual without segregated witness.
  • All future Script updates become soft forks. When segregated witness gets fully implemented, it will have a version byte in outputs for what version of Script it is using. And the behaviour for clients that see a script with a non recognized version number is that they treat it as an 'anyone can spend' output.
  • Signatures only prove that a transaction is authorized; they don't describe where funds are going or where they came from. So, after they're checked they can be discarded. Putting the signatures in a separate data structure makes it much easier to prune that data, which results in much less blockchain data needing to be stored on your hard drive.

However, this doesn't increase the block size outright; it just increases the amount of signature data that a block can store. Since transactions are roughly 60% made up of signature data, this is still a pretty big gain.
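
A back-of-the-envelope sketch of what that 75% discount buys, assuming the simple accounting counted_size = base_size + witness_size/4 and the roughly 60% signature share mentioned above:
Code:
# Back-of-the-envelope: how much total transaction data fits when witness bytes
# are discounted 75%? Assumes counted_size = base + witness/4 and ~60% witness share.
LIMIT = 1_000_000          # bytes counted against the block size limit
WITNESS_SHARE = 0.60       # rough share of a typical transaction that is signature data

def effective_capacity(witness_share: float, discount: float = 0.75) -> float:
    cost_per_byte = (1 - witness_share) + witness_share * (1 - discount)
    return LIMIT / cost_per_byte

print(round(effective_capacity(WITNESS_SHARE)))  # ~1,800,000 bytes of raw transaction data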
legendary
Activity: 994
Merit: 1035
https://www.youtube.com/watch?v=NOYNZB5BCHM

Pieter Wuille: Segregated witness and its impact on scalability, at the SF meetup, with more time for questions and clarifications.
legendary
Activity: 1988
Merit: 1012
Beyond Imagination

I remember one of the core devs said a few months ago that we should wait until the blocks become full, see how the situation develops, and then start to apply measures accordingly; I still think this is a good approach. People won't die because the banks are closed during the weekend; similarly, if the blocks become too congested, they will just reduce their transaction frequency and plan their transactions accordingly.


Yes, they can use Litecoin, Viacoin, Dogecoin, Monero, instead of Bitcoin, where the stream is blocked.
This seems to be a great strategy of the core devs. The Altcoiners should applaud it, and they do.

When your banks are closed during the weekend, do you use Chinese RMB because their banks work during the weekend? If you are afraid of being overtaken by alt-coins, get some alt-coins, just in case  Wink

As statistics show, most users don't spend their bitcoin, simply because they will spend depreciating fiat money first and hold bitcoin to protect themselves from inflation. If you have $4000 and 10 bitcoin, which one do you spend first?

So, given that people mostly purchase bitcoin for long-term saving, they will purchase once in a while and should not be very sensitive to transaction frequency and fees. Also, today there are more and more real-time mobile payment systems that charge users zero fees for instant transactions; it is a waste of time trying to scale bitcoin to get close to the speed and cost of today's centralized solutions.

Maybe there is real pressure from pools, exchanges and wallet service providers. But these organizations, being centralized, should pursue clearing-based solutions to solve their problems, instead of pressuring the core devs to make the blockchain serve them: Bitcoin is not designed to serve institutions but person-to-person use.

Let's recheck Gavin's quote here:
"Segregated witness transactions won’t help with the current scaling bottleneck, which is how long it takes a one-megabyte 'block’ message to propagate across the network– they will take just as much bandwidth as before. There are several projects in progress to try to fix that problem (IBLTs, weak blocks, thin blocks, a “blocktorrent” protocol) and one that is already deployed and making one megabyte block propagation much faster than it would otherwise be (Matt Corallo’s fast relay network)."

So, a solution that changes the nodes' architecture and reduces their security, but does not help to reduce the major bottleneck: why rush with that? It is a smart way of thinking, but the code changes involved and the potential security risk just make it less attractive than simply raising the block size to 2MB to deal with the current block size limitation. In fact it is a very long-term solution that totally changes the bitcoin architecture, and thus requires much more time and effort to test.




legendary
Activity: 4410
Merit: 4766
This is rather far from reality. How do you think block relay works? You don't simply send a full block to everyone. Instead, to every connected node you send a short 'inv' (inventory) message, containing a hash of a block/tx you've just received (and verified). Then, these nodes can request this data from you, if they don't already have it. The point is that you don't know if nodes already have this block, and in order for a block to reach every node, you need this small overhead, you need to tell everyone that you have it.

The same goes for alerts. In order for everyone to know about fraud you have to relay it to every node that wants it. I guess it could also be done by first notifying nodes with a short inv message. This way the overhead would be comparable to block propagation.


i understand that, i was trying to keep things simple.. rather than waffling.
EG
as you say
1.fullnodeA connect to node fullnode1
2.fullnodeA sends "i have upto blockheight 400,000"
3.fullnode1 only has 399,999, so requests 400,000 from fullnodeA, fullnodeA sends 400,000

now using (1) in previous image where X is 400,001

4.dodgy miner broadcast it has 400,001 height
5.FullnodeA is only at 400,000 so asks for 400,001 from miner and dodgy miner sends 400,001
6.FullnodeA sees rules broken. deletes it and listens for anyone with a valid 400,001 while broadcasting it only has 400,000 still
7.Fullnode1 only has 400,000 and is patiently listening to anyone with a valid 400,001

translate to layman
1.miner sends 400,001 , fullnodeA sees rules broken. deletes it and wont relay it..

sorry, but i was not trying to waffle using 7 lines of jargon. just to make a 1 line point..
i didn't want to have to explain how they handshake and resync the chains.. it's irrelevant to SWLite checks, i just wanted to point out that fullnodes, when seeing a dodgy block, won't relay it.

i always try to keep things layman-simplified.. as then the general public can get their minds around the concept without waffle.

when talking to people about computer hardware.. i don't say RAM or hard drive, i say short-term memory and long-term memory; the web camera is the eyes, the speaker is the mouth, the microphone is the ears,

where some would say
a CMOS webcam is: 640 x 480
a HDwebcam is: 1920 x 1080

i say
a CMOS webcam are: eyes with cataracts
a HDwebcam is: perfect 20:20 vision

i'm sorry that my explanations are not whitepaper material.. but this is not bitcoin-dev. i'm glad, though, that the only thing you can nitpick is my lack of technical jargon and waffle

but as i show in (3), when SWLiteA checks if X block exists.. i'm not talking about the standard resync protocol.. as SWLite will need to do more than just check blockheight..

eg
fullnodeA receives 400,001 hash: h3hdkdfksdlfksjlkfj checks and finds rule is broken, deletes it to say it only has 400,000
fullnodeA receives 400,001 hash: ldpffdp4989df988i from different miner, checks and finds rule is good, stores it and now has 400,001

if i was to confuse people by talking about standard resyncing relationship. they would not understand me saying
SWLiteA asks does fullnodeA have 400,001
as eventually the answer would be yes..

my image was not about resyncing. (checking who has height) it was about a separate check
SWLiteA asks does fullnodeA have 400,001 hash: h3hdkdfksdlfksjlkfj
nope
which, if we just used the resyncing method you alluded to, would be
SWLiteA asks does fullnodeA have 400,001
yes
so again, i'm sorry i was too simplistic.. and not talking about separate checks and hashes and txindexes etc.. but "does X exist" was much simpler to say

i just didn't bother to explain all the variables SWLite would need to check.

and that is why i didn't want to confuse people by even mentioning the standard resyncing part, and treated it as a different call, purely for easy understanding's sake

legendary
Activity: 1386
Merit: 1009
If implemented correctly, there would be little to no bandwidth bloat. It means it would be runnable by low-end hardware, while achieving security higher than current SPV. Those demanding more security would still run full nodes.

my theory but from different angle
https://i.imgur.com/JOBRymb.jpg

(1) this is what the fullnode blockchain does.. checks the rules, if violation, drop it dont relay it.. dont alert anyone.. it didnt exist
now
(2) is segwit. it DOES NOT check data at first. but waits for an alert. as you can see SWLiteA alerts the next Swlite1, even though it has confirmed its duff.. the alerts keep happening..SWLite1 alerts SWLiteX who alerts SWLite& and so on and so on endlessly. look at all the extra queries each client is making.

(3) is segwit. that DOES check data at first. as you can see SWLiteA drops the data and doesnt relay.. now the rest of the network does not need to worry or check
This is rather far from reality. How do you think block relay works? You don't simply send a full block to everyone. Instead, to every connected node you send a short 'inv' (inventory) message, containing a hash of a block/tx you've just received (and verified). Then, these nodes can request this data from you, if they don't already have it. The point is that you don't know if nodes already have this block, and in order for a block to reach every node, you need this small overhead, you need to tell everyone that you have it.

The same goes for alerts. In order for everyone to know about fraud you have to relay it to every node that wants it. I guess it could also be done by first notifying nodes with a short inv message. This way the overhead would be comparable to block propagation.

Anyway this is theoretical, I don't know how this will be implemented, there's no formal protocol.
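
A minimal sketch of that inv/getdata flow (the message names are from the Bitcoin P2P protocol; the node logic here is my own simplification):
Code:
# Minimal sketch of inv-based relay: announce a hash, send the block only to peers
# that request it, and never announce anything that failed validation.
class Node:
    def __init__(self, name):
        self.name, self.blocks, self.peers = name, {}, []

    def receive_block(self, block_hash, block):
        if block_hash in self.blocks or not self.validate(block):
            return                              # already known or invalid: drop, don't relay
        self.blocks[block_hash] = block
        for peer in self.peers:
            peer.on_inv(self, block_hash)       # short 'inv' announcement only

    def on_inv(self, sender, block_hash):
        if block_hash not in self.blocks:       # 'getdata': fetch only what we lack
            self.receive_block(block_hash, sender.blocks[block_hash])

    def validate(self, block):
        return block.get("valid", False)        # stand-in for the full consensus checks

a, b = Node("A"), Node("B")
a.peers, b.peers = [b], [a]
a.receive_block("h1", {"valid": True})   # relayed to B via inv/getdata
a.receive_block("h2", {"valid": False})  # dropped by A; B never even hears about it
print(sorted(b.blocks))                  # ['h1']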

With the 'low end' that this enables not even saving half compared to a full node, I rather doubt that many would opt for this insecure near-full-node. (Thank goodness.)
I don't know how you estimated all this, but nevermind.
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
If implemented correctly,

One would hope.

Quote
there would be little to no bandwidth bloat.

Unsupported assertion. Could you make a proper defense of this?

Quote
It means it would be runnable by low-end hardware,

With the 'low end' that this enables not even saving half compared to a full node, I rather doubt that many would opt for this insecure near-full-node. (Thank goodness.)

Quote
while achieving security higher than current SPV.

But still insecure.

Quote
Those demanding more security would still run full nodes.

This I can agree with.