
Topic: So who the hell is still supporting BU? - page 8. (Read 29824 times)

legendary
Activity: 1092
Merit: 1000
February 20, 2017, 06:12:23 AM
@iCEBREAKER

You know what is funny: you actually think you are relevant.

You're not, but it is funny you think so.   Cheesy

If a side product can't keep up with innovation, that is on the devs of the side product.

Electrum can update or get left behind; it is that simple.
You trying to say they are not able to get their code working is really their problem,
not BTC Core and not BU, only Electrum.

There were a lot of side products that broke with every Microsoft OS release.
Vendors either update & adapt or die; their choice.

Electrum is no different.
Thinking you are going to hold back an entire network because 1 vendor cannot make their product work with it is beyond stupid.


 Cool

The only thing irrelevant here is BUnlimistas and your nonsense.

BU has already been proven a recipe for disaster. Everyone that's actually relevant in Bitcoin circles, including Nick Szabo, who knows more about all of this than you ever will, is already saying we are wasting time by not activating segwit and then scaling up with LN to compete with VISA.

You guys need to get wiped. You fell victim to the Roger propaganda machine and now you are contributing to stagnating Bitcoin development. Good job.


So you are saying you are Nick Szabo's girlfriend.  Cheesy

No offense to Szabo, but if he were all you claim, we would be talking about his attempt (Bitgold) and not Bitcoin.

Hate to break it to you, but I came to my conclusions about segwit & LN all on my own;
that's what happens when you can think for yourself and are not someone else's puppet.

Segwit will not be adopted on BTC or LTC,
the reason being the miners don't care what you or Szabo think either.
Cheesy

 Cool
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
February 20, 2017, 05:19:47 AM
By the way, as far as I can understand the bitcoind code as I read it (and no doubt I will be corrected if I'm wrong, which is fine), this is an attack vector for which "choosing between them" is the least of your problems, because bitcoind cannot process both blocks concurrently and then notice that (2) has finished long before it has finished processing (1). This means that if block (1) arrives even 1µs before block (2), bitcoind will sit there processing it until it has finished before it can process (2). While this is purely a limitation of the software as it currently stands, in that it cannot process multiple blocks concurrently in a multithreaded fashion due to the coarse-grained locking in the software, it doesn't change the fact that there is no way to deal with this at present. It would require a drastic rewrite of the existing code to make the locking fine-grained enough to do this, or a new piece of software written from the ground up; both of which carry their own risks.

Couldn't this issue be worked around by pre-filtering the traffic coming into the bitcoin daemon? "Bad" transaction detection would need to be at the protocol level. The simplest fix would be rejecting transactions over a certain size. Of course that's imperfect, but the filtering could become more fine-grained and accurate over time. It might even be possible to do this with firewall rules?
We're talking about a block and the transactions it contains, not a simple broadcast transaction, and we don't want to start filtering possibly valid blocks...
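
To make the locking point concrete, here is a minimal sketch (Python, with made-up validation times; cs_main is bitcoind's real global lock, but everything else here is hypothetical) of how a coarse lock lets a slow block that arrives first starve a fast one, while concurrent validation would let the fast block win:

Code:
import threading, time

chain_lock = threading.Lock()          # stands in for bitcoind's cs_main
accepted = []

def validate_serial(name, seconds):
    with chain_lock:                   # only one block validates at a time
        time.sleep(seconds)            # pretend this is script/sig checking
        accepted.append(name)

def validate_concurrent(name, seconds, winner):
    time.sleep(seconds)                # no shared lock: both run in parallel
    winner.setdefault("first", name)   # first to finish extends the tip

# Serial (current behaviour): the slow block arrives slightly earlier and wins.
t1 = threading.Thread(target=validate_serial, args=("slow-block", 2.0))
t2 = threading.Thread(target=validate_serial, args=("fast-block", 0.1))
t1.start(); time.sleep(0.01); t2.start()
t1.join(); t2.join()
print("serial order:", accepted)       # ['slow-block', 'fast-block']

# Concurrent (hypothetical fine-grained locking): the fast block finishes first.
winner = {}
t3 = threading.Thread(target=validate_concurrent, args=("slow-block", 2.0, winner))
t4 = threading.Thread(target=validate_concurrent, args=("fast-block", 0.1, winner))
t3.start(); time.sleep(0.01); t4.start()
t3.join(); t4.join()
print("concurrent winner:", winner["first"])   # 'fast-block'

Pre-filtering by transaction or block size, as suggested above, would only narrow the window; the serialization itself is the bottleneck.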
hero member
Activity: 686
Merit: 504
February 20, 2017, 04:29:00 AM
Riddle me this: built off a parent of the same block height, a miner is presented -- at roughly the same time:
1) an aberrant block that takes an inordinate amount of time (e.g.,  10 minutes) to verify but is otherwise valid;
2) a 'normal' valid block that does not take an inordinate amount of time to verify; and
By the way, as far as I can understand the bitcoind code as I read it (and no doubt I will be corrected if I'm wrong, which is fine), this is an attack vector for which "choosing between them" is the least of your problems, because bitcoind cannot process both blocks concurrently and then notice that (2) has finished long before it has finished processing (1). This means that if block (1) arrives even 1µs before block (2), bitcoind will sit there processing it until it has finished before it can process (2). While this is purely a limitation of the software as it currently stands, in that it cannot process multiple blocks concurrently in a multithreaded fashion due to the coarse-grained locking in the software, it doesn't change the fact that there is no way to deal with this at present. It would require a drastic rewrite of the existing code to make the locking fine-grained enough to do this, or a new piece of software written from the ground up; both of which carry their own risks.

Couldn't this issue be worked around by pre-filtering the traffic coming into the bitcoin daemon? "Bad" transaction detection would need to be at the protocol level. The simplest fix would be rejecting transactions over a certain size. Of course that's imperfect, but the filtering could become more fine-grained and accurate over time. It might even be possible to do this with firewall rules?
full member
Activity: 322
Merit: 151
They're tactical
February 19, 2017, 01:40:05 PM
For me the only real issue I would have with LN is marking the coins as locked, which is misleading: a lock should mean "NAK", nothing going on with those bitcoins, whereas here they are really locked away in a parallel system, and they should be marked as such on the main chain.

That would clarify things, and maybe it could mitigate the fractional-reserve issue, or at least make it easier to detect, because the bitcoins' value would be marked as undetermined on the blockchain for as long as they are being used on LN; the up-to-date state could then be fetched from LN if needed, or left as undetermined, instead of marking them as locked and "NAK". Or make the lock explicitly a full exclusive lock instead of just an exclusive write lock.

Because as it stands, the BTC are marked as locked whereas in reality they are still being used. It starts to look like banks doing things with your money without saying so: the funds are supposed to be locked in a bank account, but in fact they are not locked at all Shocked
legendary
Activity: 4214
Merit: 4458
February 19, 2017, 12:48:46 PM
Yes, segwit isn't needed for payment channels, but segwit adds a lot of cool stuff, including Schnorr, which can make Bitcoin more private. The positives of segwit outweigh the cons; everyone knows this, it's 2017.

Seems like you're reading from a script.

Schnorr doesn't even add any benefit to LN either.
LN uses multisig,
so it still requires 2 signatures, not a single Schnorr sig.

So it's not going to benefit LN.

Yes, for other transactions where someone has many unspents of one address to spend, it can reduce the list of signatures, because one sig is proof of ownership of all the unspents of the same public address instead of signing each unspent. But it's not needed for LN.

Plus, people wanting to spam the network are not going to use Schnorr with their dust inputs; they will stick with native keys to spam their txs. So even Schnorr is not a 100% spam fix. Again, an empty gesture.
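
For readers who want to see the aggregation claim concretely: below is a toy sketch in Python of how several Schnorr signatures over the same message can collapse into one. It uses a tiny mod-p group, not the real secp256k1 parameters, and the naive key aggregation shown has no protection against rogue-key attacks, so treat it as an illustration of the algebra only.

Code:
# Toy Schnorr aggregation over a small prime-order subgroup (p = 2q + 1).
# Real Bitcoin proposals use secp256k1 plus a rogue-key-resistant scheme
# (e.g. MuSig); this naive version is neither, and is illustration only.
import hashlib, secrets

p, q, g = 467, 233, 4                        # g generates the order-q subgroup

def challenge(R, msg):                       # e = H(R || msg) mod q
    data = f"{R}|{msg}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1         # private key
    return x, pow(g, x, p)                   # public key y = g^x

msg = "2-in 2-out channel tx"
keys = [keygen() for _ in range(3)]          # three co-signers

# Round 1: each signer picks a nonce k_i; aggregate the nonce points R_i.
nonces = [secrets.randbelow(q - 1) + 1 for _ in keys]
R_agg = 1
for k in nonces:
    R_agg = (R_agg * pow(g, k, p)) % p

# Round 2: common challenge, partial signatures s_i = k_i + e*x_i mod q.
e = challenge(R_agg, msg)
s_agg = sum(k + e * x for k, (x, _) in zip(nonces, keys)) % q

# One aggregate signature (R_agg, s_agg) verifies against all keys at once.
y_agg = 1
for _, y in keys:
    y_agg = (y_agg * y) % p
assert pow(g, s_agg, p) == (R_agg * pow(y_agg, e, p)) % p
print("one aggregate sig verified for", len(keys), "signers")

Note that this is cross-signer aggregation; the point above stands that a 2-of-2 LN channel still needs both parties to sign off on each state, whatever the signature scheme.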
legendary
Activity: 868
Merit: 1004
February 19, 2017, 12:17:59 PM
activating segwit and then scaling up with LN to compete with VISA.

Segwit solves nothing, because people wanting to spam and run double-spend scams will just stick to native keys, meaning segwit is just a gesture/empty promise, not a 100% fix.

LN can be designed without segwit.
After all, it's just a 2-in 2-out tx, which is not a problem of sigops and not an issue of malleability.
LN functions using dual signing, so it's impossible for 1 person to malleate: the second person needs to sign it, can easily check whether it was malleated before signing, and can refuse to sign if someone malleates.
And the person malleating cannot double spend by making a second tx, because again it's a dual signature. So again, LN doesn't need segwit.


Nothing is stopping a well-coded LN from being made. We just don't want devs to think a commercial hub LN service is the end goal of Bitcoin scaling; it should just be a voluntary side service, much like using Coinbase or BitGo.

Yes, segwit isn't needed for payment channels, but segwit adds a lot of cool stuff, including Schnorr, which can make Bitcoin more private. The positives of segwit outweigh the cons; everyone knows this, it's 2017.

Too sad that, again, in the SW-supporting forums there is only fear of having an open discussion

https://www.reddit.com/r/btc/comments/5ut05w/why_im_against_bu/ddxiool/

That way I see no chance of getting along with it (SW) in an open world with an open blockchain on top of an open internet.


This behaviour simply makes me keep my distance from SW and activates critical thinking - luckily that still works.

 Wink


A nice post by that guy. Funny to see jstolf, the resident PhD troll, struggling to keep up with the conversation.
hv_
legendary
Activity: 2506
Merit: 1055
Clean Code and Scale
February 19, 2017, 12:04:56 PM
Too sad that, again, in the SW-supporting forums there is only fear of having an open discussion

https://www.reddit.com/r/btc/comments/5ut05w/why_im_against_bu/ddxiool/

That way I see no chance of getting along with it (SW) in an open world with an open blockchain on top of an open internet.


This behaviour simply makes me keep my distance from SW and activates critical thinking - luckily that still works.

 Wink
legendary
Activity: 4214
Merit: 4458
February 19, 2017, 11:53:52 AM
activating segwit and then scaling up with LN to compete with VISA.

Segwit solves nothing, because people wanting to spam and run double-spend scams will just stick to native keys, meaning segwit is just a gesture/empty promise, not a 100% fix.

LN can be designed without segwit.
After all, it's just a 2-in 2-out tx, which is not a problem of sigops and not an issue of malleability.
LN functions using dual signing, so it's impossible for 1 person to malleate: the second person needs to sign it, can easily check whether it was malleated before signing, and can refuse to sign if someone malleates.
And the person malleating cannot double spend by making a second tx, because again it's a dual signature. So again, LN doesn't need segwit.


Nothing is stopping a well-coded LN from being made. We just don't want devs to think a commercial hub LN service is the end goal of Bitcoin scaling; it should just be a voluntary side service, much like using Coinbase or BitGo.
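
To illustrate the dual-signing argument, here is a minimal Python sketch of a 2-of-2 channel. A dict stands in for a real transaction and HMACs stand in for ECDSA/Schnorr signatures, so none of this is the actual LN protocol; it only shows why a unilaterally altered state never collects the second signature.

Code:
import hmac, hashlib, json

def sign(key: bytes, state: dict) -> str:
    blob = json.dumps(state, sort_keys=True).encode()
    return hmac.new(key, blob, hashlib.sha256).hexdigest()

class Channel:
    def __init__(self, key_a, key_b, fund_a, fund_b):
        self.keys = (key_a, key_b)
        self.state = {"seq": 0, "a": fund_a, "b": fund_b}

    def update(self, new_a, new_b, sig_a, sig_b):
        proposed = {"seq": self.state["seq"] + 1, "a": new_a, "b": new_b}
        if new_a + new_b != self.state["a"] + self.state["b"]:
            raise ValueError("funds created or destroyed")
        # Dual signing: each side signs the exact state it inspected, so a
        # mutated (malleated) state simply never gets the second signature.
        for key, sig in zip(self.keys, (sig_a, sig_b)):
            if not hmac.compare_digest(sign(key, proposed), sig):
                raise ValueError("missing or invalid counterparty signature")
        self.state = proposed

    def close(self):
        return dict(self.state)              # one net settlement hits the chain

alice, bob = b"alice-key", b"bob-key"
ch = Channel(alice, bob, fund_a=5, fund_b=5)
for a, b in [(4, 6), (3, 7), (6, 4)]:        # many off-chain updates
    proposed = {"seq": ch.state["seq"] + 1, "a": a, "b": b}
    ch.update(a, b, sign(alice, proposed), sign(bob, proposed))
print("settled on-chain once:", ch.close())  # {'seq': 3, 'a': 6, 'b': 4}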
legendary
Activity: 868
Merit: 1004
February 19, 2017, 11:23:50 AM
@iCEBREAKER

You know what is funny: you actually think you are relevant.

You're not, but it is funny you think so.   Cheesy

If a side product can't keep up with innovation, that is on the devs of the side product.

Electrum can update or get left behind; it is that simple.
You trying to say they are not able to get their code working is really their problem,
not BTC Core and not BU, only Electrum.

There were a lot of side products that broke with every Microsoft OS release.
Vendors either update & adapt or die; their choice.

Electrum is no different.
Thinking you are going to hold back an entire network because 1 vendor cannot make their product work with it is beyond stupid.


 Cool

The only thing irrelevant here is BUnlimistas and your nonsense.

BU has already been proven a recipe for disaster. Everyone that's actually relevant in Bitcoin circles, including Nick Szabo, who knows more about all of this than you ever will, is already saying we are wasting time by not activating segwit and then scaling up with LN to compete with VISA.

You guys need to get wiped. You fell victim to the Roger propaganda machine and now you are contributing to stagnating Bitcoin development. Good job.
full member
Activity: 322
Merit: 151
They're tactical
February 19, 2017, 09:31:23 AM
Yeah, I think LN is a good idea overall and a reasonable trade-off that can have its advantages, but it has to be taken for what it is too, and not presented as if it has no impact compared to a regular bitcoin transaction; it doesn't give the same security and scrutiny as the global blockchain with its proof of work.

And it doesn't have the mechanisms to make it as transparent and reliable as a cache.

It isn't even really supposed to act as a cache at all.

And even if it were, it's far from being so simple that using a cache or not makes no difference, as in the HD example.

Using a cache in a concurrent system is full of subtle problems. One sure thing is that it's more complicated than a 3-tweet issue. And even so, the effective gain depends entirely on good management of data temporality; otherwise it's either useless or unsafe.

The effect is not as dramatic as with browser caches & internet data, because supposedly the people owning the keys to the BTC locked on LN already have more or less exclusive access to them, but if several independent users were potentially using those same keys, it would make a difference.



legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
February 19, 2017, 08:57:51 AM
saying LN is a caching solution is like saying it would be normal for a browser to lock down a picture for the whole internet because it needs to use it locally.

[tl;dr ignored]

Payment channels only lock down as many Bitcoins as the participants see fit to lock down.  The rest of the 21M coins may keep shuffling around without restriction.

A static .gif may be duplicated across the edge of CDNs (caching proxies) as needed; unlike e-cash, there is no double-spending problem for cat memes.   Grin

If random small writes are literally the work that kills NAND the most, why do you think random small writes are good for blockchains?

Why not have a distributed layer of ad hoc write caches consolidating and optimizing blockchain commits, especially when the ROI is a TPS increase from ~12tps to basically infinity tps?   Huh

If you insist on Bitcoin competing with commercial banking (i.e. Visa/PayPal/ACH/SEPA) and absolutely must use it for Starbucks lattes, payment channels are the only way to get there from here.

Unlimite_ is vaporware; Core has working code ready to start laying the foundation for scaling Bitcoin to high-powered super-money.

If segwit is implemented and we still have insufficient tps capacity, the Big-blockers will have a much more believable, perhaps compelling, case for an increase to 2MB.

Blocking segwit and LN out of spite while implausibly moaning about how Bitcoin needs to be unlimited is the epitome of cynical hypocrisy.

I admire the cynicism, but abhor the hypocrisy.  Looking forward to the Unlimite_ #REKT thread...
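
Taking the post's own figures at face value, the consolidation arithmetic is easy to check; the ~12 tps base comes from the post above, while the per-channel numbers below are assumptions picked purely for illustration.

Code:
# Back-of-envelope TPS amplification from payment channels.
base_tps = 12                     # on-chain tx/sec, the figure claimed above
onchain_txs_per_channel = 2       # one funding tx + one settlement tx
updates_per_channel = 10_000      # off-chain payments per channel (assumed)

channels_per_sec = base_tps / onchain_txs_per_channel
effective_tps = channels_per_sec * updates_per_channel
print(f"{effective_tps:,.0f} effective payments/sec")   # 60,000 here

The "basically infinity" is this ratio growing with updates_per_channel, which is bounded only by how long participants keep their channels open.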
full member
Activity: 322
Merit: 151
They're tactical
February 19, 2017, 05:44:12 AM
Well, as far as I understand, LN channels can somehow be shut down, via certain glitches or otherwise, and in that case, what would remain of the operations made in that channel?

And what validity would this journaling have with regard to the on-chain state if there are differences at the end?

The problem with caching is not about an HD crash; if the controller is stopped before the cache is actually written, the data is still lost even if the hard drive works fine.


To push the cache analogy and show certain caveats: with SMP systems and CPU caches, there are cases where memory is shared with other chips via DMA or through virtual pagination, and in systems with high concurrency on the data the CPU cache can become out of date. Even with SSE2 there are things to help deal with this, but as far as I know most OSes disable caching on certain shared memory because of all the issues with caches, instruction reordering, etc., when access to up-to-date data in a concurrent system matters more than fast access to potentially out-of-date data.

If LN is to be seen as a cache system, it doesn't look like they are taking all the precautions for it to be really safe.

Caches are easily made safe when all write accesses to the data go through the same interface doing the caching, which is not the case with Bitcoin & LN.

With a hard drive it works because all accesses go through the same controller doing the caching.

But anyway, as LN locks the bitcoins on the main chain, it's not even really a true cache system: the principle of a cache is to speed up multiple accesses to the same data, and since the bitcoins are locked, the channel has exclusive access to them, so it's not really to be seen as a true system of blockchain caching.

There are multiple implementations of the payment channel scheme.  Of course they have different trade-offs.

You remind me of young GMAX proving Bitcoin was impossible, and I hope you are similarly happy when shown to be wrong about LN.

I thought you might be interested in this tweet; to me it seems there is an interesting congruence afoot.  Convergent morphology perhaps... or simply Data Structures 101?   Cheesy


To paraphrase:
"What sucks about directly buying Frappuchinos with Bitcoin?"
"The biggest issue I think is random small tx are literally the work that kills Blockchain the most."

The message seems to be that without write caches we don't get to have nice things!  Tongue


In terms of mechanical engineering, write caches function as a shim which reduces friction and the resulting heat/damage.

https://en.wikipedia.org/wiki/Shim_(spacer)

Anyway, saying LN is a caching solution is like saying it would be normal for a browser to lock down a picture for the whole internet because it needs to use it locally.

LN is missing many things needed for it to be called a true cache system.


Memory management with SMP & the PCI bus is a very complex thing; architectures evolved with more instructions and better instruction pipelining, and more functions came with C11 & OpenMP, but handling caches with SMP/PCI/south bus is far from trivial.

The issues can be seen more clearly on the ARM architecture, because the CPU architecture is much simpler and doesn't have built-in handling of these issues of caching and concurrent access with the south bus, memory bridge, memory space conversion, etc.


https://en.m.wikipedia.org/wiki/Conventional_PCI#PCI_bus_bridges

Posted writes
Generally, when a bus bridge sees a transaction on one bus that must be forwarded to the other, the original transaction must wait until the forwarded transaction completes before a result is ready. One notable exception occurs in the case of memory writes. Here, the bridge may record the write data internally (if it has room) and signal completion of the write before the forwarded write has completed. Or, indeed, before it has begun. Such "sent but not yet arrived" writes are referred to as "posted writes", by analogy with a postal mail message. Although they offer great opportunity for performance gains, the rules governing what is permissible are somewhat intricate.


Caching helps when it takes temporality into account as multiple accesses to the same data are made; it can help skip some likely useless long writes, but it's still quite probabilistic.


LN would be a cache if it didn't lock the resources on the main chain, and if it were able to detect, with a good success ratio, when the BTC are only going to be used locally: keep the modifications off chain in the "local cache" when they are most likely not to be used outside the local channel, and only write them to the main chain when the state is likely to be shared outside the local cache held by a limited number of participants. It should always keep the local cache updated from the main chain when there is a modification in the on-chain state. And any time the state of the chain is accessed from outside the local cache, the state should be written back to the main network as fast as possible, or the request could not be processed before the state is fully synchronized. The efficiency of a cache system depends on how successful it is at guessing when the data is going to be used again in the local cache before a modification to it happens outside the cache; otherwise there is zero gain.






https://en.m.wikipedia.org/wiki/Temporal_database

https://en.m.wikipedia.org/wiki/Locality_of_reference

Locality is merely one type of predictable behavior that occurs in computer systems. Systems that exhibit strong locality of reference are great candidates for performance optimization through the use of techniques such as caching, prefetching for memory, and advanced branch predictors at the pipelining stage of a processor core.

Temporal locality
If at one point a particular memory location is referenced, then it is likely that the same location will be referenced again in the near future. There is a temporal proximity between the adjacent references to the same memory location. In this case it is common to make efforts to store a copy of the referenced data in special memory storage, which can be accessed faster. Temporal locality is a special case of spatial locality, namely when the prospective location is identical to the present location.


It's this kind of problem that's involved in efficient caching: a prospective approach to how likely the data is to change within a certain time frame, which allows for faster cached access during that time frame.


With transactions it means you need to predict whether the state of the on-chain input is potentially going to be accessed within the time frame when it's used in the local cache. If it's only going to be accessed in the local cache for a certain period of time, it's worth keeping it in the cache; if the data is shared with other processes and they need to read or modify it during that time frame, the cache is useless and the data needs to be updated from/to the main chain for each operation.
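
As a concrete version of the trade-off described above, here is a minimal write-back cache sketch in Python. The backing dict stands in for the chain and keys stand in for outputs; this is purely illustrative, not LN code, but it shows both the gain from temporal locality and the caveat that buffered writes are lost if the cache never flushes.

Code:
# Minimal write-back cache: repeated writes to a hot key are coalesced, and
# only the final value is committed to the backing store on flush().
class WriteBackCache:
    def __init__(self, backing: dict):
        self.backing = backing       # slow, shared store ("main chain")
        self.dirty = {}              # fast, exclusive store ("channel")
        self.writes_saved = 0

    def write(self, key, value):
        if key in self.dirty:
            self.writes_saved += 1   # coalesced: old value never hits the store
        self.dirty[key] = value

    def read(self, key):
        # Serve locally if we hold the newest value, else fall back.
        return self.dirty.get(key, self.backing.get(key))

    def flush(self):
        # Skip this step and every buffered write is simply lost -- the
        # "controller stopped before the cache is written" caveat above.
        self.backing.update(self.dirty)
        self.dirty.clear()

chain = {"utxo:ab12": 10}
cache = WriteBackCache(chain)
for v in (9, 7, 4, 8):               # high temporal locality: one hot key
    cache.write("utxo:ab12", v)
print(cache.read("utxo:ab12"), chain["utxo:ab12"])   # 8 10 (store is stale)
cache.flush()
print(chain["utxo:ab12"], "writes saved:", cache.writes_saved)   # 8 writes saved: 3

The stale read of 10 seen by anyone going straight to the backing store is exactly the "out of date" hazard described above when other processes bypass the cache.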
legendary
Activity: 1092
Merit: 1000
February 19, 2017, 02:25:00 AM
@iCEBREAKER

You know what is funny: you actually think you are relevant.

You're not, but it is funny you think so.   Cheesy

If a side product can't keep up with innovation, that is on the devs of the side product.

Electrum can update or get left behind; it is that simple.
You trying to say they are not able to get their code working is really their problem,
not BTC Core and not BU, only Electrum.

There were a lot of side products that broke with every Microsoft OS release.
Vendors either update & adapt or die; their choice.

Electrum is no different.
Thinking you are going to hold back an entire network because 1 vendor cannot make their product work with it is beyond stupid.


 Cool
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
February 19, 2017, 02:00:21 AM
Well, as far as I understand, LN channels can somehow be shut down, via certain glitches or otherwise, and in that case, what would remain of the operations made in that channel?

And what validity would this journaling have with regard to the on-chain state if there are differences at the end?

The problem with caching is not about an HD crash; if the controller is stopped before the cache is actually written, the data is still lost even if the hard drive works fine.


To push the cache analogy and show certain caveats: with SMP systems and CPU caches, there are cases where memory is shared with other chips via DMA or through virtual pagination, and in systems with high concurrency on the data the CPU cache can become out of date. Even with SSE2 there are things to help deal with this, but as far as I know most OSes disable caching on certain shared memory because of all the issues with caches, instruction reordering, etc., when access to up-to-date data in a concurrent system matters more than fast access to potentially out-of-date data.

If LN is to be seen as a cache system, it doesn't look like they are taking all the precautions for it to be really safe.

Caches are easily made safe when all write accesses to the data go through the same interface doing the caching, which is not the case with Bitcoin & LN.

With a hard drive it works because all accesses go through the same controller doing the caching.

But anyway, as LN locks the bitcoins on the main chain, it's not even really a true cache system: the principle of a cache is to speed up multiple accesses to the same data, and since the bitcoins are locked, the channel has exclusive access to them, so it's not really to be seen as a true system of blockchain caching.

There are multiple implementations of the payment channel scheme.  Of course they have different trade-offs.

You remind me of young GMAX proving Bitcoin was impossible, and I hope you are similarly happy when shown to be wrong about LN.

I thought you might be interested in this tweet; to me it seems there is an interesting congruence afoot.  Convergent morphology perhaps... or simply Data Structures 101?   Cheesy


To paraphrase:
"What sucks about directly buying Frappuchinos with Bitcoin?"
"The biggest issue I think is random small tx are literally the work that kills Blockchain the most."

The message seems to be that without write caches we don't get to have nice things!  Tongue


In terms of mechanical engineering, write caches function as a shim which reduces friction and the resulting heat/damage.

https://en.wikipedia.org/wiki/Shim_(spacer)
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
February 19, 2017, 12:58:39 AM
Electrum devs cannot change the fact Bitcoin uses ECDSA sigs.  As currently implemented, ECDSA sig validation scales quadratically with tx size.

Nobody can fix that until we have segwit, and we may then change to Schnorr sigs.


There is always more than one way to make something work; saying there is only 1 way shows a lack of imagination.

No offense to Lauda, but there is always more than 1 way to skin a cat.  Wink

Until the Electrum devs actually come out and say they can't make it work, your assumptions are irrelevant.

There you go again, avoiding the concrete specifics of the O(n^2) attack and retreating into lazy hand-waving generalizations about "always" and "something" and useless slogans about cats.

It must really suck going through life encumbered by such sloppy, random thought processes.  You must greatly resent those of us with the ability to function at much higher levels of focused attention to detail.

There is no known way to change the current ECDSA signature-hashing implementation such that it avoids quadratic scaling.

That algorithm is not trivially parallelizable.  We can't get there from here.  If we want lightning-fast multithreaded validation, that requires segwit+Schnorr.

Gavin already explained why.

Here, I'll repeat it one more time in the hope that this spoonful of nutritious information somehow manages to make its way into your mental metabolism.

Here comes the plane!  *Vrrrrooooom!*  Open wide!  Nummy nummy knowledge for sweet widdle baby kiklo!

The attack is caused by the way the signature hash is computed-- all n bytes of the transaction must be hashed for every signature operation.

Perhaps if you took the time to read his post and understand all of it, I wouldn't have to sit here spoon-feeding you premasticated facts and wiping most of them off your chin/bib/high chair when you spit them out rather than successfully starting the digestion process.
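
Gavin's point is easy to demonstrate. Under the legacy sighash rules, each input's signature check rehashes roughly the whole transaction, so total bytes hashed grow quadratically with the input count. A toy cost model in Python (fake fixed-size inputs, so this captures the shape of the cost, not real serialization):

Code:
# Legacy sighash cost: n inputs, each hashing ~the whole n-input transaction,
# gives O(n^2) total bytes hashed. Input size of 180 bytes is a rough guess.
import hashlib

def bytes_hashed_legacy(n_inputs, input_size=180):
    tx_size = n_inputs * input_size          # whole serialized tx, roughly
    total = 0
    for _ in range(n_inputs):                # one sighash per input signature
        hashlib.sha256(b"\x00" * tx_size).digest()
        total += tx_size
    return total

for n in (100, 200, 400):
    print(n, "inputs ->", bytes_hashed_legacy(n), "bytes hashed")
# Doubling the inputs quadruples the work. Segwit's BIP143 sighash reuses
# precomputed midstate hashes, so the total grows linearly instead.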
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
February 19, 2017, 12:15:27 AM
segwit will never be activated; it just can't get past that furball you call a brain

You are new here so it's understandable you are not aware that I've been cheering for stalemate and gridlock since 2015 (when you were restricted to the Introduce Yourself noob quarantine thread).

'member this?




'member this?

"Furioser and furioser!" said Alice.

Fear does funny things to people.

Wasn't your precious XT fork supposed to happen today?

Or was that yesterday?

Either way, for all the Sturm und Drang last year, the deadline turned out to be a titanic non-event.

Exactly as the small block militia told you it would be.

The block size is still 1MB, and those in favor of changing it cannot agree on when to raise it, nor by how much, nor by what formula future increases should be governed.

You are still soaking in glorious gridlock despite all the sound and fury, and I am loving every second of your agitation.
  Smiley


'member this?

Love watching the Blockstream idiot hijaakers gloat.  

It's beautifully ironic.  Today they gloat at Mike's failure to hijaak bitcoin.

Tomorrow we laugh at all of them for their failed hijaak attempt of Core.

www.BitcoinClassic.com #Winner.
* Classic isn't XT.  It has actual consensus, support, and is reasonable.  And let's remember this... as much as I think Mike Hearn is a traitorous ass, I totally respect him for being the first to stand up and throw a punch at these hijaaking, whiny, lying, manipulating, censorship-wielding, bitcoin-crippling losers at Blockstream/Core.

Oh, and don't count Mike out yet.  He is, in my guess, a huge threat to Bitcoin.  I predict that when chaos around the Classic fork is going strong, the R3CEV/Hyperledger/bank team will strike with a Fiat Coin announcement.

What makes you think Blockstream is going to pull a Hearn (i.e., write a self-indulgent Goodbye Cruel World + Bitcoin obituary Medium post, rage-quit, and rage-dump) tomorrow?

All you Gavinistas have accomplished with 6 months of whining and threats is providing the rest of us with amusement.

You haven't moved the needle towards Gavinblocks at all, not one iota.

We warned you the outcome of your contentious vanity fork and governance coup attempts would be gridlock, which effectively preserves the 1MB status quo.

In response, you guys amped up the drama, using ridiculous bullet words like censorship/dictatorship/hijack/crippling/strangling to goad the Reddit mob into lighting their torches and stamping about chanting "rabble rabble rabble!"

How's that working for you?

Are you starting to understand why you can't win this fight, or do I need to make a new #REKT meme?   Smiley
legendary
Activity: 1092
Merit: 1000
February 18, 2017, 11:40:18 PM
The O(n^2) sigop attack cannot be mitigated with Electrum X or by simply buying a faster Xeon server.

As Gavin said, we need to move to Schnorr sigs to get (sub)linear sig validation time scaling.

And AFAIK moving to Schnorr sigs at minimum requires implementing Core's segwit soft fork.

Informed Bitcoiners like Adam Back and the rest of Core plan to do segwit first, because it pays off technical debt and thus strengthens the foundation necessary to support increased block sizes later.


So you are saying their developer is not competent enough to find a solution.
I think if the developer of Electrum were actually worried about it, he would have mentioned it when asked point blank about the blocksize issue.



Electrum devs cannot change the fact Bitcoin uses ECDSA sigs.  As currently implemented, ECDSA sig validation scales quadratically with tx size.

Nobody can fix that until we have segwit, and we may then change to Schnorr sigs.


There is always more than one way to make something work; saying there is only 1 way shows a lack of imagination.

No offense to Lauda, but there is always more than 1 way to skin a cat.  Wink

Until the Electrum devs actually come out and say they can't make it work, your assumptions are irrelevant.

 Cool
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
February 18, 2017, 08:11:48 PM
I guess the bitcoind devs have never worked with multithreading then*? Such a pity. And such cutting-edge 1990s technology. If inordinately large signature transactions ever become A Thing, miners employing bitcoind will be bankrupted by smarter miners.

C'est la guerre. To the wiser go the spoils.

*Yes, unnecessarily provocative. But it certainly points out just one more instance where devs are working on the wrong things.
That is a bit too harsh. While I'm continually frustrated by some of the limitations of the bitcoind client and the lack of emphasis on things I am concerned about (since mining with it is my personal interest), it does not do justice to how such code realistically evolves, nor to the difficulty of keeping massive rewrites safely in check while evolving the client in multiple directions. I'm not much of a C++ coder myself, so I can't do much to help, but at least I understand what's involved in maintaining such a massive project. It is impossible to know which issues will become problematic in the future when first starting a project; they only become apparent as it evolves, and they need to be tackled in a methodical manner. Emphasis has only been placed on speed of block processing, propagation, and work-template generation in recent times; the improvement is already substantial but has a very long way to go. If the client were written from the ground up now, with emphasis in those areas and knowing what we now know, it would no doubt look very different. Some things, though, are protocol limitations and not the client's. Things like the quadratic scaling issue are in the current Bitcoin protocol design, and no amount of client rewriting without a protocol change will get around that. I'm not arguing for one only; both need to be addressed.
legendary
Activity: 3024
Merit: 1640
lose: unfind ... loose: untight
February 18, 2017, 07:29:20 PM
Riddle me this: built off a parent of the same block height, a miner is presented -- at roughly the same time:
1) an aberrant block that takes an inordinate amount of time (e.g.,  10 minutes) to verify but is otherwise valid;
2) a 'normal' valid block that does not take an inordinate amount of time to verify; and
3) an invalid block.
By the way, as far as I can understand the bitcoind code as I read it (and no doubt I will be corrected if I'm wrong, which is fine), this is an attack vector for which "choosing between them" is the least of your problems, because bitcoind cannot process both blocks concurrently and then notice that (2) has finished long before it has finished processing (1). This means that if block (1) arrives even 1µs before block (2), bitcoind will sit there processing it until it has finished before it can process (2). While this is purely a limitation of the software as it currently stands, in that it cannot process multiple blocks concurrently in a multithreaded fashion due to the coarse-grained locking in the software, it doesn't change the fact that there is no way to deal with this at present. It would require a drastic rewrite of the existing code to make the locking fine-grained enough to do this, or a new piece of software written from the ground up; both of which carry their own risks.

I guess the bitcoind devs have never worked with multithreading then*? Such a pity. And such cutting-edge 1990s technology. If inordinately large signature transactions ever become A Thing, miners employing bitcoind will be bankrupted by smarter miners.

C'est la guerre. To the wiser go the spoils.

*Yes, unnecessarily provocative. But it certainly points out just one more instance where devs are working on the wrong things.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
February 18, 2017, 06:58:23 PM
Riddle me this: built off a parent of the same block height, a miner is presented -- at roughly the same time:
1) an aberrant block that takes an inordinate amount of time (e.g.,  10 minutes) to verify but is otherwise valid;
2) a 'normal' valid block that does not take an inordinate amount of time to verify; and
By the way, as far as I can understand the bitcoind code as I read it (and no doubt I will be corrected if I'm wrong, which is fine), this is an attack vector for which "choosing between them" is the least of your problems, because bitcoind cannot process both blocks concurrently and then notice that (2) has finished long before it has finished processing (1). This means that if block (1) arrives even 1µs before block (2), bitcoind will sit there processing it until it has finished before it can process (2). While this is purely a limitation of the software as it currently stands, in that it cannot process multiple blocks concurrently in a multithreaded fashion due to the coarse-grained locking in the software, it doesn't change the fact that there is no way to deal with this at present. It would require a drastic rewrite of the existing code to make the locking fine-grained enough to do this, or a new piece of software written from the ground up; both of which carry their own risks.