
Topic: So who the hell is still supporting BU? - page 6. (Read 29824 times)

legendary
Activity: 1204
Merit: 1028
February 21, 2017, 11:03:15 AM
Raise the blocksize = automatically spammed with crap and blocks are full again = idiots wanting another blocksize increase. They will never stop crying.

I'm up for a conservative 2MB increase AFTER segwit is implemented, as recommended by 100% of experts. No segwit = no blocksize increase; blame the Chinese miners.
full member
Activity: 322
Merit: 151
They're tactical
February 21, 2017, 10:40:11 AM
Optimisation can be seen in different ways.

Here the concern is resource availability. The goal is to keep resources as available as possible, to keep processing capacity at the maximum possible, which includes not wasting it on useless computation.

If the solution for maximising capacity in the case of a useless, slow-to-validate block is to monopolise the resources checking it, it's not really a big win. OK, that removes some of the processing from the main thread, but it doesn't mean it's free either, or that those resources couldn't be used for something more useful.

With the nature of a blockchain there will always be some time wasted invalidating slow stuff, but if the goal is to optimise this, you need to find a way to avoid that processing with something cheaper than actually validating it. Otherwise it's just pushing the problem away. If there were always some idle core with nothing better to do than check useless blocks, I'd say OK, it's a win; otherwise it's not solving the problem.
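A minimal sketch of what such a cheap pre-check could look like: reject a block on inexpensive structural limits (total size, rough sigop count) before paying for full script validation. The types, helper names and limits below are invented for illustration and are not bitcoind's actual code or consensus values.

Code:
#include <cstddef>
#include <vector>

// Illustrative stand-ins; not real bitcoind types.
struct Tx    { std::size_t size_bytes; std::size_t sigop_count; };
struct Block { std::vector<Tx> txs; };

// Hypothetical limits for the sketch; real consensus limits differ.
constexpr std::size_t MAX_BLOCK_BYTES  = 1000000;
constexpr std::size_t MAX_BLOCK_SIGOPS = 20000;

// Cheap O(n) pre-checks: if these fail, the expensive script validation is skipped.
bool PassesCheapChecks(const Block& block)
{
    std::size_t total_bytes = 0, total_sigops = 0;
    for (const Tx& tx : block.txs) {
        total_bytes  += tx.size_bytes;
        total_sigops += tx.sigop_count;
        if (total_bytes > MAX_BLOCK_BYTES || total_sigops > MAX_BLOCK_SIGOPS)
            return false;   // reject early, before running any scripts
    }
    return true;
}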

With the threading approach you can at least cap the waste at the time of the longest block, which may be a win in some cases.
legendary
Activity: 4410
Merit: 4766
February 21, 2017, 10:32:38 AM
The thing is, I'm also thinking about the general blockchain problem, because I'm trying to generalise blockchain mechanics into a generic engine, and to frame the issue independently of a particular blockchain topology or other blockchain/network-specific details.

Maybe for the current bitcoin configuration that can work, but if you take PoS coins for example, where blocks can be produced more easily, or other configurations, it can be more of a problem.

Even if there are multiple cores, there still isn't an infinity of cores. If there is a finite number of such blocks to be processed, a number below the number of cores, that can be OK.

Otherwise it's just pushing the problem away while using more of a finite resource.

But there is not an infinite number of problem blocks, so you don't need to worry about finite resources.

It's like the others who don't want bitcoin to naturally grow to 2MB-4MB blocks because they fear "gigabytes by midnight" (tonight). The reality is that REAL-WORLD results won't be gigabytes by midnight.

It's like not letting a toddler learn to walk because you worry that one day, when the kid grows up to be an adult, he will have an accident crossing the road.

You're saying to prevent optimisation and cause issues out of worry about something that's not a problem today and won't be a problem.
You seem to be creating a doomsday scenario that has no basis in the reality of actually occurring.
full member
Activity: 322
Merit: 151
They're tactical
February 21, 2017, 10:03:40 AM
The thing is, I'm also thinking about the general blockchain problem, because I'm trying to generalise blockchain mechanics into a generic engine, and to frame the issue independently of a particular blockchain topology or other blockchain/network-specific details.

Maybe for the current bitcoin configuration that can work, but if you take PoS coins for example, where blocks can be produced more easily, or other configurations, it can be more of a problem.

Even if there are multiple cores, there still isn't an infinity of cores. If there is a finite number of such blocks to be processed, a number below the number of cores, that can be OK.

Otherwise it's just pushing the problem away while using more of a finite resource.
legendary
Activity: 4410
Merit: 4766
February 21, 2017, 09:56:14 AM

No filtering required. When a potentially solved block arrives, spawn a thread to start validating it. When another potentially solved block arrives, spawn another thread to start validating it. First one to validate is the one you build your candidate for the next round atop.

That could improve things a bit if there are only a few blocks like this. But even if you can work out the issue of shared access & dependencies, what if there are 100 such blocks? Or 1000?

It can maybe improve resilience if there are only a few of them, but it's just pushing the problem one thread away, and there isn't an infinity of processing power on a computer, even with threads.

Hence the real solution is more about how to avoid wasting processing on those blocks at all, rather than attempting to process them for as long as no better block has been validated and spreading that wasted time over several threads just to buffer the excessive processing time a bit.

100 blocks?
1000 blocks?

Um, there are only 20-ish pools, and the chance of them all having a potentially solved block within the same few seconds is small.
At most, devs and pools have to worry about a couple of potential blocks competing to be added at the same blockheight, so don't throw fake doomsdays into the narrative.

If there are only a couple of them possibly being processed at the same time, that can help.

But still, the goal is to avoid wasting processing time on them, not to waste more of it across multiple threads.

And this issue can still come up with a single tx, no? It doesn't necessarily arise only with solved blocks.

And in that case there can still be many degenerate txs full of sigops, not coming only from the pools.


I'm not seeing the big devastating problem you're saying devs should avoid. These days most computers have multiple cores (including the Raspberry Pi), so if a full node implementation has a '64-bit' release, you would automatically think the devs have already programmed it to shift processing across the different cores rather than queuing everything up on a single core.
So the problem and solution should have been handled by just having a 64-bit version of a bitcoin full node.
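As an aside, a 64-bit build does not by itself spread work across cores; the code has to split the work explicitly (Bitcoin Core, for instance, already farms script checks out to a small pool of worker threads). A simplified sketch of that idea; InputCheck and VerifyInput are placeholders for illustration, not real bitcoind functions.

Code:
#include <algorithm>
#include <cstddef>
#include <future>
#include <thread>
#include <vector>

// Hypothetical per-input verification job for the sketch.
struct InputCheck { /* tx input, script, flags ... */ };
bool VerifyInput(const InputCheck&) { return true; }   // placeholder

// Split a block's input checks across however many cores the machine has.
bool VerifyInputsParallel(const std::vector<InputCheck>& checks)
{
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (checks.size() + workers - 1) / workers;

    std::vector<std::future<bool>> results;
    for (unsigned w = 0; w < workers; ++w) {
        const std::size_t begin = w * chunk;
        const std::size_t end   = std::min(checks.size(), begin + chunk);
        if (begin >= end) break;
        results.push_back(std::async(std::launch::async, [&checks, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                if (!VerifyInput(checks[i])) return false;   // any bad input fails the block
            return true;
        }));
    }
    bool ok = true;
    for (auto& r : results) ok = r.get() && ok;   // join all workers
    return ok;
}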
full member
Activity: 322
Merit: 151
They're tactical
February 21, 2017, 09:37:34 AM

No filtering required. When a potentially solved block arrives, spawn a thread to start validating it. When another potentially solved block arrives, spawn another thread to start validating it. First one to validate is the one you build your candidate for the next round atop.

That could improve things a bit if there are only a few blocks like this. But even if you can work out the issue of shared access & dependencies, what if there are 100 such blocks? Or 1000?

It can maybe improve resilience if there are only a few of them, but it's just pushing the problem one thread away, and there isn't an infinity of processing power on a computer, even with threads.

Hence the real solution is more about how to avoid wasting processing on those blocks at all, rather than attempting to process them for as long as no better block has been validated and spreading that wasted time over several threads just to buffer the excessive processing time a bit.

100 blocks?
1000 blocks?

Um, there are only 20-ish pools, and the chance of them all having a potentially solved block within the same few seconds is small.
At most, devs and pools have to worry about a couple of potential blocks competing to be added at the same blockheight, so don't throw fake doomsdays into the narrative.

If there are only a couple of them possibly being processed at the same time, that can help.

But still, the goal is to avoid wasting processing time on them, not to waste more of it across multiple threads.

And this issue can still come up with a single tx, no? It doesn't necessarily arise only with solved blocks.

And in that case there can still be many degenerate txs full of sigops, not coming only from the pools.

legendary
Activity: 4410
Merit: 4766
February 21, 2017, 09:18:33 AM

No filtering required. When a potentially solved block arrives, spawn a thread to start validating it. When another potentially solved block arrives, spawn another thread to start validating it. First one to validate is the one you build your candidate for the next round atop.

That could improve things a bit if there are only a few blocks like this. But even if you can work out the issue of shared access & dependencies, what if there are 100 such blocks? Or 1000?

It can maybe improve resilience if there are only a few of them, but it's just pushing the problem one thread away, and there isn't an infinity of processing power on a computer, even with threads.

Hence the real solution is more about how to avoid wasting processing on those blocks at all, rather than attempting to process them for as long as no better block has been validated and spreading that wasted time over several threads just to buffer the excessive processing time a bit.

100 blocks?
1000 blocks?

Um, there are only 20-ish pools, and the chance of them all having a potentially solved block within the same few seconds is small.
At most, devs and pools have to worry about a couple of potential blocks competing to be added at the same blockheight, so don't throw fake doomsdays into the narrative.
full member
Activity: 322
Merit: 151
They're tactical
February 21, 2017, 06:44:32 AM

No filtering required. When a potentially solved block arrives, spawn a thread to start validating it. When another potentially solved block arrives, spawn another thread to start validating it. First one to validate is the one you build your candidate for the next round atop.

That could improve things a bit if there are only a few blocks like this. But even if you can work out the issue of shared access & dependencies, what if there are 100 such blocks? Or 1000?

It can maybe improve resilience if there are only a few of them, but it's just pushing the problem one thread away, and there isn't an infinity of processing power on a computer, even with threads.

Hence the real solution is more about how to avoid wasting processing on those blocks at all, rather than attempting to process them for as long as no better block has been validated and spreading that wasted time over several threads just to buffer the excessive processing time a bit.
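To make the idea being debated here concrete: a rough sketch of "spawn a validation per candidate block, first one to validate wins", with a cap on how many validations run at once to address the finite-cores objection. CandidateBlock, ValidateBlock and the cap are invented placeholders, not code from any actual client.

Code:
#include <atomic>
#include <future>
#include <mutex>
#include <optional>
#include <vector>

struct CandidateBlock { int id; /* header, txs ... */ };

// Placeholder for the expensive full validation of one block.
bool ValidateBlock(const CandidateBlock&) { return true; }

// Validate competing candidates concurrently; return the first that validates.
// max_in_flight caps resource use; extra candidates would simply queue.
std::optional<CandidateBlock> RaceValidate(const std::vector<CandidateBlock>& candidates,
                                           unsigned max_in_flight = 4)
{
    std::atomic<bool> done{false};
    std::optional<CandidateBlock> winner;
    std::mutex winner_mtx;

    std::vector<std::future<void>> workers;
    for (const CandidateBlock& c : candidates) {
        if (workers.size() >= max_in_flight) break;   // don't spawn unbounded threads
        workers.push_back(std::async(std::launch::async, [&, c] {
            if (done.load()) return;                  // someone already won, skip the work
            if (ValidateBlock(c) && !done.exchange(true)) {
                std::lock_guard<std::mutex> lock(winner_mtx);
                winner = c;                           // first valid block wins the race
            }
        }));
    }
    for (auto& w : workers) w.get();                  // wait for all spawned validations
    return winner;
}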
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
February 21, 2017, 02:41:17 AM
Gotcha. How about if spammy looking transactions were filtered by each node when they are first broadcast? I suppose ultimately we'd be working toward prioritizing transactions based on their merits... I also see that it's risky to start judging transactions and dropping them, but perhaps there should be an obligation for the transaction creator to be legitimate and respectful of the limited blockspace? 
That would have zero effect. The transactions are INCLUDED IN THE BLOCK so if anything, ignoring the transactions in the first place means the node has to request them from the other node that sent it the block.

Now if you're talking about local block generation for mining, bitcoind already does extensive ordering of transactions, putting spammy transactions at ultra-low priority and likely not even storing them in the mempool, so there's little room to move there. Filtering can always be improved, but the problem isn't local block generation but block propagation of an intensely slow-to-validate block.
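A generic sketch of the ordering idea: when assembling a block template locally, sort candidates by fee per byte so low-fee-rate "spammy" bloat is the first thing left out when space runs tight. This is only illustrative; bitcoind's real selection also accounts for ancestor packages and other policy.

Code:
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Minimal stand-in for a mempool entry.
struct MempoolTx {
    std::uint64_t fee_satoshis;
    std::size_t   size_bytes;
    double FeeRate() const { return double(fee_satoshis) / size_bytes; }
};

// Fill a block template greedily by descending fee rate until space runs out.
std::vector<MempoolTx> SelectForBlock(std::vector<MempoolTx> pool, std::size_t max_block_bytes)
{
    std::sort(pool.begin(), pool.end(),
              [](const MempoolTx& a, const MempoolTx& b) { return a.FeeRate() > b.FeeRate(); });

    std::vector<MempoolTx> selected;
    std::size_t used = 0;
    for (const MempoolTx& tx : pool) {
        if (used + tx.size_bytes > max_block_bytes) continue;   // low fee-rate bloat gets left out
        selected.push_back(tx);
        used += tx.size_bytes;
    }
    return selected;
}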
sr. member
Activity: 280
Merit: 250
February 21, 2017, 12:47:43 AM
Why does Bitcoin Unlimited require a hard fork? Is it possible to implement it without a hard fork? What are the advantages of using BU instead?
legendary
Activity: 4410
Merit: 4766
February 21, 2017, 12:45:51 AM
Gotcha. How about if spammy looking transactions were filtered by each node when they are first broadcast? I suppose ultimately we'd be working toward prioritizing transactions based on their merits... I also see that it's risky to start judging transactions and dropping them, but perhaps there should be an obligation for the transaction creator to be legitimate and respectful of the limited blockspace?  

You don't need to filter out transactions. You just need a better 'priority' formula that works with the 2 main issues:
1. bloat of the tx vs the blockspace allowed
2. age of the coins vs how fast they are being respent

rather than the old one, which was just based on "the richer you are, the more priority you're rewarded" (which became useless).
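For reference, the old 'priority' formula being criticised here was roughly sum(input value × input age in blocks) / tx size, so it mostly rewarded spending large, long-held balances. Below is that formula plus a purely hypothetical variant along the lines described above, penalising bloat and rapid respends; the weighting is invented for illustration only.

Code:
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

struct TxInput {
    std::uint64_t value_satoshis;
    std::uint32_t age_blocks;     // confirmations of the output being spent
};

// Old-style priority: rewards large, long-held inputs relative to tx size.
double LegacyPriority(const std::vector<TxInput>& inputs, std::size_t tx_size_bytes)
{
    double sum = 0.0;
    for (const TxInput& in : inputs)
        sum += double(in.value_satoshis) * in.age_blocks;
    return sum / tx_size_bytes;
}

// Hypothetical alternative in the spirit of the post above: weight down
// oversized transactions (issue 1) and rapid respends (issue 2) instead of wealth.
double SketchPriority(const std::vector<TxInput>& inputs, std::size_t tx_size_bytes,
                      std::size_t typical_tx_bytes = 250)
{
    std::uint32_t youngest = UINT32_MAX;
    for (const TxInput& in : inputs)
        youngest = std::min(youngest, in.age_blocks);

    const double bloat_penalty = double(tx_size_bytes) / typical_tx_bytes;   // issue 1
    const double age_bonus     = 1.0 + youngest;                             // issue 2
    return age_bonus / bloat_penalty;
}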
hero member
Activity: 686
Merit: 504
February 21, 2017, 12:08:58 AM
By the way, as far as I can understand the bitcoind code as I read it (and no doubt I will be corrected if I'm wrong, which is fine), this is an attack vector where "choosing between them" is the least of your problems, because bitcoind cannot process both blocks concurrently and then decide that (2) has finished long before it has finished processing (1). This means that if a (1) block hits even 1us before (2), bitcoind will sit there processing it until it has finished before it can process (2). While this is purely a limitation of the software as it currently stands, in that it cannot process multiple blocks concurrently in a multithreaded fashion due to the coarse-grained locking in the software, it doesn't change the fact that there is no way to deal with this at present. It would require a drastic rewrite of the existing code to make the locking fine-grained enough to do this, or a new piece of software written from the ground up; both of which carry their own risks.
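The coarse-grained locking described above boils down to one big lock around block processing, so two blocks arriving close together are validated strictly one after the other no matter how many cores sit idle. A stripped-down illustration of the pattern (cs_main is the name of the real lock in bitcoind; everything else here is simplified):

Code:
#include <mutex>

struct Block { /* ... */ };
void ExpensiveValidation(const Block&) { /* scripts, sigs, UTXO updates */ }

std::mutex cs_main_like;   // one coarse lock around all block processing

// Called from the network thread for each arriving block.
void ProcessNewBlock(const Block& block)
{
    // Whichever block arrived first (even by 1us) holds the lock until it is
    // fully validated; the second block just waits, regardless of spare cores.
    std::lock_guard<std::mutex> lock(cs_main_like);
    ExpensiveValidation(block);
}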

Couldn't this issue be worked around by pre-filtering the traffic coming into the bitcoin daemon? "Bad" transaction detection would need to be at the protocol level. The simplest fix would be rejecting transactions over a certain size. Of course that's imperfect, but the filtering could become more fine-grained and accurate over time. It might even be possible to do this with firewall rules?
This is a block and the transactions it contains we're talking about, not simply a broadcast transaction, and we don't want to start filtering possibly valid blocks...

Gotcha. How about if spammy looking transactions were filtered by each node when they are first broadcast? I suppose ultimately we'd be working toward prioritizing transactions based on their merits... I also see that it's risky to start judging transactions and dropping them, but perhaps there should be an obligation for the transaction creator to be legitimate and respectful of the limited blockspace? 

I understand that multi-threading could open up a can of worms... It still seems like raising the blocksize would be quite easy, and is the logical way forward. (inb4 "OH MY GOD HARD FORK....") BTW, Synthetic Fork seems like a decent proposal from the Chinese.
full member
Activity: 322
Merit: 151
They're tactical
February 20, 2017, 10:22:11 PM
Everyone wants more TPS, but it doesn't look like segwit/LN is going to become the solution in the near future :p
legendary
Activity: 1092
Merit: 1000
February 20, 2017, 09:58:20 PM
This is, again, a limitation of the code rather than a protocol problem

I see we agree. On this small point, at any rate.

I wonder who may have an incentive to code up an alternative implementation? Maybe somebody who already has millions of dollars tied up in capital equipment - someone whose continued profitability requires making any optimization allowed by the protocol...

Bitcoin is moving to Schnorr sigs.  We need them not only to stop O(n^2) attacks, but also to enable tree signatures and fungibility, etc.

Why would anyone waste time trying to fix the obsolete Lamport scheme?

Perhaps the Unlimite_ crowd will decide to dig in their heels against Schnorr?

Oh wait, by blocking segwit they already have!  Grin
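Background on the O(n^2) remark in the quote: under the legacy signature-hashing rules, verifying each input's signature re-hashes (roughly) the whole transaction, and since the transaction itself grows with the number of inputs, total hashing work grows quadratically. Segwit's revised sighash caches the shared parts so the per-input work is roughly constant. A toy cost model, not real validation code:

Code:
#include <cstddef>

// Stand-in: pretend hashing 'bytes' bytes costs 'bytes' units of work.
std::size_t HashCost(std::size_t bytes) { return bytes; }

// Legacy sighash: every input re-hashes (roughly) the whole transaction,
// and tx_size itself grows with n_inputs, so total work is ~O(n^2).
std::size_t LegacySigHashCost(std::size_t n_inputs, std::size_t tx_size_bytes)
{
    std::size_t cost = 0;
    for (std::size_t i = 0; i < n_inputs; ++i)
        cost += HashCost(tx_size_bytes);
    return cost;
}

// Segwit-style sighash: shared data is hashed once up front, then each input
// only hashes a small constant-sized piece, so total work is ~O(n).
std::size_t CachedSigHashCost(std::size_t n_inputs, std::size_t per_input_bytes)
{
    std::size_t cost = HashCost(per_input_bytes * n_inputs);   // precomputed shared hashes
    for (std::size_t i = 0; i < n_inputs; ++i)
        cost += HashCost(per_input_bytes);                     // constant work per input
    return cost;
}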

Point of Clarification

The miners are the ones blocking segwit installation, you know, the ones you depend on to make new blocks, include transactions, and keep BTC secure.
Those are the guys blocking segwit, the ones your entire BTC network depends on.
Maybe they know more than you do, or they just don't care what you think.   Cheesy


FYI:
Combining BU & 8MB & not voting, over 70% are refusing to install segwit.
In a normal race, that is a landslide.
What is strange is that the pro-segwitters are too stupid to grasp that NO ONE WANTS SEGWIT or LN.  Tongue

Larger block sizes and keeping the transactions ONCHAIN is what everyone wants.
sr. member
Activity: 243
Merit: 250
February 20, 2017, 08:13:55 PM
Roger Ver, notorious foe of Bitcoin, is again at the center of the war, trying to sue and shut down one of the Bitcoin exchanges.  Sad  Beware of the scoundrel and his hard-fork Unlimited if you're a good bitcoiner...
legendary
Activity: 2576
Merit: 1087
February 20, 2017, 07:29:00 PM

'member this?

"Furioser and furioser!" said Alice.

Fear does funny things to people.

Wasn't your precious XT fork supposed to happen today?

Or was that yesterday?

Either way, for all the sturm und drang last year the deadline turned out to be a titanic non-event.

Exactly as the small block militia told you it would be.

The block size is still 1MB, and those in favor of changing it cannot agree on when to raise it, nor by how much, nor by what formula future increases should be governed.

You are still soaking in glorious gridlock despite all the sound and fury, and I am loving every second of your agitation.
  Smiley


I 'member.

Keep at it old boy, you're hilarious.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
February 20, 2017, 06:48:09 PM
Just as a question ... do you have an estimate of the % of solved blocks that are attributable to your SW?
On the client software side with cgminer it would be over 95% with what hardware is currently out there and knowing what is embedded on most of it. At the pool server end with ckpool it's probably less than 5% at present.
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
February 20, 2017, 06:03:32 PM
I wonder who may have an incentive to code up an alternative implementation? Maybe somebody who already has millions of dollars tied up in capital equipment - someone whose continued profitability requires making any optimization allowed by the protocol...
I'd be happy to write a new client with emphasis purely on performance and scalability from scratch... if someone wanted to throw large sums of money at me to do so and keep sending me more indefinitely to maintain and update it.

Well, that seems a perfectly reasonable stance.

Just as a question ... do you have an estimate of the % of solved blocks that are attributable to your SW?
legendary
Activity: 4410
Merit: 4766
February 20, 2017, 06:02:20 PM
I wonder who may have an incentive to code up an alternative implementation? Maybe somebody who already has millions of dollars tied up in capital equipment - someone whose continued profitability requires making any optimization allowed by the protocol...
I'd be happy to write a new client with emphasis purely on performance and scalability from scratch... if someone wanted to throw large sums of money at me to do so and keep sending me more indefinitely to maintain and update it.

Maintain it indefinitely..?
A node should function without total reliance on one man to control what nodes do or don't do.

If you just stuck to simple rules, rather than half-baked rules that skip around the issue with half-promises, I'm sure you could get some VC funding.

Segwit, for instance, is not a final fix. It's not even an initial fix: malicious users will simply avoid using segwit keys and stick to native keys.

Even Schnorr is not a solution, because again malicious people just won't use those keys, as they serve no benefit to those wanting to bloat and cause issues.

However, finding real, beneficial solutions, such as a new 'priority' formula that actually has a real purpose and solves a real problem, benefits everyone, and knowing you're in the pool dev arena, that's something you should concentrate on.
full member
Activity: 322
Merit: 151
They're tactical
February 20, 2017, 06:00:43 PM
I wonder who may have an incentive to code up an alternative implementation? Maybe somebody who already has millions of dollars tied up in capital equipment - someone whose continued profitability requires making any optimization allowed by the protocol...
I'd be happy to write a new client with emphasis purely on performance and scalability from scratch... if someone wanted to throw large sums of money at me to do so and keep sending me more indefinitely to maintain and update it.

It's a bit the idea behind how I'm doing purenode Smiley

https://github.com/iadix/purenode Smiley

And there is already a multi-threaded SSE raytracer that works with it Smiley