Topic: So who the hell is still supporting BU? - page 7. (Read 29824 times)

-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
February 20, 2017, 06:54:43 PM
I wonder who may have an incentive to code up an alternative implementation? Maybe somebody who already has millions of dollars tied up in capital equipment - someone whose continued profitability requires making any optimization allowed by the protocol...
I'd be happy to write a new client with emphasis purely on performance and scalability from scratch... if someone wanted to throw large sums of money at me to do so and keep sending me more indefinitely to maintain and update it.
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
February 20, 2017, 06:08:48 PM
This is, again, a limitation of the code rather than a protocol problem

I see we agree. On this small point, at any rate.

I wonder who may have an incentive to code up an alternative implementation? Maybe somebody who already has millions of dollars tied up in capital equipment - someone whose continued profitability requires making any optimization allowed by the protocol...

Bitcoin is moving to Schnorr sigs.  We need them not only to stop O(n^2) attacks, but also to enable tree signatures and fungibility, etc.

Why would anyone waste time trying to fix the obsolete ECDSA scheme?

Perhaps the Unlimite_ crowd will decide to dig in their heels against Schnorr?

Oh wait, by blocking segwit they already have!   Grin
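For context on the O(n^2) point: under the legacy (pre-segwit) sighash algorithm, each of a transaction's inputs signs a hash over roughly the whole transaction, so total hashing work grows quadratically as a transaction gets bigger. A back-of-the-envelope sketch in C++ (the input count and size are illustrative assumptions, not figures from the thread):

Code:
// Rough arithmetic for legacy-sighash verification cost.
// The input count and tx size are made-up illustrative figures.
#include <cstdio>

int main() {
    const long long inputs   = 5000;       // inputs in one large tx
    const long long tx_bytes = 1000000;    // a ~1 MB transaction
    // Legacy sighash: each input's signature hashes ~the whole tx once,
    // so verification work is inputs * tx_bytes, i.e. O(n^2) in tx size.
    const long long hashed = inputs * tx_bytes;
    std::printf("bytes hashed to verify: ~%lld (%.1f GB)\n",
                hashed, hashed / 1e9);
    return 0;
}

Segwit's revised sighash hashes a bounded amount of data per input, which is what makes the total linear; Schnorr builds on top of that.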
full member
Activity: 322
Merit: 151
They're tactical
February 20, 2017, 06:01:56 PM
With the code I'm doing with purenode, I have good hope it will bring great simplification to the code base and allow experiments to be kick-started more easily Smiley and it's already designed from the first asm instruction to be thread-smart, with object references, atomic instructions & all. And it should adapt to most chains, including btc.
legendary
Activity: 3024
Merit: 1640
lose: unfind ... loose: untight
February 20, 2017, 05:35:28 PM
This is, again, a limitation of the code rather than a protocol problem

I see we agree. On this small point, at any rate.

I wonder who may have an incentive to code up an alternative implementation? Maybe somebody who already has millions of dollars tied up in capital equipment - someone whose continued profitability requires making any optimization allowed by the protocol...
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
February 20, 2017, 05:27:04 PM
By the way, as far as I can understand the bitcoind code as I read it (and no doubt I will be corrected if I'm wrong, which is fine), this is an attack vector where "choosing between them" is the least of your problems, because bitcoind cannot process both blocks concurrently and then decide that (2) finished long before it finished processing (1). This means that if a (1) block hits even 1us before the (2), bitcoind will sit there processing it until it has finished before it can process (2). While it is purely a limitation of the software as it currently stands that it cannot process multiple blocks concurrently in a multithreaded fashion, due to the coarse-grained locking in the software, it doesn't change the fact there is no way to deal with this at present. It would require a drastic rewrite of the existing code to make the locking fine-grained enough to do this, or a new piece of software written from the ground up; both of which carry their own risks.

Couldn't this issue be worked around by pre-filtering the traffic coming into the bitcoin daemon? "Bad" transaction detection would need to be at the protocol level. The simplest fix would be rejecting transactions over a certain size. Of course that's imperfect, but the filtering could become more fine-grained and accurate over time. It might even be possible to do this with firewall rules?
This is a block and the transactions it contains we're talking about, not a simple broadcast transaction, and we don't want to start filtering possibly valid blocks...

No filtering required. When a potentially solved block arrives, spawn a thread to start validating it. When another potentially solved block arrives, spawn another thread to start validating it. First one to validate is the one you build your candidate for the next round atop.
You missed the point of my original response entirely then - you CAN'T spawn a thread to validate it, because of the locking I mentioned before. If you spawn a thread to validate the block, nothing else can do anything in the meantime anyway - you can't process transactions, you can't validate other blocks. This is, again, a limitation of the code rather than a protocol problem, but it would take a massive rewrite to get around it.

LMFAO.  This is like watching a sincere but naive do-gooder try to convince brain-damaged aborigines they should stop huffing gas.

There is missing the point and then there is incapable of understanding the point.

jbreher and classicsucks fall into the latter category, because obviously neither has read SICP.   Tongue
full member
Activity: 322
Merit: 151
They're tactical
February 20, 2017, 05:16:43 PM

if you start introducing different layers, you start introducing hierarchies and 'kings' and power grabbing..

all full nodes should all do the same job, because thats the purpose of the network. they all follow the same rules and all treat transactions the same.

For me it's more a question of macro organisation than power grabbing.

If you take for example how a good caching proxy or intranet solution works to optimize traffic and caching: because it has access to all the traffic from the subnetwork, it can optimize and cache a certain number of things more efficiently. It has a "meta node" view of the whole traffic, so it knows what other nodes of the subnet are requesting or sending to whom, and that allows for some macro management which is impossible to do at the single-node level.

Even if it allows some optimization on the local subnetwork due to macro management, you wouldn't say it's "power grabbing" or even hierarchic, even if it sees the traffic at a level above the individual nodes. The role is still mostly passive, just macro management, and already I believe this could open a way to optimize bitcoin traffic inside a subnetwork.

And only what really needs to be read or sent outside of the local network would really be sent. Aka mined.

It's more this kind of idea than the introduction of a true layer or hierarchy Smiley


I can say I have done my share of pretty hard-core stuff with cores and interrupts, and if there is one constant in this kind of thing, it is:

you want something to scale? You have to divide it into independent subsets that can be processed separately.

That's the only golden rule for good scaling.

You can call this fragmenting or hierarchizing, but it's just about organising data into subgroups when it makes more sense to process them by group, because they are highly related to each other.

Kinda like octrees are used in video games to limit the amount of computation a client has to do on what it can see or interact with.

Not that those subnetworks have to be static or follow a 100% deterministic pattern; they can be adapted when it makes sense, if certain nodes interact more with certain addresses than others.
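As a minimal sketch of that golden rule in C++: if work is partitioned into subsets that share no state, each subset can run on its own thread with no global lock. The Tx type, the grouping, and ValidateGroup are hypothetical illustrations, not purenode or bitcoind code:

Code:
// Illustrative only: validate disjoint groups of work in parallel.
#include <atomic>
#include <thread>
#include <vector>

struct Tx { int group; /* inputs, outputs, ... */ };

std::atomic<long> validated{0};

void ValidateGroup(const std::vector<Tx>& group) {
    // Groups share no state, so no lock is taken here.
    for (const Tx& tx : group) {
        (void)tx;  // ... signature checks, lookups local to this group ...
        validated.fetch_add(1, std::memory_order_relaxed);
    }
}

int main() {
    std::vector<std::vector<Tx>> groups(8);  // disjoint subsets
    // ... partition incoming txs into groups, e.g. by address locality ...
    std::vector<std::thread> workers;
    for (const auto& g : groups)
        workers.emplace_back(ValidateGroup, std::cref(g));
    for (auto& w : workers) w.join();
    return 0;
}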

-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
February 20, 2017, 04:48:45 PM
By the way, as far as I can understand the bitcoind code as I read it (and no doubt I will be corrected if I'm wrong, which is fine), this is an attack vector where "choosing between them" is the least of your problems, because bitcoind cannot process both blocks concurrently and then decide that (2) finished long before it finished processing (1). This means that if a (1) block hits even 1us before the (2), bitcoind will sit there processing it until it has finished before it can process (2). While it is purely a limitation of the software as it currently stands that it cannot process multiple blocks concurrently in a multithreaded fashion, due to the coarse-grained locking in the software, it doesn't change the fact there is no way to deal with this at present. It would require a drastic rewrite of the existing code to make the locking fine-grained enough to do this, or a new piece of software written from the ground up; both of which carry their own risks.

Couldn't this issue be worked around by pre-filtering the traffic coming into the bitcoin daemon? "Bad" transaction detection would need to be at the protocol level. The simplest fix would be rejecting transactions over a certain size. Of course that's imperfect, but the filtering could become more fine-grained and accurate over time. It might even be possible to do this with firewall rules?
This is a block and the transactions it contains we're talking about, not a simple broadcast transaction, and we don't want to start filtering possibly valid blocks...

No filtering required. When a potentially solved block arrives, spawn a thread to start validating it. When another potentially solved block arrives, spawn another thread to start validating it. First one to validate is the one you build your candidate for the next round atop.
You missed the point of my original response entirely then - you CAN'T spawn a thread to validate it, because of the locking I mentioned before. If you spawn a thread to validate the block, nothing else can do anything in the meantime anyway - you can't process transactions, you can't validate other blocks. This is, again, a limitation of the code rather than a protocol problem, but it would take a massive rewrite to get around it.
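To make the disagreement concrete, here is a minimal C++ sketch of the "race two candidate blocks" idea, under the assumption of a single coarse lock in the spirit of bitcoind's cs_main. Block, ValidateBlock and the timings are hypothetical stand-ins, not Bitcoin Core APIs:

Code:
// Hypothetical sketch: try to race-validate two competing blocks and
// build on whichever finishes first.
#include <chrono>
#include <future>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>

struct Block { std::string name; int validate_ms; };

std::mutex cs_main_like;  // one coarse lock guarding all validation state

bool ValidateBlock(const Block& b) {
    // Both "concurrent" validations queue on this lock, so arrival
    // order, not completion time, decides which block finishes first.
    std::lock_guard<std::mutex> lock(cs_main_like);
    std::this_thread::sleep_for(std::chrono::milliseconds(b.validate_ms));
    return true;  // assume both blocks are valid for the sketch
}

int main() {
    Block b1{"block-1", 500};  // arrives first, slow to validate
    Block b2{"block-2", 50};   // arrives just after, fast to validate

    auto f1 = std::async(std::launch::async, ValidateBlock, b1);
    auto f2 = std::async(std::launch::async, ValidateBlock, b2);
    f1.get();
    f2.get();
    // With fine-grained locking, b2 could win the race and become the
    // tip we mine on; with the shared lock above, b1 always does.
    std::cout << "done\n";
    return 0;
}

Because both validations queue on the same mutex, the block that arrived first finishes first no matter how cheap the second one is to check - which is ck's point; only fine-grained locking would let the race actually happen.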
legendary
Activity: 2156
Merit: 1072
Crypto is the separation of Power and State.
February 20, 2017, 03:09:56 PM
By the way, as far as I can understand the bitcoind code as I read it (and no doubt I will be corrected if I'm wrong, which is fine), this is an attack vector where "choosing between them" is the least of your problems, because bitcoind cannot process both blocks concurrently and then decide that (2) finished long before it finished processing (1). This means that if a (1) block hits even 1us before the (2), bitcoind will sit there processing it until it has finished before it can process (2). While it is purely a limitation of the software as it currently stands that it cannot process multiple blocks concurrently in a multithreaded fashion, due to the coarse-grained locking in the software, it doesn't change the fact there is no way to deal with this at present. It would require a drastic rewrite of the existing code to make the locking fine-grained enough to do this, or a new piece of software written from the ground up; both of which carry their own risks.

Couldn't this issue be worked around by pre-filtering the traffic coming into the bitcoin daemon? "Bad" transaction detection would need to be at the protocol level. The simplest fix would be rejecting transactions over a certain size. Of course that's imperfect, but the filtering could become more fine-grained and accurate over time. It might even be possible to do this with firewall rules?
This is a block and the transactions it contains we're talking about, not a simple broadcast transaction, and we don't want to start filtering possibly valid blocks...

No filtering required. When a potentially solved block arrives, spawn a thread to start validating it. When another potentially solved block arrives, spawn another thread to start validating it. First one to validate is the one you build your candidate for the next round atop.

Gee it's all so simple.  I wish BlockstreamCore would get off their censor ships and implement common sense solutions like the one you have described.

But since you're here to save Bitcoin, they don't need to.

You're going to make them obsolete with this amazing, novel "no filtering required" approach.

Now where is the link to your github?  I'd like to try out the fine-grained locking implemented in your drastic rewrite of the existing code and test it for new risks.

Oh wait, what's that?  You don't code and are just spewing criticism from the development-free zone called your armchair?

Hold on...  You don't even understand how Bitcoin's tx flooding and block creation mechanisms interact.

But here you are presuming to tell Con Kolivas of all people how to make Bitcoin great again.



"pre-filter tx using firewall rules?"

OMG I CAN'T EVEN   Grin Grin Grin Grin Grin
legendary
Activity: 4214
Merit: 4458
February 20, 2017, 02:58:48 PM
We have two options:

1) centralized gold + second layer on top (Core(upstream filtered SEGWIT nodes) + LN)
2) decentralized gold + second layer on top (any node thats not biased + LN)

Pretty simple decision.

fixed that for you
legendary
Activity: 3024
Merit: 1640
lose: unfind ... loose: untight
February 20, 2017, 12:26:08 PM
By the way, as far as I can understand the bitcoind code as I read it (and no doubt I will be corrected if I'm wrong, which is fine), this is an attack vector where "choosing between them" is the least of your problems, because bitcoind cannot process both blocks concurrently and then decide that (2) finished long before it finished processing (1). This means that if a (1) block hits even 1us before the (2), bitcoind will sit there processing it until it has finished before it can process (2). While it is purely a limitation of the software as it currently stands that it cannot process multiple blocks concurrently in a multithreaded fashion, due to the coarse-grained locking in the software, it doesn't change the fact there is no way to deal with this at present. It would require a drastic rewrite of the existing code to make the locking fine-grained enough to do this, or a new piece of software written from the ground up; both of which carry their own risks.

Couldn't this issue be worked around by pre-filtering the traffic coming into the bitcoin daemon? "Bad" transaction detection would need to be at the protocol level. The simplest fix would be rejecting transactions over a certain size. Of course that's imperfect, but the filtering could become more fine-grained and accurate over time. It might even be possible to do this with firewall rules?
This is a block and the transactions it contains we're talking about, not a simple broadcast transaction, and we don't want to start filtering possibly valid blocks...

No filtering required. When a potentially solved block arrives, spawn a thread to start validating it. When another potentially solved block arrives, spawn another thread to start validating it. First one to validate is the one you build your candidate for the next round atop.
legendary
Activity: 3556
Merit: 9692
#1 VIP Crypto Casino
February 20, 2017, 11:13:40 AM
Upgraded to 0.13.2 finally yesterday. Fuck BU Cheesy
legendary
Activity: 1204
Merit: 1028
February 20, 2017, 11:05:01 AM
We have two options:

1) Decentralized gold + second layer on top (Core + LN)
2) Centralized gold + second layer on top (BU + LN)

Pretty simple decision.
full member
Activity: 322
Merit: 151
They're tactical
February 20, 2017, 10:40:23 AM
why?
why even have "trusted nodes"..
all nodes should be on the same FULL validation playing field.
if you start introducing different layers, you start introducing hierarchies and 'kings' and power grabbing..

all full nodes should all do the same job, because thats the purpose of the network. they all follow the same rules and all treat transactions the same.

It's more about having private agreements between nodes that are not necessarily based on the blockchain Wink

LN is a separate network from bitcoin.. the hint is in what the N stands for.
though LN grabs the tx data from bitcoins network. the "private agreement" is on a separate network managed by separate nodes (currently programmed in Go, not C++).

no point bastardising bitcoins network for LN when LN can remain its own separate offchain network that only needs to grab bitcoin data once every couple weeks (current mindset of channel life cycle).

LN should remain as just a voluntary second layer service, outside of bitcoins main utility.

That's why, if the alternatives are anyway being stuck with a slow, inefficient consensus or moving to a fully private network where all the transactions will be shadowed, why not bastardize a bit the way nodes work, to deal with private processing of certain intermediate results.

Because anyway, as far as I know, LN is not going to solve much more than this. Even if it's still better because it has a true mechanism of confirmation, that mechanism is still not completely as safe as PoW, so it still implies weakened security.

And if it's to be used as a private network of trusted nodes anyway, with no way to make sure it's completely in sync with the rest of the network, maybe it's not worse to make this more explicit: add mechanisms to allow faster/cheaper transactions between trusted parties outside of the memory pool, push the transactions to the memory pool only when it's more optimal, and eventually rework the whole transaction flow to make it more optimal at the moment it has to be mined, keeping the intermediate operations private to the network.
legendary
Activity: 4214
Merit: 4458
February 20, 2017, 10:02:30 AM
why?
why even have "trusted nodes"..
all nodes should be on the same FULL validation playing field.
if you start introducing different layers, you start introducing hierarchies and 'kings' and power grabbing..

all full nodes should all do the same job, because thats the purpose of the network. they all follow the same rules and all treat transactions the same.

It's more about having private agreements between nodes that are not necessarily based on the blockchain Wink

LN is a separate network from bitcoin.. the hint is in what the N stands for.
though LN grabs the tx data from bitcoins network. the "private agreement" is on a separate network managed by separate nodes (currently programmed in Go, not C++).

no point bastardising bitcoins network for LN when LN can remain its own separate offchain network that only needs to grab bitcoin data once every couple weeks (current mindset of channel life cycle).

LN should remain as just a voluntary second layer service, outside of bitcoins main utility.
full member
Activity: 322
Merit: 151
They're tactical
February 20, 2017, 09:48:22 AM
why?
why even have "trusted nodes"..
all nodes should be on the same FULL validation playing field.
if you start introducing different layers, you start introducing hierarchies and 'kings' and power grabbing..

all full nodes should all do the same job, because thats the purpose of the network. they all follow the same rules and all treat transactions the same.

It's more about having private agreements between nodes that are not necessarily based on the blockchain Wink

Not saying this should be assumed as the norm, but when several nodes can reach an off-chain agreement on how the transaction flow is supposed to be timed on their side, it can still allow for optimization, if the intermediate results don't need to be seen by the whole network.

Otherwise we'd need a better definition of transaction flow to allow decentralized optimization in the cases where it can make a difference, but that also means bloating the whole network with things that could be kept private without creating big security problems for the parties involved.

i think you need to go spend some more time researching bitcoin. and start learning how to keep consensus of nodes.. not fragment it.

I guess I'm more like Anakin Skywalker  Grin  I care about objectives, results and timing. Consensus is too slow. You need to understand the real nature of the force  Cool

The consensus has to agree on the end result, but it doesn't always need to know all the details :p

legendary
Activity: 4214
Merit: 4458
February 20, 2017, 09:29:00 AM
I was thinking about something along this same line: having the memory pool used as some kind of optimization layer above the blockchain, to pre-process certain transactions and spare them from the mining pool.


every node has its own mempool. and as long as a node is not deleting transactions for random non-rule reasons, each node keeps pretty much the same transactions as other nodes... including the nodes of pools. the only real variant is if a node has only just been set up and has not been relayed the transactions other nodes have seen.

It's why another tx pool could be made that is only shared between trusting nodes, with its own pre-processing; that way useless or bad transactions are never even relayed to the mining nodes through the memory pool. But intermediate/temporary results can still be seen in those nodes, even if they don't necessarily need to be confirmed or mined before a certain time.

why?
why even have "trusted nodes"..
all nodes should be on the same FULL validation playing field.
if you start introducing different layers, you start introducing hierarchies and 'kings' and power grabbing..

all full nodes should all do the same job, because thats the purpose of the network. they all follow the same rules and all treat transactions the same.

i think you need to go spend some more time researching bitcoin. and start learning how to keep consensus of nodes.. not fragment it.
full member
Activity: 322
Merit: 151
They're tactical
February 20, 2017, 09:24:18 AM
I was thinking about something along this same line: having the memory pool used as some kind of optimization layer above the blockchain, to pre-process certain transactions and spare them from the mining pool.


every node has its own mempool. and as long as a node is not deleting transactions for random non-rule reasons, each node keeps pretty much the same transactions as other nodes... including the nodes of pools. the only real variant is if a node has only just been set up and has not been relayed the transactions other nodes have seen.

It's why another tx pool could be made that is only shared between trusting nodes, with its own pre-processing; that way useless or bad transactions are never even relayed to the mining nodes through the memory pool. But intermediate/temporary results can still be seen in those nodes, even if they don't necessarily need to be confirmed or mined before a certain time.
legendary
Activity: 4214
Merit: 4458
February 20, 2017, 09:14:06 AM
I was thinking about something along this same line: having the memory pool used as some kind of optimization layer above the blockchain, to pre-process certain transactions and spare them from the mining pool.


you do know there is no central mempool.

every node has its own mempool. and as long as a node is not deleting transactions for random non-rule reasons, each node keeps pretty much the same transactions as other nodes... including the nodes of pools. the only real variant is if a node has only just been set up and has not been relayed the transactions other nodes have seen.

pools and nodes validate transactions as they are relayed to them. thus when a block is made there is less reason to re-validate every transaction over again. (unless a node is new to the network and has not seen the relayed transactions. or, as has been hypothesised in this thread, where nodes are biasedly rejecting transactions for no rule-breaking reason)
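franky1's relay-time validation argument is essentially a cache keyed by txid: pay the full validation cost when a transaction is relayed, then skip it at block time. A minimal C++ sketch, with hypothetical names and a stubbed-out validator:

Code:
// Illustrative sketch of relay-time validation caching.
#include <cstdio>
#include <string>
#include <unordered_set>
#include <vector>

std::unordered_set<std::string> g_validated;  // txids checked at relay

bool FullyValidate(const std::string& txid) {
    (void)txid;
    return true;  // stand-in for the expensive sig/UTXO checks
}

void OnTxRelayed(const std::string& txid) {
    if (FullyValidate(txid))
        g_validated.insert(txid);  // pay the validation cost once, here
}

// At block time, only never-seen txs need the expensive checks.
int MissingFromMempool(const std::vector<std::string>& block_txids) {
    int must_fetch_and_check = 0;
    for (const auto& id : block_txids)
        if (!g_validated.count(id))
            ++must_fetch_and_check;  // the slow path franky1 warns about
    return must_fetch_and_check;
}

int main() {
    OnTxRelayed("aa11");
    OnTxRelayed("bb22");
    std::printf("to fetch+check: %d\n",
                MissingFromMempool({"aa11", "bb22", "cc33"}));  // prints 1
    return 0;
}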

full member
Activity: 322
Merit: 151
They're tactical
February 20, 2017, 08:10:22 AM
Quote
however we should think about changing something real simple.
the tx priority formula, to actually solve things like bloat/respend spam. whereby the more bloated (tx bytes) and the more frequently (tx age) a spend is made, the more it costs.

..

I was thinking about something along this same line: having the memory pool used as some kind of optimization layer above the blockchain, to pre-process certain transactions and spare them from the mining pool.

But at the same time, I'm thinking memory pools already have their specific purpose for the miners, and I'm not sure it's that easy to introduce more complex logic into the mempool algorithm directly; it could make its real purpose and functioning more complex.

But maybe something like a completely different transaction pool could be thought of, before the memory pool, where the temporality of the data is taken into account: how often the inputs/outputs will change. It could do the operations in the cache in a non-confirmed manner, to save mining fees and confirmation time on intermediate results, provided it's explicitly clear that the intermediate results are non-confirmed and the parties using this pool can accommodate temporarily non-confirmed txs. It would only push the txs into the actual mempool when they are not going to be used in the cache for a while, or when a new input from those txs enters the memory pool.

It could make sense for several nodes in a related commercial network to share such an alternative tx pool when there is a high number of tx chains that can be easily verified, because they all originate from the same trusted network. It could probably save a number of intermediate operations on the blockchain without causing too many security problems. The drawback is that those transactions would only be visible inside this network until they are finally pushed to the main memory pool for mining, and they would remain 'temporary' transactions as long as they are not pushed to the main memory pool and mined.

That would not replace true blockchain confirmation when it's needed, but in certain cases it could probably make some sense, when data temporality can be predicted because lots of operations on this data happen in a trusted network subset.

Take for example an e-shop and a supplier who have enough mutual trust with each other: the customer would put orders in the transaction cache, but maybe the shop will only collect them at the end of the day, so they don't have to be mined instantly, only to still be on the network. And then maybe the supplier also will not collect the shop's orders before a certain time, so the txs from the shop to the supplier don't need to be fully confirmed before that time either. Or certain intermediate results can be skipped from the memory pool entirely (a sketch of this netting idea follows at the end of this post).

Or make a memory pool that can fully take data temporality into account, with more marking, to better optimize when a transaction really needs to be mined, or when some operations can be done and their intermediate results skipped from the memory pool.

But I'm not sure it's a good thing to do this directly in the main memory pool, because not everybody will necessarily agree; this behavior should be completely optional and explicitly requested for certain transactions, when it makes sense to optimize data temporality before mining.
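A minimal sketch of the shop/supplier netting idea from this post: trusted parties record intermediate transfers off-pool, and only the netted result is pushed toward the real mempool at settlement time. Everything here (DeferredPool, the parties, the amounts) is hypothetical bookkeeping, not a consensus mechanism:

Code:
// Hypothetical trusted-party pool: record intermediate transfers off-pool,
// push only the netted result toward the real mempool at settlement.
#include <cstdio>
#include <map>
#include <string>

class DeferredPool {
    std::map<std::string, long long> balance_;  // party -> net satoshis
public:
    void Transfer(const std::string& from, const std::string& to,
                  long long sats) {
        balance_[from] -= sats;  // intermediate result, never mined
        balance_[to]   += sats;
    }
    void Settle() {  // only the net positions become real transactions
        for (const auto& [party, net] : balance_)
            if (net != 0)
                std::printf("push tx to mempool: %s nets %+lld\n",
                            party.c_str(), net);
        balance_.clear();
    }
};

int main() {
    DeferredPool pool;
    pool.Transfer("customer", "shop", 700);  // orders during the day
    pool.Transfer("customer", "shop", 300);
    pool.Transfer("shop", "supplier", 600);  // shop restocks
    pool.Settle();  // end of day: several IOUs, one net position each
    return 0;
}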

legendary
Activity: 4214
Merit: 4458
February 20, 2017, 06:41:41 AM
sigops
the way bitcoin is moving, most nodes would have pre-validated the transactions at relay. and all they need is the block header data to know which transactions belong to which block. so sigops are less of an issue at block propagation because the validation by most nodes is pre-done.

if we start not relaying transactions then all of a sudden nodes will start needing to request more tx data after block propagation, because some tx's are not in a node's mempool. we should not be rejecting tx's, because it will slam nodes later, causing block propagation delays
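franky1's propagation point, sketched in C++: a node that already holds a block's transactions from relay can rebuild the block from just the txid list and fetch only what's missing (roughly the compact-blocks idea). The names and types here are hypothetical stand-ins, not the BIP 152 wire format:

Code:
// Sketch: rebuild a block from its txid list, fetching only unknown txs.
#include <cstdio>
#include <string>
#include <unordered_map>
#include <vector>

struct Tx { std::string id; std::string data; };

std::unordered_map<std::string, Tx> g_mempool;  // filled at relay time

std::vector<std::string> ReconstructBlock(
        const std::vector<std::string>& block_txids,
        std::vector<Tx>& block_out) {
    std::vector<std::string> missing;
    for (const auto& id : block_txids) {
        auto it = g_mempool.find(id);
        if (it != g_mempool.end())
            block_out.push_back(it->second);  // already validated at relay
        else
            missing.push_back(id);  // must be requested after propagation,
    }                               // the delay franky1 warns about
    return missing;
}

int main() {
    g_mempool["aa"] = Tx{"aa", "..."};  // relayed and validated earlier
    std::vector<Tx> block;
    auto missing = ReconstructBlock({"aa", "bb"}, block);
    std::printf("txs to request: %zu\n", missing.size());  // prints 1
    return 0;
}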

however we should think about changing something real simple.
the tx priority formula, to actually solve things like bloat/respend spam. whereby the more bloated (tx bytes) and the more frequently (tx age) a spend is made, the more it costs.

the old priority fee solved nothing. it was just used to make the richer tx's gain more priority, while a small-value tx was left waiting weeks to get priority.

so lets envision a new priority formula that actually has real benefit.

imagine that we decided its acceptable that people should have a way to get priority if they have a lean tx and signal that they only want to spend funds once a day. where if they want to spend more often, costs rise, and if they want a bloated tx, costs rise.. which then allows those that just pay their rent once a month or buy groceries every couple of days to be ok using onchain bitcoin.. and where the cost of trying to spam the network (every block) becomes expensive, whereby they would be better off using LN. (for things like faucet raiding every 5-10 minutes)

so lets think about a priority fee thats not about rich vs poor but about respend spam and bloat.

lets imagine we actually use the tx age combined with CLTV, so a user whose tx age is under a day can signal they want it confirmed while willingly adding some maturity time, locking themselves out of spending for an average of 24 hours.

and where the bloat of the tx vs the blocksize has some impact too... rather than the old formula, which was more about the value of the tx

here is one example
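The example image has not survived in this archive, so here is a minimal C++ sketch of a formula with the shape franky1 describes - cost scaling up with tx bloat and with respend frequency. The base fee and weights are invented for illustration; they are not his actual numbers:

Code:
// Illustrative priority fee: bloat and respend frequency cost more.
// The base fee and weights are invented, not franky1's actual numbers.
#include <algorithm>
#include <cstdio>

long long FeeFor(long long tx_bytes, double days_since_inputs_moved) {
    const double base = 1000.0;          // satoshis for a lean daily tx
    double bloat = tx_bytes / 226.0;     // vs a lean 1-in-2-out tx
    double respend =                     // 144 blocks per day
        1.0 / std::max(days_since_inputs_moved, 1.0 / 144.0);
    return static_cast<long long>(base * bloat * respend);
}

int main() {
    std::printf("lean, once a day:      %lld sat\n", FeeFor(226, 1.0));
    std::printf("bloated, once a day:   %lld sat\n", FeeFor(10000, 1.0));
    std::printf("lean, every block:     %lld sat\n", FeeFor(226, 1.0 / 144));
    std::printf("bloated, every block:  %lld sat\n", FeeFor(10000, 1.0 / 144));
    return 0;
}

With these made-up weights a lean once-a-day spender pays ~1,000 satoshis while a bloated every-block spender pays several million - the same ordering as the tiers below.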


as you can see its not about tx value. its about bloat and age.
this way
those not wanting to spend more than once a day and dont bloat the blocks get preferential treatment onchain.
if you are willing to wait a day but you're taking up 1% of the blockspace, you pay more
if you want to be a spammer spending every block. you pay the price
and if you want to be a total ass-hat and be both bloated and respending often you pay the ultimate price

yes its not perfect. but at least lets think about using CODE to choose whats acceptable, rather than playing bankers' economic-value games where rich guys always win. that way we are no longer pushing third-world countries out of using bitcoins mainnet.