
Topic: Bitcoin Scaling Solution Without Lightning Network... - page 3. (Read 1727 times)

legendary
Activity: 2898
Merit: 1823
@franky1
This is what we call being proactive and anticipating. The example you give about the SegWit roadmap from 2014 is one example. Are we forced to use SegWit? As DooMAD says, they cannot integrate everyone's wishes, but they anticipate in order to make Bitcoin usable with various convenient solutions. It's like complaining that someone is working to improve Bitcoin, and then talking about consensus. A consensus from the masses could turn into a ten-year-old kid's decision.

  No, we are not forced to use Segwit. However, if someone chooses not to use Segwit, they are penalized by paying higher fees. This may only amount to pennies at the moment, but it can add up. If BTC starts to get used even more, many casual users will then be compelled to use LN to avoid prohibitive fees.

Or be forced to use Bitcoin Cash. I believe that was their rationale for splitting from Bitcoin, right? But apparently, not that many people in the community believed that bigger blocks for scalability were a good trade-off against decentralization.

The social consensus remains "Bitcoin is Bitcoin Core".

Why should anyone be forced to settle for something which is less secure?  So far LN is still in the alpha testing stage. The risk of losing funds is too high ATM. Maybe when they improve their network, I'll want to use it. BCH has always had less hash rate and is therefore less secure. I think people should be able to utilize the most secure network out there in an affordable manner and not be forced to settle for some less secure stuff. Even if the Lightning Network gets its act together, a second-layer solution will be second best when it comes to security. So I guess the BTC blockchain will only be secure VIP2VIP cash. The riffraff can settle for less secure crap.  Cheesy

VIP2VIP cash? Bitcoin will remain an open system that anyone in the world can use. What is "VIP" about Bitcoin? Nothing.

Are the fees so constantly high that they discourage everyone from using Bitcoin? I don't believe they are. The fees have been low since the increasing adoption of Segwit.

Plus, about sharding: franky1, do you agree that bigger blocks are inherently centralizing, and that "sharding" just prolongs the issue instead of solving it?
legendary
Activity: 4396
Merit: 4755
shards don't.

i already looked into sharding months ago, played around and ran scenarios. and like I said a few posts ago, once you wash away all the buzzwords it all just comes full circle

many sharding concepts exist.
some are:
master chain (single), where every 10 blocks each block is designated to a certain region/group
    - this way no group can mine 10 blocks in one go, but only gets 1 block in, then has to wait 9 blocks before another chance

master node (multichain), whereby there are multiple chains that swap value
    - I say master node because, although some sharding concepts pretend not to need them,
      inevitably, without a master node the region/group nodes end up having to "trust" the other chain when sending utxos

and many more concepts.
the issue is "trust": data that is not all in one place becomes its own weakness.
even LN devs have noticed this and realised that LN full nodes would need to be master nodes downloading and monitoring bitcoin, litecoin, vertcoin.
it seems some devs of sharding projects have not yet seen the dilemma play out.
(5 weak points are more prone to attack than 1 strong point.
EG
it is easier to 51% attack one of 5 points of 10 exahash each than it is to 51% attack one point of 50 exahash. thus if one of the 5 weak points gets hit.. the damage is done.)
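as a rough back-of-the-envelope sketch of that arithmetic (illustrative numbers only, assuming the honest hashrate splits evenly across the shards):

Code:
# rough sketch: hashrate an attacker must bring to out-mine the honest majority of one chain
# assumes the honest hashrate is split evenly across N shards (hypothetical figures)
TOTAL_HONEST_EH = 50.0   # total honest hashrate, in exahash/s

for shards in (1, 5):
    per_shard = TOTAL_HONEST_EH / shards
    # to control >50% of a shard the attacker must (slightly) exceed its honest hashrate
    print(f"{shards} shard(s): need just over {per_shard:.0f} EH/s to 51% one of them")

# 1 shard(s): need just over 50 EH/s to 51% one of them
# 5 shard(s): need just over 10 EH/s to 51% one of them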

I do agree that using atomic swaps (with the recent advancements in HTLC) and forks has something to do with scaling, the problem being price as a free variable. It would be interesting though, having a solution for this problem.

no, atomic swaps and HTLC are BADDDDDDD. think of the UTXO set (as atomic swaps are about 2 tokens and pegging).
as I originally said, it's better to double mine (bc1q -> sc1); that way bitcoin sees the bc1q as spent and thus no extra UTXO,
and no holding a large UTXO set of locked unspents (the sc1 output just vanishes and is not counted in btc's UTXO set, as btc can't spend an sc1)....
but again the whole need for a master node to monitor both chains comes up and circles around.

so yea, LN, sharding, sidechains, multichains.. they all end up back full circle at needing a "masternode" (essentially a full node)
that monitors everything.. which ends up as the debate: if a master node exists, just call it a full node and get back to the root problem.

i could waffle on about all the weaknesses of the 'trust' of relying on separate nodes holding separate data, but I'll try to keep my posts short

block time reduction .. for a moderate improvement in bitcoin parameters it is far better than a comparable block size increase. They may look very similar but there is a huge difference: a reduction in block time helps with mining variance and supports small pools/farms. The technical difficulties involved are not big deals, as everything could be adjusted easily: block reward, halving threshold, ...

nope
transactions are already in people's mempools before a block is made.
nodes don't need to send a block with the transactions again. they just send the block header.
this is why stats show transactions take 14 seconds but a block only takes 2 seconds: a block header is small, and the whole verification is just joining the header to the already-obtained transactions.
again
all the nodes are looking for is the block header that links them together. the block header doesn't take much time at all.

because transactions are relayed (14 seconds), which is plenty of time within the 10 minute window.
if that 10 minute window is reduced to 5 minutes, then that's less time for transactions to relay.
i say this not about the average tx of up to 500 sigops.. but about cases where a tx has 16,000 sigops, which I warned about a few pages ago.
a 5 minute interval will be worse than a 10 min interval.
math: a 16k sigop tx will take 7 and a half minutes to relay across the network. meaning if a pool has seen a tx first, relayed it out, and then started mining a 5 minute block.. solves it, and relays out the block header..
the block header would have reached everyone in 5 minutes 2 seconds, but the bloated transaction (degrees of separation) has only reached 1000 nodes, as the last 'hop' from those 1000 to their 10 nodes each, multiplying it out to the last lot, has not had time to deal with the bloated tx
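as a toy timing model of that race (illustrative figures only, matching the ones above: a bloated tx needing ~7.5 minutes to cross the network, a header ~2 seconds):

Code:
# toy model of header relay vs bloated-tx relay under different block intervals (illustrative only)
TX_RELAY_S     = 7.5 * 60   # assumed time for a 16k-sigop tx to reach the whole network
HEADER_RELAY_S = 2          # assumed time for a block header to reach the whole network

for interval_min in (10, 5):
    header_done_at = interval_min * 60 + HEADER_RELAY_S   # block found, header flooded out
    tx_fully_relayed = TX_RELAY_S <= header_done_at
    print(f"{interval_min} min blocks: header everywhere at {header_done_at}s, "
          f"bloated tx already everywhere: {tx_fully_relayed}")

# 10 min blocks: header everywhere at 602s, bloated tx already everywhere: True
# 5 min blocks: header everywhere at 302s, bloated tx already everywhere: False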

it's also kinda why the 16k sigops limit exists: to keep things under 10 minutes. but it foolishly allows such a large amount that it doesn't keep txs down to seconds.

yes, a solution would be to bring it down to a lower tx sigop limit when reducing the block time.
but reducing the block time to 5 minutes alone brings:
the difficulty retarget down to weekly. hence discussions of moving it to 4032 blocks to bring it back to fortnightly.
the reward halving to every 2 years, which means all 21 mill mined in less time, unless it moves to 420,000 blocks for a 4 year halving.
and, as I said, a 5 minute interval without reducing the tx sigop limit will hurt propagation
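just the calendar arithmetic behind the retarget and halving points above (a sketch, nothing consensus-level):

Code:
# calendar math for a halved block interval (illustrative only)
MINUTES_PER_DAY = 24 * 60

def days_for(blocks, interval_min):
    return blocks * interval_min / MINUTES_PER_DAY

for interval_min in (10, 5):
    retarget_days = days_for(2016, interval_min)
    halving_years = days_for(210_000, interval_min) / 365
    print(f"{interval_min} min blocks: retarget every {retarget_days:.0f} days, "
          f"halving every {halving_years:.1f} years")

# 10 min blocks: retarget every 14 days, halving every 4.0 years
# 5 min blocks: retarget every 7 days, halving every 2.0 years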

so reducing block time is NOT simpler than just increasing block size.. there are a lot more ripple effects from reducing block time than from increasing block size.
also, what do end users gain with a ~5 min confirm? it's not like standing at a cashier's desk waiting for a confirm is any less frustrating.. it would require a 2 second confirm to make waiting in line at a cashier not be a frustration.
even 30 seconds feels like the same eternity as 10 minutes when you see an old lady counting change

Quote
Right, but it adds up as you go to the next and next hops. It is why we call it the proximity premium. The Bitcoin p2p network is not a complete graph; it takes something like 10 times longer for a block to be relayed to all miners. When you double or triple the number of txs, the proximity flaw gets worse by just a bit less than two or three times respectively.
I kinda explained it a few paragraphs ago in this post

Quote
No disputes. I just have to mention that it is infeasible to engineer the p2p network artificially, and AFAIK the current bitcoin networking layer allows nodes to drop slow/unresponsive peers, so if you could figure out an algorithm to help with a more optimized topology, it would be highly appreciated.

you have kinda found your own solution..
have 15 potential nodes and pick the 10 with the best ping/speed. naturally the network finds its own placement, where the slower nodes are on the outer rings and faster ones are at the centre
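a minimal sketch of that selection rule (hypothetical peers and ping values, not actual bitcoind behaviour):

Code:
# keep the 10 lowest-latency peers out of 15 candidates (illustrative only)
import random

candidates = {f"peer{i}": round(random.uniform(10, 500), 1) for i in range(15)}  # hypothetical ping (ms)

keep = sorted(candidates, key=candidates.get)[:10]   # fastest peers stay at the centre
drop = [p for p in candidates if p not in keep]      # slowest peers drift to the outer ring
print("keep:", keep)
print("drop:", drop)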
legendary
Activity: 1456
Merit: 1175
Always remember the cause!
last point raised by aliashraf about my idea of using spv:
it was not to actually be spv. it was just to use the spv mechanism for the first-time load screen for fresh users, and then after 10 seconds, as a secondary step, become the full node.

..
think of it this way:
would you rather download a 300gb game via a torrent, wait hours, and then play?
or
download a small free-roam level that, while you play it, downloads the entire game via torrents in the background?

my idea was not to just (analogy) download a free-roam level and that's it.
it was to use the SPV mechanism for the first loading screen to make the node useful in the first 10 seconds, so that while the node is then downloading the entire blockchain, people can at least do something while they wait instead of frustrating themselves waiting for the sync

As you may have noticed, I merited this idea of yours and, as you know, I have a lot to add here. Most importantly, a better idea than getting the UTXO set via a torrent download, which implies trust (you need the hash) and being subject to sybil attacks, could be implementing it in bitcoin as I've described in this post.
legendary
Activity: 1456
Merit: 1175
Always remember the cause!
franky,
Splitting is far different from forking. Forks inherit the full history and the state; shards don't. @mehanikalk has done a good job on a similar idea to the OP's and his topic Blockreduce ... is trending (by this subforum's measures) too. In both topics we are dealing with sharding, neither forks nor side-chains.

I do agree that using atomic swaps (with the recent advancements in HTLC) and forks has something to do with scaling, the problem being price as a free variable. It would be interesting though, having a solution for this problem.

Back to your recent post:
the other thing about reducing blocktime. (facepalm) reducing blocktime has these issues:
1. it reduces the 10 min interval for all the propagation things you highlight as an issue later in your post
2. it's not just mining blocks in 5 minutes. it's having to change the reward, the difficulty, and also the timing of the reward halving
3. changing these affects the estimate of when all 21 mill coins are mined (year ~2140)
I'm not offering block time reduction as an ultimate scaling solution, of course it is not. I'm just saying that for a moderate improvement in bitcoin parameters it is far better than a comparable block size increase. They may look very similar but there is a huge difference: a reduction in block time helps with mining variance and supports small pools/farms. The technical difficulties involved are not big deals, as everything could be adjusted easily: block reward, halving threshold, ...

Quote
as for propagation. if you actually time how long it takes to propagate, it actually is fast, it's only a couple of seconds
this is because at transaction relay it's about 14 seconds for transactions to get around 90% of the network, validated and set into mempool. as for a solved block: because full nodes already have the (majority of) transactions in their mempool, they just need the block header data and the list of txs, not the tx data, and then just ensure all the numbers (hashes) add up, which is just 2 seconds
Right, but it adds up as you go to the next and next hops. It is why we call it the proximity premium. The Bitcoin p2p network is not a complete graph; it takes something like 10 times longer for a block to be relayed to all miners. When you double or triple the number of txs, the proximity flaw gets worse by just a bit less than two or three times respectively.

Quote
having home users on 0.5mb internet trying to connect to 100 nodes is causing a bottleneck for those 100 nodes, as they are each only getting data streaming at 0.005mb (0.5/100)
...
yes of course, for independence have home users be full nodes, but the network topology should be that slow home users are on the last 'hop' of the relay, not at the beginning/middle.
No disputes. I just have to mention that it is infeasible to engineer the p2p network artificially, and AFAIK the current bitcoin networking layer allows nodes to drop slow/unresponsive peers, so if you could figure out an algorithm to help with a more optimized topology, it would be highly appreciated.

On the other hand, I think partitioning/sharding is a more promising solution for most of these issues. Personally I believe in sharding the state (UTXO), which is a very challenging strategy as it sits on the edge of forking.
legendary
Activity: 4396
Merit: 4755
last point raised by aliashraf about my idea of using spv:
it was not to actually be spv. it was just to use the spv mechanism for the first-time load screen for fresh users, and then after 10 seconds, as a secondary step, become the full node.

..
think of it this way:
would you rather download a 300gb game via a torrent, wait hours, and then play?
or
download a small free-roam level that, while you play it, downloads the entire game via torrents in the background?

my idea was not to just (analogy) download a free-roam level and that's it.
it was to use the SPV mechanism for the first loading screen to make the node useful in the first 10 seconds, so that while the node is then downloading the entire blockchain, people can at least do something while they wait instead of frustrating themselves waiting for the sync
legendary
Activity: 4396
Merit: 4755
the topic creator is talking about splitting the population/data in half

to split the block data in half, each half has to keep traceability. thus, basically 2 chains.
yea, you split the population in half. but the community tried that with all the forks
(I should have explained this in an earlier post. my methodology is working backwards)

with all that said, it's not just 'fork the coin', it's 'fork the coin and make it atomically swappable'.

the other thing the topic creator has not thought about is not just how to atomically swap, but
also that the mining is split across 2 chains instead of 1, thus weakening them both instead of having just 1 strong chain

it's also that, to ensure both chains comply with each other, a new "master"/"super" node has to be created that monitors both chains fully. which ends up back where things started, but this time the master node is juggling two data-chain lines instead of one.
.
so now we have a new FULL NODE of 2 data chains,
and a sub-layer of lighter nodes that only work as full nodes for a particular chain..

and then we end up discussing the same issues with the new master (full node) in relation to data storage, propagation, validation.. like I said, full circle. instead of splitting the network/population in half,
which eventually just weakens the network data, the new node layer doesn't change the original problem of the full node (now masternode)

(LN for instance wants to be a master node monitoring coins like bitcoin and litecoin and vertcoin and all other coins that are LN compatible)

which is why my methodology is backwards: I ran through some theoretical scenarios, skipped through the topic creator's idea, and went full circle back to addressing the full node (masternode) issues

which is why, if you're going to have masternodes that do the heavy work, you might as well just skip weakening the data by splitting it and just call a masternode a full node. after all, that's how it plays out when you run through the scenarios
legendary
Activity: 4396
Merit: 4755
The problem we are discussing here is scaling, and the framework the OP has proposed is kind of a hierarchical partitioning/sharding. I am afraid that instead of contributing to this framework, you sometimes write about side chains and are now denying that the problem is relevant at all. Considering what you are saying, there is no scaling problem at all!
the topic creator is proposing having essentially 2 chains, then 4 chains, then 8 chains.

we already have that, ever since clams split and then every other fork

the only difference is the OP is saying that the forks still communicate and atomic swap coins between each other..
the reason I digressed into sidechains is that, without going into buzzwords, having 2 chains that atomic swap, when simplified down to the average joe experience, is exactly the same on/off-ramp experience as sidechains.

i just made a simple solution to make it easily visible which "node-set" (chain) is holding which value (bc1q or sc1), without having to lock:peg value on one node-set (chain) to peg:create fresh coin in another node-set (chain).

because pegging (locking) is bad.. for these reasons:
it raises the UTXO set because coins are not treated as spent
the locks mean the coins in the UTXO set are out of circulation but still need to be kept in the UTXO set
the fresh coins of a sidechain don't have traceability back to a coinbase (block reward)

...
the other thing is that bitcoin is one chain.. and splitting the chain is not new (as my second sentence in this post highlighted)
...
the other thing about reducing blocktime. (facepalm) reducing blocktime has these issues:
1. it reduces the 10 min interval for all the propagation things you highlight as an issue later in your post
2. it's not just mining blocks in 5 minutes. it's having to change the reward, the difficulty, and also the timing of the reward halving
3. changing these affects the estimate of when all 21 mill coins are mined (year ~2140)
...
as for propagation. if you actually time how long it takes to propagate, it actually is fast, it's only a couple of seconds
this is because at transaction relay it's about 14 seconds for transactions to get around 90% of the network, validated and set into mempool. as for a solved block: because full nodes already have the (majority of) transactions in their mempool, they just need the block header data and the list of txs, not the tx data, and then just ensure all the numbers (hashes) add up, which is just 2 seconds
....
having home users on 0.5mb internet trying to connect to 100 nodes is causing a bottleneck for those 100 nodes, as they are each only getting data streaming at 0.005mb (0.5/100)
whereas a home user on 0.5mb internet with just 10 connections gives a 0.05mb data stream speed

imagine you're a business NEEDING to monitor millions of transactions because they are your customers.
you have fibre.. great. you set your node to only make 10 connections, but find those 10 connections are home users who are connecting to 100 nodes. you end up only getting data streamed to you at a combined 0.05mb. (bad)

but if those home users also decided to only connect to 10 nodes, you'll get data streams at 0.5mb
that's 10x faster

if you do the 'degrees of separation' math for a network of 10k nodes and say 1.5mb of block data:
the good propagation: 10 connections (0.5mb combined stream)
  10  *  10   *   10   *   10 = 10,000 nodes
  3sec + 3sec + 3sec + 3sec = 12 seconds for the network

the bad propagation: 100 connections (0.05mb combined stream)
  100  *   100   = 10,000 nodes
  30sec + 30sec  = 1 minute
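the same calculation as a small script (same illustrative figures: 10,000 nodes, a 1.5mb block, and the combined stream rates assumed above):

Code:
# degrees-of-separation sketch using the illustrative figures from the post above
NODES, BLOCK_MB = 10_000, 1.5
# (peers per node, assumed combined incoming stream in mb/s)
scenarios = {"good (10 peers)": (10, 0.5), "bad (100 peers)": (100, 0.05)}

def hops_to_cover(fanout, nodes):
    reached, hops = 1, 0
    while reached < nodes:          # each hop multiplies the reached set by the fanout
        reached *= fanout
        hops += 1
    return hops

for name, (fanout, stream_mbs) in scenarios.items():
    hops = hops_to_cover(fanout, NODES)
    per_hop_s = BLOCK_MB / stream_mbs   # seconds to pass the 1.5mb block along at that stream rate
    print(f"{name}: {hops} hops x {per_hop_s:.0f}s = {hops * per_hop_s:.0f}s to cover the network")

# good (10 peers): 4 hops x 3s = 12s to cover the network
# bad (100 peers): 2 hops x 30s = 60s to cover the network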

a lot of people think that connecting to as many nodes as possible is good, when in fact it is bad.
the point I am making is:
home users don't need to be making 120 connections to nodes to "help the network", because that in fact is causing a bottleneck

also, sending out 1.5mb of data to 100 nodes instead of just 10 nodes is a waste of bandwidth for home users.
also, if a home user only has bottom-line 3g/0.5mb internet speeds as opposed to fibre, those users are limiting the fibre users that have 50mb.. to only get data at 0.5mb, due to the slow speed of the sender.

so the network is better off being centralised around
10,000 business fibre users who NEED to monitor millions of transactions
rather than
10,000 home users who just need to monitor 2 addresses

yes of course, for independence have home users be full nodes, but the network topology should be that slow home users are on the last 'hop' of the relay, not at the beginning/middle.
legendary
Activity: 1456
Merit: 1175
Always remember the cause!
5. we are not in the pre-millennium era of floppy disks. we are in the era where:
256gb is fingernail size, not server size.
4tb hard drives are the cost of a grocery shop, not a lifetime pension.
4tb hard drives, even for 20mb blocks, would last the average life cycle of a pc anyway if all blocks were filled
internet is not dialup, it's fibre (landline), it's 5g (cellular)
Although I like the tone, I have to remind you of a somewhat bitter fact: none of these would help with scaling bitcoin, definitively. It is good news that Moore's law is still working (somehow), but the problem is not about the resources; it is the propagation delay of blocks, because of the time it takes to fully validate the transactions they are committing to. Unfortunately, propagation delay does not improve by Moore's law.

That said, I'm ok with a moderate improvement in the current numbers (by decreasing block time rather than increasing block size, which are just the same in this context), but it won't be a scaling solution, as it couldn't be used repeatedly because of the proximity premium problem in mining. Larger pools/farms would have a premium once they hit a block, as they are able to start mining the next block while their poorer competitors are busy validating the newborn block and relaying it (they have to do both if they don't want to be on an orphan chain).

Many people are confused about this issue; even Gavin was confused about it. I read an article from him arguing about how cheap and affordable a multi-terabyte HD is. It is not about HDs, nor about internet connectivity or bandwidth; it is about the number of transactions that need validation, the delayed progress of blocks, and the resulting centralization threats.

Quote
if you're on a capped internet then you're not a business, as you're on a home/residence internet plan
if you're not a business then you are not NEEDING to validate and monitor millions of transactions
Home/non-business full nodes are critical parts of the bitcoin ecosystem, and our job is to strengthen them by making it more feasible for them to stay and to grow considerably in number.

Quote
now, the main gripe about blocksize:
it's not actually the blocksize. it's the time it takes to initially sync people's nodes.
now why are people angry about that.
simple. they cannot see the balance of their imported wallet until after it's synced.
Good point but not the most important issue with block size.

Quote
solution
spv/bloom filter the utxo data of imported addresses first, and then sync second
that way people see balances first and can transact, and the whole syncing time becomes a background thing no one realises is happening, because they are able to transact within seconds of downloading and running the app.
i find it funny how the most resource-heavy task of a certain brand of node is done first, when it just causes frustrations.
after all, if people bloom-filter imported addresses and then make a tx.. if those funds actually are not spendable due to receiving bad data from nodes, the tx won't get relayed by the relay network.
Recently, I have proposed a solution for fast sync and getting rid of the history, but surprisingly I did it to abandon SPVs (well, besides other objectives). I hate SPVs; they are vulnerable and they add zero value to the network, they just consume and give nothing because they don't validate blocks.

The problem we are discussing here is scaling, and the framework the OP has proposed is kind of a hierarchical partitioning/sharding. I am afraid that instead of contributing to this framework, you sometimes write about side chains and are now denying that the problem is relevant at all. Considering what you are saying, there is no scaling problem at all!

legendary
Activity: 4396
Merit: 4755
5. we are not in the pre-millennium era of floppy disks. we are in the era where:
256gb is fingernail size, not server size.
4tb hard drives are the cost of a grocery shop, not a lifetime pension.
4tb hard drives, even for 20mb blocks, would last the average life cycle of a pc anyway if all blocks were filled
internet is not dialup, it's fibre (landline), it's 5g (cellular)
if you're on a capped internet then you're not a business, as you're on a home/residence internet plan
if you're not a business then you are not NEEDING to validate and monitor millions of transactions

if you think bandwidth usage is too high then simply don't connect to 120 nodes. just connect to 8 nodes

..
now, the main gripe about blocksize:
it's not actually the blocksize. it's the time it takes to initially sync people's nodes.
now why are people angry about that.
simple. they cannot see the balance of their imported wallet until after it's synced.

solution
spv/bloom filter the utxo data of imported addresses first, and then sync second
that way people see balances first and can transact, and the whole syncing time becomes a background thing no one realises is happening, because they are able to transact within seconds of downloading and running the app.
i find it funny how the most resource-heavy task of a certain brand of node is done first, when it just causes frustrations.
after all, if people bloom-filter imported addresses and then make a tx.. if those funds actually are not spendable due to receiving bad data from nodes, the tx won't get relayed by the relay network.
in short
you cannot spend what you do not have
all it requires is a bloom filter of imported addresses first. list the balance as 'independently unverified' and then do the sync in the background. once synced, the "independently unverified" tag vanishes
simple. people are no longer waiting for hours just to spend their coin.
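a rough sketch of that startup flow (every function here is a stand-in, and the 'independently unverified' tagging is just illustrative, not a real wallet API):

Code:
# sketch of the "show balances first, sync in the background" idea (all functions are stand-ins)
import threading, time

def spv_balance(address):
    # stand-in for a bloom-filtered UTXO query to peers; the result is not independently verified
    return 0.05  # pretend balance in BTC

def full_sync_then_verify(addresses, on_done):
    time.sleep(2)   # stand-in for hours of initial block download and validation
    on_done({a: spv_balance(a) for a in addresses})  # balances now checked against our own chain

def start_wallet(addresses):
    # phase 1: usable within seconds, balances flagged as unverified
    for a in addresses:
        print(f"{a}: {spv_balance(a)} BTC (independently unverified)")
    # phase 2: the full sync runs in the background; the tag vanishes when it finishes
    done = lambda verified: print("sync complete, tag removed:", verified)
    threading.Thread(target=full_sync_then_verify, args=(addresses, done), daemon=True).start()

start_wallet(["bc1q_example_one", "bc1q_example_two"])
time.sleep(3)  # keep the script alive long enough for the background stand-in to finish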
legendary
Activity: 4396
Merit: 4755
The OP was right that increasing the bitcoin block size is also one of the solutions to bitcoin scaling, because a bigger block size promotes more nodes, but we also have to take into consideration the side effects of increasing the block size, which I presume could lead to 51% attacks. And if Lightning does not work out, which I believe it will, another solution will arise.

51% attack will not be caused by larger blocks.

here is why
1. ASICs do not touch the collated tx data. ASICs are handed a hash and told to make a second hash that meets a threshold.
it does not matter if the unmined block hash is an identifier of 1kb of block tx data or exabytes of tx data; the hash remains the same length.
the work done by ASICs has no bearing on how much tx data is involved.

2. the verifying of transactions is so fast it's measured in nano/milliseconds, not seconds/minutes. devs know verification times are of no inconvenience, which is why they are happy to let people use smart contracts instead of straightforward transactions. if smart contracts/complex sigops inconvenienced block verification efficiency they would not add them (well, moral devs wouldn't (don't reply/poke to defend devs, as that's missing the point. relax, have a coffee))

they are happy to add new smart features, as the sigops are a combined few seconds max compared to the ~10 min interval

3. again, if bloated txs do become a problem: easy, reduce the tx sigops limit, or remove the opcodes of features that allow such massive delays

4. the collating of tx data is handled before a confirmed/mined hash is solved. while ASICs are hashing a previous block, nodes are already verifying and storing transactions in mempool for the next block. it takes seconds while they are given up to 10 minutes. so no worries.
pools specifically are already collating transactions from the mempool into a new block, ready to add a mined hash to it when solved to form the chain link. thus when a block solution is found:
if it's their lucky day and they found the solution first: boom, within milliseconds they hand the ASICs the next block identifier
if it's a competitor's block: within seconds they know whether it's valid or not
it only takes a second to collate a list of unconfirmed txs to make the next block ID to give to the ASICs.
try it. find an MP3 (4mb) on your home computer and move it from one folder to another. you will notice it took less time than reading this sentence. remember, transactions in the mempool that get collated into a block to get a block identifier had already been verified during the previous slot of time, so it's just a case of collating data that the competitor hasn't collated
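a small illustration of point 1 (a standard double-SHA256 over an 80-byte header; the header bytes here are made up, not real blocks):

Code:
# proof-of-work hashes an 80-byte header, however much tx data its merkle root commits to
import hashlib

def block_hash(header80: bytes) -> bytes:
    assert len(header80) == 80   # version | prev_hash | merkle_root | time | bits | nonce
    return hashlib.sha256(hashlib.sha256(header80).digest()).digest()

# two made-up headers: their merkle roots could summarise 1kb or exabytes of transactions,
# yet the hashing work handed to the ASICs is identical either way
print(block_hash(bytes(80)).hex())
print(block_hash(bytes([0xff] * 80)).hex())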
hero member
Activity: 1834
Merit: 566
The OP was right that increasing the bitcoin block size is also one of the solutions to bitcoin scaling, because a bigger block size promotes more nodes, but we also have to take into consideration the side effects of increasing the block size, which I presume could lead to 51% attacks. And if Lightning does not work out, which I believe it will, another solution will arise.

legendary
Activity: 1806
Merit: 1828
@franky1
This is what we call being proactive and anticipating. The example you give about the SegWit roadmap from 2014 is one example. Are we forced to use SegWit? As DooMAD says, they cannot integrate everyone's wishes, but they anticipate in order to make Bitcoin usable with various convenient solutions. It's like complaining that someone is working to improve Bitcoin, and then talking about consensus. A consensus from the masses could turn into a ten-year-old kid's decision.

  No, we are not forced to use Segwit. However, if someone chooses not to use Segwit, they are penalized by paying higher fees. This may only amount to pennies at the moment, but it can add up. If BTC starts to get used even more, many casual users will then be compelled to use LN to avoid prohibitive fees.

Or be forced to use Bitcoin Cash. I believe that was their rationale for splitting from Bitcoin, right? But apparently, not that many people in the community believed that bigger blocks for scalability were a good trade-off against decentralization.

The social consensus remains "Bitcoin is Bitcoin Core".

Why should anyone be forced to settle for something which is less secure?  So far LN is still in the alpha testing stage. The risk of losing funds is too high ATM. Maybe when they improve their network, I'll want to use it. BCH has always had less hash rate and is therefore less secure. I think people should be able to utilize the most secure network out there in an affordable manner and not be forced to settle for some less secure stuff. Even if the Lightning Network gets its act together, a second-layer solution will be second best when it comes to security. So I guess the BTC blockchain will only be secure VIP2VIP cash. The riffraff can settle for less secure crap.  Cheesy
legendary
Activity: 2898
Merit: 1823
@franky1
This is what we call being proactive and anticipating. The example you give about the SegWit roadmap from 2014 is one example. Are we forced to use SegWit? As DooMAD says, they cannot integrate everyone's wishes, but they anticipate in order to make Bitcoin usable with various convenient solutions. It's like complaining that someone is working to improve Bitcoin, and then talking about consensus. A consensus from the masses could turn into a ten-year-old kid's decision.

  No, we are not forced to use Segwit. However, if someone chooses not to use Segwit, they are penalized by paying higher fees. This may only amount to pennies at the moment, but it can add up. If BTC starts to get used even more, many casual users will then be compelled to use LN to avoid prohibitive fees.

Or be forced to use Bitcoin Cash. I believe that was their rationale for splitting from Bitcoin, right? But apparently, not that many people in the community believed that bigger blocks for scalability were a good trade-off against decentralization.

The social consensus remains "Bitcoin is Bitcoin Core".
legendary
Activity: 4396
Merit: 4755
hurray. back on topic.. hopefully we can stay on topic.

franky1 & DooMAD, both of you are starting to go off-topic again

bc1q -> bc1q that has a lock can have openings for abuse based on timing, and also loss of the key for the bc1q address.
whereas moving funds to an sc1 address absolves the mainnet of that loss/risk, as the value is not in a bc1q address any more (it's spent), and the value moves with the transaction to the sidechain.
(thus it solves the UTXO issue on mainnet of not having to hold 'locked' value)


This is an interesting idea, and oddly it has similarities with the proposal Superspace: Scaling Bitcoin Beyond SegWit in the part about moving between main-chain and side-chain.
But thinking about UI/UX, introducing another address format is confusing for most users. Even 1..., 3... and bc1... are plenty confusing.

it's not that difficult. if you never intend to use a side chain, you won't have to worry, because you won't get funds from an sc1 address and never need to send to an sc1 address

as for the UI. well, again, a UI can be designed to have options, eg

File   Options
          display segwit features
          display sidechain features

if you don't want it, you don't select it / don't realise it exists, as the UI won't display the features.
again, you won't get funds from an sc1 or send to an sc1 unless you want to. so easy for average joe


but yea, it will help the UTXO set stay down,
unlike some sidechain concepts and definitely unlike LN (as locks mean keeping the funds as UTXOs for a locked time (facepalm))



.. anyway.. the superspace project... (hmm, seems they missed a few things and got a few details misrepresented) but anyway

the specific no_op code used to make segwit backward compatible can't be used again.
this is why:

imagine a transaction in bytes, where a certain byte is an option list ($)
***********$*******************
in legacy nodes
if $ is: (list in layman's eli-5, so don't nitpick)
     0= ignore anything after (treat as empty, meaning no recipient, meaning anyone can spend) (no_op)
     1= do A
     2= do B

in segwit nodes the 0 option was changed to mean 'do segwit checks'
they also added a few other opcodes too, as a sublist
so now, with segwit nodes being the active full nodes, there is no 0='ignore anything after' at that particular $ byte
as it's now
EG
***********$%******************
if $ is: (list in layman's eli-5, so don't nitpick)
     0= do segwit, then if % is: (list in layman's eli-5, so don't nitpick)
                            0= ignore anything after (meaning anyone can spend) (no_op)
                            1= ignore anything after (meaning anyone can spend) (no_op)
                            ....
                            11= do A
                            12= do B
     1= do A
     2= do B
there are actually more no_ops now for segwit (%)

so if someone wanted to do what segwit did, they would first need to find a new no_op that has not been used.
and then they would need to ensure pools didn't treat it as a no_op at activation. (yep, not really as soft as made out)
which would be another 2016-2017 drama event.

what the link does not explain is that summer 2017 was actually a hard fork, as nodes that would reject segwit needed to be thrown off the network, and pools needed to treat the no_op as not an 'anyonecanspend'

which means another hard fork would be needed. (hopefully not mandated this time)
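a toy model of that $/% option-byte story (NOT real bitcoin script semantics; the rule numbers are the made-up ones from the eli-5 list above):

Code:
# toy model of how legacy vs segwit nodes read the same option byte (illustrative only)
def legacy_node(option, payload):
    if option == 0:
        return "anyone-can-spend (no_op, rest ignored)"   # old rules skip the new checks entirely
    return f"apply legacy rule {option} to {payload!r}"

def segwit_node(option, sub_option, payload):
    if option == 0:                    # repurposed: now means "run the segwit sub-list"
        if sub_option in (0, 1):
            return "anyone-can-spend (still-unused sub no_op)"
        return f"apply segwit rule {sub_option} to {payload!r}"
    return f"apply legacy rule {option} to {payload!r}"

print(legacy_node(0, "witness data"))       # legacy view of a segwit-style output
print(segwit_node(0, 11, "witness data"))   # upgraded view actually enforces the new rule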
legendary
Activity: 4396
Merit: 4755
again, another off-topic poke from that certain person.. one last bite


i make a point, and then you say I am missing and deflecting my own point

that's like me speaking english and you speaking german. I make a point about english, and you get upset and then waffle on that my point is about german and how I'm missing a german point.

i'll make it real clear for you, although there are dozens of topics that repeat the word enough

mandatory mandatory mandatory

you cannot rebut the mandatory. so you are deflecting it.

they had segwit planned back in 2014 and had to get it activated ($100m was at stake)
no matter what the community did/said/wanted/didn't want, they needed it activated THEIR WAY
they didn't get their way from 2016 to spring 2017
so they resorted to mandatory activation

my point is about mandatory.
i should know my point, because I'm the one making it.

point is: mandatory

if you want to argue against my point then you need to address the point I'm making.

again
for the dozenth topic you have meandered off topic with your pokes. my point has been about the MANDATORY

if you cannot talk about the mandatory not being decentralised.. then at least hit the ignore button.

as for the whole 'no community permission'.. re-read your own post that I gave you merit on, and see your flip flop

as for your deflection about writing code: it's not that they just write code they want. it's that they avoid community code/involvement, as it doesn't fit the PLAN their internal circle has had as far back as 2014..

yea, anyone can write code.. but making it mandatory.. no. as that's anti-consensus

also, I said that if they had actually listened to the community and gone with the late 2015 consensus agreement on an early variant of segwit2x, they would have got segwit activated sooner, and the community would have had legacy benefits too.

but again they mandated only their pre-existing plan, which is what caused such delays/drama and is still causing drama today, as we are still discussing scaling even now, 3 years later.
legendary
Activity: 1456
Merit: 1175
Always remember the cause!
@franky1, @doomad

Let it go guys. You are arguing too much about devs. Who cares about devs? Devs come and go, bitcoin stays and right now it needs your productive technical contribution.

Once there is a brilliant idea, it will find its way into history. I don't care about the politics involved; in a worst-case scenario, if a group of devs could be proven to resist the truth too much, I'll personally help abandon them.
legendary
Activity: 3934
Merit: 3190
Leave no FUD unchallenged
here we go again: you poke, I bite.
shame you are missing the point of decentralisation

Shame you are missing the point of permissionless.

And again, you would only make Bitcoin more centralised if the community had to approve code before anyone could write it.  Don't dodge the argument by telling me I'm missing the point when you're deliberately evading the point.  You can't insist on a handicap for one dev team and then claim you want a level playing field.  It's already level, because anyone can code what they want.  Clearly what you want is an un-level playing field where the dev team you don't like have restrictions on what they can do, but everyone else is free to do whatever.  In the past, others have demanded the same un-level playing field, except stacked against alternative clients.  They argued (wrongly) that the developers of alternative clients needed permission from the community to publish the code they did.  I defended the alternative clients. 

How can I be the one missing the point of decentralisation when my argument defends the right of everyone to code what they want?  That means we get multiple clients.  You're the one arguing that developers need to have permission from the community to code stuff and alternative clients would simply not get that permission.  That means we would only get ONE client (and it wouldn't be the one you want).  You should be agreeing with me on this, not fighting me. 


do you ever wonder why I just publicly give out ideas and let people decide yay or nay, rather than keep ideas secret, make code and then demand adoption? again, before trying to say I'm demanding anything, show me a line of code I made that had a mandatory deadline that would take people off the network if not adopted.
.. you won't. there is no need for your finger-pointing that I'm an authoritarian demanding rule changes, because there are no rule changes demanded by me

You're demanding a change in the way developers act.  You don't have any code to show because it isn't possible for code to achieve what you're demanding. 


emphasis.. MANDATE without community ability to veto

You're using your veto right now by running a non-Core client.  If enough people did that, consensus would change.  The problem you appear to be having is that most people on the network have no desire to use their veto.  They don't want consensus to change. 

Cue Franky1 deflecting from all of these points instead of countering them in 3... 2...
legendary
Activity: 4396
Merit: 4755
here we go again: you poke, I bite.
shame you are missing the point of decentralisation

they had the segwit roadmap planned from 2014, before community input
they had code before the community got to download it.
Which means someone made a compelling argument about the idea and most of the developers in that team agreed with it.  Ideas can come from anywhere, including from developers themselves.  Saying that developers shouldn't work on an idea just because a developer proposed it isn't a mature or realistic stance.

^ their internal circle agreed, before letting the community have a say
i guess you missed the 2014-5 drama.

v not letting the community be involved is a prime example of centralisation
 
I don't know where you get this perverse notion that developers need permission from the community before they are allowed to code something.

do you ever wonder why I just publicly give out ideas and let people decide yay or nay, rather than keep ideas secret, make code and then demand adoption? again, before trying to say I'm demanding anything, show me a line of code I made that had a mandatory deadline that would take people off the network if not adopted.
.. you won't. there is no need for your finger-pointing that I'm an authoritarian demanding rule changes, because there are no rule changes demanded by me

i find it funny that you flip-flop about community involvement.
my issue is that they plan a roadmap, code a roadmap, release it, and even if it's rejected, they mandate it into force anyway

emphasis.. MANDATE without community ability to veto

again, the point you're missing:
having code that allows a community vote/veto (2016, good)
having code that mandates activation without a vote/veto (2017, bad)

you do realise that core could have had segwit activated by christmas 2016 if they had just actually gone with the 2015 consensus (an early variant of segwit2x), which was a consensus compromise the wider community found agreement on,
and which gave legacy benefits too.
but by avoiding it, and causing drama all the way through 2016 about how they wanted it only their way (segwit1x).. pretending they couldn't code it any other way,
they still didn't get a fair, true consensus vote in their favour in spring 2017. so they had to resort to the mandatory activation and swayed the community with the (fake) option of segwit2x (NYA), just to then backtrack to segwit1x once they got the segwit part activated
legendary
Activity: 3934
Merit: 3190
Leave no FUD unchallenged
they had the segwit roadmap planned from 2014, before community input
they had code before the community got to download it.

Which means someone made a compelling argument about the idea and most of the developers in that team agreed with it.  Ideas can come from anywhere, including from developers themselves.  Saying that developers shouldn't work on an idea just because a developer proposed it isn't a mature or realistic stance.

I don't know where you get this perverse notion that developers need permission from the community before they are allowed to code something.  And crucially, if you start making the argument that it should work that way, then you will totally destroy any opportunity for alternative clients to exist.  Would the community have given the green light to the developers of that client you're running right now?  I find that pretty doubtful.  You think "REKT" is bad?  See how much you complain if no one was even allowed to code anything unless the community gave their blessing first.  That's how you ruin decentralisation.

Be careful what you wish for.  You really aren't thinking this through to conclusion.
legendary
Activity: 4396
Merit: 4755
anyway, back on topic.

the scaling on-chain:
reducing how much sigop control one person can have is a big deal.
i would say the sigop limits alone can be abused, more so than the fee war, to victimise other users, and that needs addressing

as for transactions per block: like I said (only reminding to get back on topic), removing the witness scale factor and the wishy-washy code, to realign the block structure into a single block that doesn't need stripping, is easy, as the legacy nodes are not full nodes anyway

but this can only be done by devs actually writing code. other teams have tried but found themselves relegated downstream as "compatible" or rejected off the network. so the centralisation of devs needs to change
(distributed nodes do not mean decentralised rule control.. we need decentralised, not just distributed)

as for other suggestions for scaling:
others have said sidechains. the main issue is the on/off ramp between the two

an alternative concept could be a new transaction format (imagine bc1q.. but instead sc1) which has no lock
the bitcoin network sees
bc1q -> sc1 as a go-to-sidechain tx (mined by both chains)
and
sc1 -> bc1q as a return-to-mainnet tx (mined by both chains)

mainnet will not relay or collate (mine into blocks) any sc1 -> sc1 transactions (hence no need to lock)
the sidechain will not relay or collate (mine into blocks) any bc1q -> bc1q transactions (hence no need to lock)

this way it avoids a situation of "pegging" such as
bc1q -> bc1q (lock)                                sc1 (create) -> sc1

having bc1q -> sc1 is not about pegging a new token into creation.
it's about taking the transaction off the main chain, mining it also into a sidechain, and then only being able to move it sc1 address -> sc1 address on the sidechain until it's put back into a bc1q address, which can then only move on mainnet

i say this because having a
bc1q -> bc1q that has a lock can have openings for abuse based on timing, and also loss of the key for the bc1q address.
whereas moving funds to an sc1 address absolves the mainnet of that loss/risk, as the value is not in a bc1q address any more (it's spent), and the value moves with the transaction to the sidechain.
(thus it solves the UTXO issue on mainnet of not having to hold 'locked' value)

allowing value to flow without a time lock allows the auditing of funds to show it is still one value moving, instead of a locked value on the main chain and a new value on the sidechain

i do have issues and reservations about sidechains too, but the original "pegging" concept of sidechains really was bad and open to abuse (not having the BTC side seen as "spent" while spending was actually happening)
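a toy relay policy for that bc1q/sc1 split (just the routing rule described above; the prefix check is a stand-in, not real address parsing):

Code:
# toy relay/mining policy for the bc1q/sc1 idea above (prefix check is a stand-in)
def chain_of(address):
    return "side" if address.startswith("sc1") else "main"

def will_mine(node_chain, sender, recipient):
    chains = {chain_of(sender), chain_of(recipient)}
    if chains == {"main", "side"}:
        return True                    # cross-chain moves are mined by both chains
    return node_chain in chains        # same-chain spends stay on their own chain, no lock needed

for tx in [("bc1q_a", "bc1q_b"), ("bc1q_a", "sc1_a"), ("sc1_a", "sc1_b"), ("sc1_a", "bc1q_b")]:
    print(tx, "mainnet:", will_mine("main", *tx), "sidechain:", will_mine("side", *tx))

# ('bc1q_a', 'bc1q_b') mainnet: True sidechain: False
# ('bc1q_a', 'sc1_a') mainnet: True sidechain: True
# ('sc1_a', 'sc1_b') mainnet: False sidechain: True
# ('sc1_a', 'bc1q_b') mainnet: True sidechain: True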