Topic: You Mad Bro? - page 4. (Read 3350 times)

legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
February 19, 2016, 02:25:31 PM
#33
sr. member
Activity: 392
Merit: 250
February 19, 2016, 02:13:20 PM
#32
Anyone wanting to fire Core devs to put in charge a bunch of guys that haven't demonstrated anything of value beyond getting a small % of noobs supporting them for a +1mb block size increase is crazy.

There is no firing in open source, only rage-quitting.

If Core had listened to miners after Scaling HK in Dec... we wouldn't be having this little slap fight. Shame they didn't compromise earlier and spare us the drama.
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
February 19, 2016, 02:09:10 PM
#31
With a backlog like that, shouldn't every block be full all the time until the backlog is processed?
I have no idea why you would complain about a backlog of 4000 transactions. We've seen much bigger backlogs in the past and everything was fine? However this is interesting:
Quote
Total Fees   64.31924776 BTC

just wait till the backlog is much bigger

imagine all the fees we will squeeze out of all the suckers sending bitcoins to stamps/bittrex ASAP
legendary
Activity: 1204
Merit: 1028
February 19, 2016, 02:06:44 PM
#30
legendary
Activity: 4410
Merit: 4766
February 19, 2016, 02:04:45 PM
#29
Anyone wanting to fire Core devs to put in charge a bunch of guys that haven't demonstrated anything of value beyond getting a small % of noobs supporting them for a +1mb block size increase is crazy.

you know 2mb is not about the politics of Gavin vs Back.... anyone can put in the 2mb buffer.. INCLUDING Core.. it's not a special feature limited to only one person.

but for people like Back to avoid it and send out shills to make up excuses not to include it is crazy
hero member
Activity: 709
Merit: 503
February 19, 2016, 02:04:29 PM
#28
With a backlog like that, shouldn't every block be full all the time until the backlog is processed?
I have no idea why you would complain about a backlog of 4000 transactions. We've seen much bigger backlogs in the past and everything was fine? However this is interesting:
Quote
Total Fees   64.31924776 BTC
Hmm, how big a backlog warrants attention? Perhaps we should focus on confirmation times?
legendary
Activity: 1204
Merit: 1028
February 19, 2016, 02:02:03 PM
#27
Anyone wanting to fire Core devs to put in charge a bunch of guys that haven't demonstrated anything of value beyond getting a small % of noobs supporting them for a +1mb block size increase is crazy.
legendary
Activity: 2674
Merit: 2965
Terminated.
February 19, 2016, 02:00:32 PM
#26
With a backlog like that, shouldn't every block be full all the time until the backlog is processed?
I have no idea why you would complain about a backlog of 4000 transactions. We've seen much bigger backlogs in the past and everything was fine? However this is interesting:
Quote
Total Fees   64.31924776 BTC
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
February 19, 2016, 01:53:24 PM
#25
If Bitcoin fees climb high enough then other altcoins with lower fees will begin to see adoption.

it's ok, in 2020 when the Lightning Network is ready, everyone will come right back.
hero member
Activity: 709
Merit: 503
February 19, 2016, 01:51:28 PM
#24
If Bitcoin fees climb high enough then other altcoins with lower fees will begin to see adoption.
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
February 19, 2016, 01:44:47 PM
#23
Well I'm a Bitcoin newbie, but I'm getting pissed off by the stroppy 2Mb people who are trying to force an apparently simplistic solution that isn't needed yet. I have yet to see a compelling argument that provides a justification for an immediate blocksize increase.
That's because you don't use bitcoin.

Here's your argument:

https://blockchain.info/unconfirmed-transactions

https://chain.btc.com/en/block
With a backlog like that, shouldn't every block be full all the time until the backlog is processed?
it should be, but for some reason some blocks come out empty

in any case, every block could be full forever and the backlog won't get cleared

Core wants this backlog to go up an order of magnitude so they can test their "high fees are good for bitcoin" theory
hero member
Activity: 709
Merit: 503
February 19, 2016, 01:38:59 PM
#22
Well I'm a Bitcoin newbie, but I'm getting pissed off by the stroppy 2Mb people who are trying to force an apparently simplistic solution that isn't needed yet. I have yet to see a compelling argument that provides a justification for an immediate blocksize increase.
That's because you don't use bitcoin.

Here's your argument:

https://blockchain.info/unconfirmed-transactions

https://chain.btc.com/en/block
With a backlog like that, shouldn't every block be full all the time until the backlog is processed?
legendary
Activity: 2674
Merit: 2965
Terminated.
February 19, 2016, 01:34:10 PM
#21
Either Lauda doesn't understand... or is barefaced lying to mislead you. Immediately after a block is found, miners will work on an empty block based on the header of the last block while they verify it. After it is verified, they will know all the transactions it contains and can begin working on a block that is full of transactions. Blocks solved within a minute or so of the previous one will almost always be like this.
I didn't check if it was that kind of block. My initial thoughts were a standard empty or limited block. However, it is nice to know that you'd insult someone for making a mistake.

This is yet another useless propaganda post that does not help. The position of Classic is very viable and the wanted increase in transaction capacity is coming with a 2MB max_blocksize increase.

If ~A is as compelling as A, A is not compelling (the tilde test).
Nonsense, Segwit is much better. Classic is horrible, pretty much on the same level as XT. I wonder who will quit the system after this fork fails.

scalability and scare tactics should be left to the reserve banks and politicians.  Wink ... No need to get mad!
Propaganda and shills have been deployed.
legendary
Activity: 4410
Merit: 4766
February 19, 2016, 01:31:49 PM
#20
basically the only real argument against the 2mb thing vs segwit, is the hard fork aspect

it seems that last time we had one, people needed many months or years to sync with the other, they remained stuck with the older version....

even that is a non-argument,
as the 2mb buffer only gets implemented if at least a majority is reached.

but segwit's softfork messes with blockchain data, which means some of the current full nodes won't be fully verifying nodes anymore, and with no-witness mode as well, even more nodes will no longer hold all the data for other nodes to leech off of. so it will affect more users who will need to upgrade, which is the same as needing to upgrade for the 2mb..
though it's a "softfork", by not upgrading you're becoming a limp client... so it's not the shiny and glossy pretty picture that Blockstream makes out

if core simply adds the 2mb buffer in their April release then there won't be an issue: there is no 2mb vs segwit.. but 2mb+segwit.. (2 birds, one stone, no debate)
legendary
Activity: 3248
Merit: 1070
February 19, 2016, 01:23:42 PM
#19
basically the only real argument against the 2mb thing vs segwit, is the hard fork aspect

it seems that last time we had one, people needed many months or years to sync with the other, they remained stuck with the older version....
sr. member
Activity: 392
Merit: 250
February 19, 2016, 01:23:06 PM
#18
This is the latest block recorded in your link
Height: 399,189 | Relayed by: AntPool | Transactions: 1 | Size: 208 bytes | Reward: 25 + 0.00000000 BTC | Age: 3 minutes ago
Hash: 0000000000000000066105de87b895cc3d0631838da51af0fa33edee1b61600e

That's 208 bytes in size. How will doubling the blocksize help when miners are submitting these?
It won't. If anything it will make things worse as more miners will want to use this advantage and submit smaller blocks.

Either Lauda doesn't understand... or is barefaced lying to mislead you. Immediately after a block is found, miners will work on an empty block based on the header of the last block while they verify it. After it is verified, they will know all the transactions it contains and can begin working on a block that is full of transactions. Blocks solved within a minute or so of the previous one will almost always be like this.
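
In code terms, a minimal sketch of that head-first ("SPV") mining behaviour might look like this; illustrative only, not Bitcoin Core's actual code, and the types and function names are made up:

Code:
#include <string>
#include <vector>

// A candidate block for miners to work on.
struct BlockTemplate {
    std::string prev_hash;         // header hash we are building on
    std::vector<std::string> txs;  // included transactions (coinbase implied)
};

// New header seen, parent not yet verified: we don't know which mempool
// transactions it already confirmed, so including any risks building an
// invalid block. Mining an empty one beats sitting idle.
BlockTemplate onNewHeader(const std::string& header_hash) {
    return BlockTemplate{header_hash, {}};
}

// Parent fully validated: its transactions have been stripped from our
// mempool, so the next template can safely be filled from what remains.
BlockTemplate onParentVerified(const std::string& header_hash,
                               const std::vector<std::string>& mempool) {
    return BlockTemplate{header_hash, mempool};
}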

The creator of Bitcoin offered a solution to "regularly at the limit" blocks, all bedecked in elegant simplicity:

It can be phased in, like:

if (blocknumber > 115000)
    maxblocksize = largerlimit

It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete.

When we're near the cutoff block number, I can put an alert to old versions to make sure they know they have to upgrade.
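
For concreteness, a minimal sketch of that phased-in rule; only the 115000 height comes from the quote above, the other constants and the function name are illustrative:

Code:
#include <cstdint>

static const uint32_t FORK_HEIGHT  = 115000;   // pre-announced activation height (from the quote)
static const uint64_t OLD_LIMIT    = 1000000;  // current 1MB cap
static const uint64_t LARGER_LIMIT = 2000000;  // hypothetical raised cap

uint64_t MaxBlockSize(uint32_t block_height) {
    // Upgraded nodes keep enforcing the old limit until the cutoff
    // height, giving everyone time to update before the rule changes.
    return (block_height > FORK_HEIGHT) ? LARGER_LIMIT : OLD_LIMIT;
}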


Almost seems a no brainer while scenes like this have become commonplace:

legendary
Activity: 4410
Merit: 4766
February 19, 2016, 01:19:14 PM
#17
blockstream's lame rebuttals for not doing 2mb

1. 2mb is too much data
debunk: segwit's real-world data can be 1.5mb-4mb

2. the processing time of handling more transactions is too much for average computers
debunk: 1mb works on a Raspberry Pi, so 2mb can work even on a basic 2005 laptop, as that has twice the capacity in every way imaginable.. furthermore we are not in 2005, we are in 2016, so average technology is even better than that

3. segwit offers a better scaling solution
debunk: only for a couple of months, until other roadmap features re-bloat the transaction data that segwit first saved

4. if we implement 2mb the malleability issues will cause further problems.
debunk: if you include the 2mb buffer in the April release (when malleability is supposedly fixed) there won't be an issue

5. but blocks will be 2mb full constantly
debunk: no, miners won't risk losing the reward by pushing twice the data out straight away. they will dip their toe in the water in small increments, just like 2013 when there was a 1mb buffer but miners were only making 0.5mb blocks, slowly incrementing as they found a comfortable pace to add more transactions, over years, not days.

6. but there is just no need for scaling.
debunk: well, average blocks can only hold ~2500tx, and there is a mempool of over 7,000 and growing, just sitting there because they are not getting added to blocks, so there is a backlog that needs to be sorted (see the toy sketch after this list). it's not as if the 1mb buffer were sufficient with only 0.5mb sat in the mempool; there are many blocks at 0.99mb (at top capacity) and a 10mb mempool backlog. the only reason blocks are made below capacity is not a lack of transactions to process, but the greed of some miners making empty blocks

7. but if there is no backlog, then people don't need to pay fees, as there is room for everyone
debunk: the block reward is sufficient payment for miners for a couple of decades, because the deflationary fiat valuation of bitcoin keeps the reward's value high enough.. so fees should have no part in the coding/logistics debate for a long time.

8. gavinsta, toomin, hearne, R3, blah blah blah.
debunk: the 2mb buffer can be incorporated into any client.. yep, that includes Core and the other dozen, so don't twist it into politics, as the code can be used by anyone. if you're against the idea just because of who owns a particular client, then put the code into another client not owned by them
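
To make point 6 concrete, here's a toy backlog simulation; the 7,000 starting backlog and ~2500 tx/block come from the list above, while the arrival rate is an assumed figure for illustration:

Code:
#include <cstdio>

int main() {
    long       backlog  = 7000;   // starting mempool size, from point 6
    const long capacity = 2500;   // avg transactions per 1MB block, from point 6
    const long arrivals = 3000;   // assumed new transactions per 10-minute block

    for (int block = 1; block <= 6; ++block) {  // simulate one hour of blocks
        backlog += arrivals - capacity;         // net change per block
        std::printf("after block %d: backlog = %ld\n", block, backlog);
    }
    // Backlog grows by 500 per block as long as arrivals exceed capacity;
    // a ~5000 tx (2mb) block at the same arrival rate would drain it instead.
    return 0;
}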
full member
Activity: 154
Merit: 100
February 19, 2016, 12:53:36 PM
#16
Well I'm a Bitcoin newbie, but

Lurk moar.

This is yet another useless propaganda post that does not help. The position of Core is very viable and the wanted increase in transaction capacity is coming with Segwit.

This is yet another useless propaganda post that does not help. The position of Classic is very viable and the wanted increase in transaction capacity is coming with a 2MB max_blocksize increase.

If ~A is as compelling as A, A is not compelling (the tilde test).
legendary
Activity: 2786
Merit: 1031
February 19, 2016, 12:51:13 PM
#15
Well I'm a Bitcoin newbie, but I'm getting pissed off by the stroppy 2Mb people who are trying to force an apparently simplistic solution that isn't needed yet. I have yet to see a compelling argument that provides a justification for an immediate blocksize increase.

That's because you don't use bitcoin.

Here's your argument:

https://blockchain.info/unconfirmed-transactions

https://chain.btc.com/en/block

This is the latest block recorded in your link
Height: 399,189 | Relayed by: AntPool | Transactions: 1 | Size: 208 bytes | Reward: 25 + 0.00000000 BTC | Age: 3 minutes ago
Hash: 0000000000000000066105de87b895cc3d0631838da51af0fa33edee1b61600e

That's 208 bytes in size. How will doubling the blocksize help when miners are submitting these?

Dude, c'mon, learn a bit more...

That's actually an argument for bigger blocks: if one block comes out empty, the thousands of transactions in the mempool stay there a bit longer, and the next block can still only include 1MB. It's like a snowball; if transaction volume doesn't go down, the network becomes shit.

Bear in mind, 10 years of 2MB blocks = 1TB.
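
That figure checks out on the back of an envelope; a throwaway sketch assuming the nominal one block per 10 minutes and every block full:

Code:
#include <cstdio>

int main() {
    const long   blocks_per_day = 144;        // one block every ~10 minutes
    const long   days           = 365L * 10;  // ten years, ignoring leap days
    const double mb_per_block   = 2.0;        // proposed 2MB limit, every block full

    double total_mb = blocks_per_day * days * mb_per_block;
    std::printf("%.0f MB ~= %.2f TB\n", total_mb, total_mb / 1e6);
    // Prints: 1051200 MB ~= 1.05 TB -- so "10 years of 2MB blocks = 1TB" holds.
    return 0;
}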

Anyway, the damage is already done. The block size increase should have happened like 3 months ago; two more months of bad user experience aren't going to do much more damage.
legendary
Activity: 1904
Merit: 1037
Trusted Bitcoiner
February 19, 2016, 12:39:10 PM
#14
unyielding 1MB position? ... Where are you getting that from...? Last time I checked they suggest many possible solutions for these problems. I think their roadmap is clear enough and would scale block size as needed and when it is needed. The debate is healthy for this experiment and forced scalability and scare tactics should be left to the reserve banks and politicians.  Wink ... No need to get mad!

you've been lied to  Tongue