Topic: Do you think "iamnotback" really has the "Bitcoin killer"? - page 21. (Read 79971 times)

hero member
Activity: 770
Merit: 629
Yes, this is with block rewards constant.  Tail emission.  But in the case of rewards proportional to block length (fees), you have to weight A's revenues by the fact that his blocks bring in more money.  He has a lower percentage of blocks on the chain, but those blocks bring him more rewards as they are bigger.

So if his big blocks bring him 20% more income per block, this is neutral.

Well, I'm not sure if we are on the same page.

Of course, you can include more transactions and collect more fees by building bigger blocks, but that doesn't solve the fundamental problem that A's fee revenues must be multiplied by his blockchain production (fraction of the chain built by A), rather than by his rate of successful blocks (ratio of non-orphaned blocks).

Let me come back to my example of the three miners A, B and C, all with a hashrate of 1/3 and an orphan rate of 0.01.

Now, assume that A and B stick to a block size of 1 MB, while C tries to find the block size that maximizes his profits.
C can do so by gradually increasing the block size as long as the higher orphan rate (resulting in a lower production share) is outweighed by the higher fees. As the orphan rate follows an exponential distribution and the marginal fee income tends to decrease, there will be an equilibrium where marginal revenue = marginal cost. Let's assume that C's profits are maximized at an orphan rate of 0.2, so that his blockchain production share will be 0.288, while that of A and B will be 0.356 each.


I think that's the point where you need to stop, and why I think that all of this has not much sense.

I start from the idea that miners have incentives to be on a good backbone network, directly between them, and do not wait for the P2P network to bring a block to them.  In other words, the 10 or 20 big mining pools are on a rather fully meshed, high speed backbone.

I already explained why: it diminishes their orphan rate, and they have mutual incentives to improve their network links.

If you accept that as a given, then it is impossible to start considering orphan rates that become important due to network and block size problems.  There is of course always a given orphan rate, but that orphan rate must be small.

If your (relative) orphan rate is, say, 1%, it means that your income is multiplied by 0.99.  If your block size doubles, and your relative orphan rate doubles because of that, you multiply it with 0.98.

By the time that "doubling your blocks" starts to be OFFSET by the diminishing of your income because of orphaning, you see that the orphaning rate must be HUGE.  Not 2 or 4%, but 50% or so.  
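The 50% figure can be seen in a toy model (my own sketch, not from the thread): if fees grow linearly with block size and each extra unit of size adds a constant amount of orphan risk, income is proportional to s·(1 − k·s), which peaks exactly where the orphan rate reaches 50%.

```python
# Toy model (hypothetical numbers): fees grow linearly with block size s,
# and each unit of size adds k to the probability the block is orphaned.
def income(size, k):
    """Relative income: per-block fees times the probability of not being orphaned."""
    return size * (1 - k * size)

k = 0.01  # hypothetical: 1% orphan risk per MB
best = max(range(1, 101), key=lambda s: income(s, k))
print(best, k * best)  # optimum at size 50 -> orphan rate 0.5
```

So under this linear model, bigger blocks keep paying off until the orphan rate is already enormous, which is the point being made above.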

Well, that is impossible.  Because if you orphan 50% of your blocks on the chain, it means that you orphan even more than 50% of your successful blocks, which means that your blocks don't even reach the other miners over the backbone.

If that is true, nobody else can download the block chain.  It is being produced at a rate that almost saturates a backbone.  So no one with a lesser link can ever download the block chain and keep up to date.

So by the time that this problem of orphaning blocks because of their size starts influencing the income of miners, the block chain is growing so fast that NOBODY CAN DOWNLOAD IT.

This story is different if miners are random nodes in a P2P network. But they aren't.  They have every interest in investing in strong network links to other miners, exactly because of this orphaning problem.

So in other words, all this theoretical BS over how the orphaning rate offsets the desire for bigger blocks and imposes a natural equilibrium is meaningless, because if such an equilibrium were ever to exist theoretically, it occurs for such big blocks that nobody can download the block chain apart from the miners themselves.

Yes, you can say that an "optimum" is reached when the network stops downloading the chain, and nothing works any more.  True, in a certain way Smiley

EDIT: I hadn't understood something in your post, but now I see what you are getting at. 

If we consider *really small* fees, then for an extra included transaction, that extra delay on the network will mean an extra probability of the block being orphaned, putting the whole income in jeopardy.  This even happens with small blocks.

Yes, this will simply result in cutting off the very lowest fees of the fee distribution, which will remain forever in the mempool.

I don't think that this has much to do with "optimal size" ; it only means that one doesn't include the cheapest transactions below a given fee threshold, because their extra transmission time penalises the whole income while not contributing enough to it.
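This inclusion rule can be sketched as follows (my own formalisation of the argument; the function name and numbers are hypothetical): include a transaction only if its fee exceeds the expected loss from the extra orphan risk its transmission time adds to the whole block.

```python
# Sketch (hypothetical formalisation, not from the post): a miner includes a
# transaction only if its fee outweighs the expected cost of the extra orphan
# risk it imposes on the entire block's revenue.
def worth_including(fee, tx_bytes, block_revenue, orphan_risk_per_byte):
    extra_orphan_prob = orphan_risk_per_byte * tx_bytes
    expected_loss = extra_orphan_prob * (block_revenue + fee)
    return fee > expected_loss

# hypothetical numbers: 0.5 BTC of revenue already in the block,
# 1e-9 extra orphan probability per byte of block size
print(worth_including(0.0001, 250, 0.5, 1e-9))    # ordinary fee: included
print(worth_including(0.0000001, 250, 0.5, 1e-9)) # dust fee: excluded
```

Note the threshold depends on the total block revenue, which is why the cheapest transactions get cut off regardless of block size.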


That said, a market doesn't need to come to "equilibrium".  An erratically chaotic market dynamics can be fun too Smiley
sr. member
Activity: 336
Merit: 265
@aklan & @dinofelis:

So why is BU preaching equilibrium with unlimited blocks as the most prominent item in its FAQ?

Because they want to sustain the illusion of decentralization. Their supporting white paper is meaningless and doesn't model anything that can exist in reality (it models a perfectly uniform distribution of propagation and hashrate where all miners experience the same orphan rate, but if that were the case then no miner would make any profit because in that impossible scenario marginal cost = lowest cost).
full member
Activity: 149
Merit: 103
Yes, this is with block rewards constant.  Tail emission.  But in the case of rewards proportional to block length (fees), you have to weight A's revenues by the fact that his blocks bring in more money.  He has a lower percentage of blocks on the chain, but those blocks bring him more rewards as they are bigger.

So if his big blocks bring him 20% more income per block, this is neutral.

Well, I'm not sure if we are on the same page.

Of course, you can include more transactions and collect more fees by building bigger blocks, but that doesn't solve the fundamental problem that A's fee revenues must be multiplied by his blockchain production (fraction of the chain built by A), rather than by his rate of successful blocks (ratio of non-orphaned blocks).

Let me come back to my example of the three miners A, B and C, all with a hashrate of 1/3 and an orphan rate of 0.01.

Now, assume that A and B stick to a block size of 1 MB, while C tries to find the block size that maximizes his profits.
C can do so by gradually increasing the block size as long as the higher orphan rate (resulting in a lower production share) is outweighed by the higher fees. As the orphan rate follows an exponential distribution and the marginal fee income tends to decrease, there will be an equilibrium where marginal revenue = marginal cost. Let's assume that C's profits are maximized at an orphan rate of 0.2, so that his blockchain production share will be 0.288, while that of A and B will be 0.356 each.
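Those production shares can be checked with a quick sketch, assuming each miner's share of the chain is proportional to hashrate times (1 − orphan rate), renormalised over all miners:

```python
# Check of the shares above: chain production share = h * (1 - o), renormalised.
def chain_shares(hashrates, orphan_rates):
    eff = [h * (1 - o) for h, o in zip(hashrates, orphan_rates)]
    total = sum(eff)
    return [e / total for e in eff]

# A and B with orphan rate 0.01, C with orphan rate 0.2, equal hashrates
shares = chain_shares([1/3, 1/3, 1/3], [0.01, 0.01, 0.2])
print([round(s, 3) for s in shares])  # -> [0.356, 0.356, 0.288]
```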

The fundamental problem arises once A and B also start using a variable block size to maximize their profits. By doing so (i.e. by increasing their own block sizes) they will not only decrease their own blockchain production shares due to their higher orphan rates, but at the same time C's blockchain production share will grow and thus destroy his individual market equilibrium. To reach equilibrium again, C will now have to increase his block size once more to collect the same fees as before, so his optimal orphan rate will exceed 0.2. This, in turn, would put A and B in disequilibrium; they might then increase their block sizes even more, etc. It all ends up in a doom loop.

It seems that the increasing total block space supply combined with the (probably) finite demand for transactions could make the loop converge at some upper limit. However, this equilibrium would be unstable. When a miner suddenly decreases his block size, all the others would follow suit to reach their individual market equilibria again. The miners might even end up at an unstable lower equilibrium point.

As far as I can see, no stable market equilibrium can be reached by all miners at the same time. For the mining market has the peculiarity that whenever a miner increases his own supply, the supply of all the others decreases. In contrast to regular markets, where the players only compete to meet demand, Bitcoin miners also compete to increase their own supply at the cost of their competitors, since total block production remains capped even with unlimited block size.
sr. member
Activity: 336
Merit: 265
Readers please click this quote to see if he replied over there. No need to put that discussion in this thread:

But I have stated what happens, which is that it always devolves to majority-collusion.

My point is that this outcome is not affected by the cap on maxblocksize.

Might be true. I've been pondering that today. If BU can successfully attack the token holders, then that should be proof that even small 1MB blocks don't stop mining cartelization/centralization.

In which case, larger blocks along with LN would provide more choice to users. As I wrote upthread, Core has masterfully fooled some people into believing they are rational with no ulterior motive.

But then I don't understand how Core could have ever expected to succeed, since the miners would naturally fork the protocol and increase the block size so they can get more revenue. Core's apparent goal of forcing users to use LN appears to be impossible to enforce (mining will always be a cartel and the cartel will not agree to be stripped of revenue).

But on the flip side, the mining cartel ostensibly doesn't want to allow off chain LN scaling (which is why they won't fix malleability) because that would compete with the miners for on chain fees.

Some have argued that enabling LN would increase overall usership and thus increase onchain transaction fee revenue.

So if BU were sincere, they could demonstrate it by including the necessary fixes to enable LN in their planned HF. Otherwise we can possibly look forward to a monopoly on block size, and thus miners squeezing the market for maximum fees, inhibiting the scaling of Bitcoin.

(I'm duplicating this to two other places, because readers can't go clicking links off to so many other places to find the key points, but I am linking it here, so you can decide to reply just here if you want. Your decision obviously. I'd like to finish this discussion asap if possible.)
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
You are conflating the fact that the equilibrium is reached inside majority-collusion with your thought that I haven't stated what occurs outside of majority-collusion. But I have stated what happens, which is that it always devolves to majority-collusion.

You have stated that, yes. That was my assertion. My point is that this outcome is not affected by the cap on maxblocksize.

Quote
Everyone was incentivized by the fact that once the 1MB limit was reached ... It was the 1MB protocol limit that provided the barrier that everyone had to try to swim far from

Justification of this assertion would require explaining away the almost linear annual doubling of the average block size, up until the saturation point.

sr. member
Activity: 336
Merit: 265
If you wanted to upend Core, then you should have more competent people who would have advised you that unbounded block size doesn't have an equilibrium.

If you have successfully demonstrated  that unbounded block size cannot reach an equilibrium outside a majority-collusion environment, I have missed it.

Yes you missed it. ...
Once we do model differing orphan rates for different miners, then the optimal strategies for mining come into play. And if you work out the game theory of that, you realize that collusion and centralization are the only possible outcome.

So you seem to be acknowledging that I am correct above...

There is no outcome that is outside a majority-collusion environment. I explained that unbounded block size cannot reach an equilibrium outside a majority-collusion environment. You said you must have missed it and I explained you did miss it.

You are conflating the fact that the equilibrium is reached inside majority-collusion with your thought that I haven't stated what occurs outside of majority-collusion. But I have stated what happens, which is that it always devolves to majority-collusion.

You are making a similar error as those two others did upthread. A 51% (or even 33% selfish mining) attack is not a change in protocol. In other words, in BTC the miners can't make huge blocks, because it violates the protocol limit of 1MB.

Collusion is collusion, irrespective of the protocol. Nakamoto consensus is only possible when a majority of participants are 'honest' as per the whitepaper terminology. Unbounded blocks does nothing to change this.

Conflation is conflation. (Meaning you apparently entirely missed the relevance of the point)

I'm trying to be respectful, but please don't waste my time. You see I have too many messages to reply to.

And as a practical matter, Bitcoin operated just fine for multiple halvings with no practical bound on blocksize.

There was a minimum advised fee, and there were pools doing anti-spam; I think I've read that Luke Jr's pool rejected dust transactions.

Yes, minimum advised fee. 'Advised', as not encoded within the protocol. The fact that this worked up to the point that the production quota was finally persistently hit forms an existence proof that the system can work. The fact that it did work may or may not have something to do with all players having beneficial intent, but there it is. Indeed a populist sentiment includes the notion that it is against the best interests of all participants to do anything that kills the system. Which probably explains why our past known-majority miner (Discus Fish?) turned back from their position of mining majority without ever forming an attack from their assuredly-successful posture.

Everyone was incentivized by the fact that once the 1MB limit was reached, then the destruction of Bitcoin would ensue as is currently happening with the battle between the miner and codester cartels.

It was the 1MB protocol limit that provided the barrier that everyone had to try to swim far from. Also, miners had an incentive to collect a minimum level of fees, and they didn't yet have enough centralization to extract higher fees. And the decentralized miners at that time before ASICs also had an incentive to keep spam low since, as I explained to @dinofelis today, it wasn't a fully connected mesh, so propagation time was a bigger deal than he realized. Also, the decentralization at that time, when people were still mining on GPUs, meant there was more of an altruistically driven Nash equilibrium than now.

That is not at all like the cut-throat, big-money economics situation now. As @dinofelis pointed out, the only altruism (and internal discord) from miners now is probably all faked to make us think there isn't a cartel.
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
BUcoin crashed last night to 200 nodes after the new bug was discovered,

Yes, a new bug was discovered by someone desirous of performing a DoS attack upon BU. This bug was exploited by such attackers to cause a number of nodes to crash. While a temporary inconvenience, we welcome this assistance in hardening the BU system before flag day.

Quote
and then the developers had the great idea of releasing a closed source patch.

Well, not exactly. I mean, if your definition of 'closed source' is delayed release of the source, I guess so. While I was not part of the decision process, it seemed to be predicated on the fact that immediate release would reveal the precise nature of the vulnerability to additional attackers. It was done to create a window for the patch to propagate.

Whether or not that was the proper course of action is something that can be debated. Indeed, it is still being debated within the BU community. But your characterization of 'they've gone closed source' is beyond the pale.
full member
Activity: 322
Merit: 151
They're tactical
Maybe I can appear strong for people to think I'm weak  Roll Eyes

Some have a less subtle approach Cheesy

https://youtu.be/URybdpu_NhI

Not sure who is winning, everyone is cheating anyway :p

Let's do it! I just think maybe "we" will choose other apps as a priority instead of the music app. But I am also okay with doing a music app first too. I love music and would surely use the app myself! We'll brainstorm about it soon...

I will be very enthused about apps I will myself use, because I will have many ideas of how to innovate them. My million user successes in commercial software were the ones I created because I wanted to use them (and saw the market was lacking the specific features/capabilities that I wanted). For a music app, I want it to be my music player and also keep track of all my music so I never have to hassle with backing up my music, transcoding formats, etc... And I don't want it to tie me into any walled gardens, no adware, no bullshit, etc..

Yeah we'll be "cheating" also but copying for example the way others are already cheating:

http://www.listentoyoutube.com/ (get my idea yet?)

Have you ever noticed you can overlay and obscure the SoundCloud HTML player buttons? Instant library of songs without the 10,000 plays per day app limit. I have some (clever or out-of-the-box thinking?) ideas we can discuss.

Remember our apps need a social component. There needs to be sharing (likes), commenting, etc..

But the more important point is how you the app creator will get paid. And how much money you will be able to earn creating apps.

All you need to do is make apps that people will like to tell their Facebook friends to use.

I think from there, you can start to deduce what I have in mind, but I hold off on the details while I try to finish up getting the preliminaries of the scalable, decentralized blockchain for OpenShare into code. Then we will start to talk in earnest about collaborating, launching, and making a lot of money while shocking this community with our STICKY, VERIFIED adoption rate (into the millions I expect).

I am excited because now I see that real capable app developers are contacting me. So I only need to convince you that my blockchain technology and onboarding strategy is viable, then we can go change the world.

I am on the 2-drug treatment now. Let's hope my energy is ready to roll now. I need to go finish up the proposed changes for the (optional proposed app) programming language and see if it is realistic for me to write the transpiler in a matter of weeks. I am going to try to go wrap that up now after doing my forum communication. Are you interested in helping on the transpiler? Should I start a Github project for that? Which name is best: Async, Copute, Lucid, or Next (Nxt)?

All app programmers please keep in touch. I want to make you wealthy. Let's have fun also.

I'll make an official thread and Slack once I get through the preliminaries and am confident that my production (in code!) is back up to normal.

Yeah, I think the music business needs to make the switch that Uber did, going P2P, with all the thousands of starving artists who will never get signed anywhere. Even the ones who are signed aren't necessarily very happy with it: they are locked in and can't go to the toilet without asking their producer. The people to convince are more the producers, for the artists who are produced. And the economy is not really blooming in this sector at the moment Smiley I can say with a fair amount of confidence that many are looking for new markets and income. And the music industry has not been good with the internet since the beginning. And no producer, for an artist, is the dream Smiley

We also need to think about a good back office with stats & meta info, and a good way to remunerate participants: node owners, artists, developers.

For the language, I have mostly the low-level part on the git, with the module system, the dynamic tree with reference pointers, the http/rpc server, most of the C runtime, and the vector lib for the raytracing. The module methods can then be called from a JS application via http/json/rpc.

For the high-level part to make applications, I don't have much theory Smiley I would go for something very basic that can call module functions and handle events with JSON-like data types, in a spirit akin to BASIC with simple one-line statements that are easy to evaluate. Charm is really the kind of thing I'm looking at to get good OOP encapsulation in modules with an interface.

JS, as far as I know, is weak for defining good object typing and interfaces, but if I understand well, your idea is to have a source language with good typing and interface definition, and to transpile to JS to produce equivalent code.

But for me, I would still rather stick to a kernel in pure C that can expose module interfaces to JS/HTML5 apps, and have the node/server side in C rather than node.js. Much better for performance, memory use, and portability. The only thing it really misses now is a good HTML templating system to generate HTML5/JS pages based on input data. For this I'm not sure whether the best approach is browser-side generated HTML like AngularJS, something that generates preformatted HTML from the node, or XSLT, which can be done by both server and browser. Or something entirely different to define UI, event handling, and RPC calls in HTML5/JS.

That would, in effect, put part of the application in C modules with the framework, and part in JS/HTML5 that can call those modules. But having another source language that transpiles this part of the app, with the UI and module interfaces, why not.

But all the part with binary data crypto & transcoding in js  Cry Cry Cry
legendary
Activity: 3038
Merit: 1660
lose: unfind ... loose: untight
If you wanted to upend Core, then you should have more competent people who would have advised you that unbounded block size doesn't have an equilibrium.

If you have successfully demonstrated  that unbounded block size cannot reach an equilibrium outside a majority-collusion environment, I have missed it.

Yes you missed it. ...
Once we do model differing orphan rates for different miners, then the optimal strategies for mining come into play. And if you work out the game theory of that, you realize that collusion and centralization are the only possible outcome.

So you seem to be acknowledging that I am correct above...

And as a practical matter, Bitcoin operated just fine for multiple halvings with no practical bound on blocksize.

There was a minimum advised fee, and there were pools doing anti-spam; I think I've read that Luke Jr's pool rejected dust transactions.

Yes, minimum advised fee. 'Advised', as not encoded within the protocol. The fact that this worked up to the point that the production quota was finally persistently hit forms an existence proof that the system can work. The fact that it did work may or may not have something to do with all players having beneficial intent, but there it is. Indeed a populist sentiment includes the notion that it is against the best interests of all participants to do anything that kills the system. Which probably explains why our past known-majority miner (Discus Fish?) turned back from their position of mining majority without ever forming an attack from their assuredly-successful posture.
member
Activity: 107
Merit: 10

That sort of startled me, because it is catchy and it is in the vein of a "byteball" type of geekcool phonetics.

Amorphous was on my original brainstorming list (and I had thought of nebulous in the process of thinking of amorphous and mentioned nebulous as a negative 3 times on the page), but I think you are correct that Nebula is better than Amorphous or Nebulous.

But is that the meaning we want for a programming language? The language is targeted to programmers. Nebula does have a nice sound to it. Are you thinking the programming language is a feature marketed to the speculators also and thus the reason for the geekcool name?

While reading your ideas I couldn't help but think of the Bruce Lee quote: "Empty your mind, be formless, shapeless, like water. If you put water in the cup, it becomes the cup, and water can flow or it can crash."

This also hits back on your Zen ideas. Water, like your language, is fluid. It has many degrees of freedom and goes where it is directed. It does not resist, it just flows. Of course, fluidity is not restricted to just liquids. It can refer to anything that is readily changeable, anything that is not fixed and rigid.

So two names that immediately come to mind are... Flow and Fluid


member
Activity: 107
Merit: 10
I agree with Shelby that music may not be the best place to start. There is too much entrenched establishment thinking in that domain by both consumers and especially content creators (although it has gotten much better in recent years). Artists are still locked into old ways and I don't believe it will be the easiest market to "attack" first.

Yes, the indie market does offer some inroads, and in time I believe that will be the place to dig in and make our mark, but I don't believe it is the best place to start in the grand scheme of things. I would tend to think a market to focus on first would be one that already has its roots in upending the traditional media/content distribution status quo. The primary ones that come to mind are podcasting/vlogging (and to a lesser extent blogging). These industries are built on the idea of creators getting their content directly into the hands of users with minimal middleman interaction. And the content creators are always looking for new and better ways to monetize their offerings.

Unlike the music industry, where there are a plethora of preconceived ideas and biases holding people back, the podcast/vlog sphere has very little of that. They want innovative distribution ideas, that is why they came into existence in the first place. They want easier/better ways to spread their "art".

I believe the largest hurdle to overcome in this market is the idea that consumers have always gotten these things "for free" and there may be some resistance to now paying for them. But the whole idea of a micropayment social media platform is that the consumers wouldn't even really feel the brunt of paying anyway since the transactions would be so small. So I don't think this will be as difficult to overcome as it initially appears.
hero member
Activity: 770
Merit: 629
In other words, PoW doesn't favor a P2P network at all

Bingo. The ultimate conclusion is PoW becomes centralized. Which is what I've been predicting since roughly late 2013 and early 2014. But the selfish mining paper was what really brought it into more focus for me. Then it became clear that propagation, unequal hashrate distribution, and thus relative orphan rate are a critical factor causing centralization.

I'm discussing in 2 threads now.  But in the other thread, I also pointed out the "Lottery" nature of rewards as a cause of pooling, and hence centralisation.  

If you don't want a larger fluctuation than 10% of your income in a week, then you need to win the lottery 100 times in a week (Poisson standard deviation square-root rule). There are about 1000 lottery outcomes in a week in bitcoin. So you need to be part of a team that wins 10% of them to expect 100 wins in a week, as needed for your 10% income fluctuation; there can be at most 10 such teams.
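The square-root rule above can be sketched numerically, assuming roughly one block per 10 minutes (so about 1008 blocks per week):

```python
# Poisson lottery sketch: a miner (or pool) with share p of the hashrate
# expects lam = p * blocks_per_week wins, with standard deviation sqrt(lam),
# hence a relative weekly income fluctuation of 1 / sqrt(lam).
import math

blocks_per_week = 7 * 24 * 6  # one block per ~10 minutes -> 1008

def relative_sigma(p):
    lam = p * blocks_per_week
    return 1 / math.sqrt(lam)

print(round(relative_sigma(0.10), 2))   # 10% pool -> ~0.1 (10% fluctuation)
print(round(relative_sigma(0.001), 2))  # small solo miner -> ~1.0 (100%)
```

So only a handful of pools can offer their members the low-variance income that solo miners cannot get, which is the pooling pressure described above.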

--> pool centralization.  This is BTW the principal centralization factor: people pool, because solo miners have too great a sigma on their income.

Once you have a limited number of pools, you have a half-centralized system; once you have that, there is great benefit to a backbone. And you have bitcoin facebook.


(this is also why I think that there shouldn't be rewards...  The Poissonian reward lottery always makes you pool, which centralizes the system)
sr. member
Activity: 336
Merit: 265
In other words, PoW doesn't favor a P2P network at all

Bingo. The ultimate conclusion is PoW becomes centralized. Which is what I've been predicting since roughly late 2013 and early 2014. But the selfish mining paper was what really brought it into more focus for me. Then it became clear that propagation, unequal hashrate distribution, and thus relative orphan rate are a critical factor causing centralization.
hero member
Activity: 770
Merit: 629
I don't think the market will hold BU beyond an initial speculative pump before the big permadump just like ETC.

Then we are fucked with a 1MB blocksize forever, because afaik the mining cartel will never softfork adopt SegWit?

This is my intimate conviction.

Until bitcoin loses its brand name advantage, and forking becomes possible without fearing to lose the brand name.
sr. member
Activity: 336
Merit: 265
I don't think the market will hold BU beyond an initial speculative pump before the big permadump just like ETC.

Then we are fucked with a 1MB blocksize forever, because afaik the mining cartel will never softfork adopt SegWit?

Either way, Bitcoin is looking more and more clusterfucked.
hero member
Activity: 770
Merit: 629
You are not factoring in that not all nodes have links directly to all nodes, i.e. the peer network is not a fully connected mesh topology. And a node will not forward a block until it has completely verified it (so that it can't be leveraged for a spam amplification attack).

My idea is that from the moment that network propagation delays between (important) miners matter, they will form a strong backbone network, because they have all mutual incentives to do so.  If you invest in millions of hardware, you can invest in a strong network link to another big mining pool, *and that pool has also an advantage to set up a strong link to yours*.

This is why I see a PoW system as a strong (almost) fully interconnected backbone of big mining pools (I don't know, 10, 20, 5, 50, whatever), with all the other nodes connecting to them: these backbone miner pool data centers are also interested in getting the users' transactions directly (and maybe keeping the juiciest ones for themselves unless the user also forwards them to others).

In other words, PoW doesn't favor a P2P network at all, but a big backbone of strongly interconnected miner data centers, serving directly the users ; those funny users that really want to, can use other user's nodes as P2P proxy servers instead of connecting directly to one of the miner pool data centers.

"decentralisation" then simply consists in hoping that the bosses of the miner pool data centers don't collude too much.  This is not impossible, if they are 15 or so, in different countries.

The strong backbone network between important pools is unavoidable, because it is mutually advantageous.  It also kills the idea of selfish mining, because selfish mining counts on your selfish block potentially overtaking the public block when you learn about it: on a strong fully connected backbone, that is impossible: by the time YOU learn about the public block and want to publish your selfish chain, all the others ALSO received the public block (over their direct links) and your selfish block ALWAYS comes late.  So when network delays matter, a strong backbone will automatically emerge.
legendary
Activity: 1358
Merit: 1014
BUcoin crashed last night to 200 nodes after the new bug was discovered, and then the developers had the great idea of releasing a closed source patch. This thing is pretty much dead; nobody is running nodes except Roger Ver at this point and the couple of brainwashed people that will run BU no matter what.

How can the miners hard fork after this? If they do, they are insane or paid to cause damage. I don't think the market will hold BU beyond an initial speculative pump before the big permadump just like ETC.
sr. member
Activity: 336
Merit: 265
@dinofelis, you are super smart but not a blockchain developer expert (yet)  Tongue

Yes, this is with block rewards constant.  Tail emission.  But in the case of rewards proportional to block length (fees), you have to multiply A's revenues with the fact that his blocks bring in more money.  He has a lower percentage of blocks on the chain, but these blocks bring him more rewards as they are bigger.

So if his big blocks bring him 20% more income per block, this is neutral.

You are correct that I had an error in my (discombobulated, medication-induced delirium) thinking w.r.t. the losses from his relative orphan rate increasing proportionally, as I showed in my original Poisson-process math post: an increase in revenue of, say, 20% is not offset by a commensurate proportional increase of the relative orphan rate, e.g. from 0.1% to 0.12%.

Which is even more damning against @Peter R's thesis, as you point out.
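As a sanity check on that claim, here is a back-of-envelope computation using the numbers from the post (the baseline reward of 1 unit is an assumption for illustration):

```python
# Expected revenue per mined block = reward x (1 - orphan rate).
# Numbers from the post: 20% more income per (bigger) block, against an
# orphan rate rising proportionally from 0.1% to 0.12%.
small_blocks = 1.00 * (1 - 0.001)    # baseline reward, baseline orphan rate
big_blocks   = 1.20 * (1 - 0.0012)   # 20% more revenue, 20% higher orphan rate
print(big_blocks > small_blocks)     # True: the extra orphans nowhere near offset it
print(round(big_blocks / small_blocks, 3))
```

The tiny absolute increase in orphan probability cannot cancel a 20% revenue gain; only when orphan rates reach tens of percent does the trade-off bite.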

However, the thing to keep in mind is that an orphan rate of 0.2 caused purely by network propagation means that, on average, your blocks take 0.2 of the block period to reach the others. 0.2 of 10 minutes is 2 minutes. If you have good links, for a block to take 2 minutes it must be mind-bogglingly HUGE: at 2 minutes over a 10 Gb/s link to another miner, we are talking about 100 GB blocks or something.
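A quick order-of-magnitude check of that figure, using the link speed and block period stated above:

```python
link_bps = 10e9            # 10 Gb/s backbone link between miners
delay_s = 0.2 * 10 * 60    # 0.2 of a 10-minute block period = 120 s
block_gigabytes = link_bps * delay_s / 8 / 1e9  # bits -> bytes -> GB
print(block_gigabytes)     # 150.0 -- the "100 GB or something" ballpark
```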

Per the research I already cited upthread, the current network diameter is around 6 seconds, but it does take minutes to reach 95% of the network. You are not factoring in that not all nodes have links directly to all other nodes, i.e. the peer network is not a fully connected mesh topology. And a node will not forward a block until it has completely verified it (so that it can't be leveraged for a spam amplification attack).
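The store-and-forward effect can be sketched with hypothetical numbers (the per-hop bandwidth, verification time, and hop count below are assumptions, not measurements):

```python
# Each hop must fully download AND verify a block before relaying it,
# so end-to-end delay scales with hop count, not just raw bandwidth.
block_gb = 1.0                         # hypothetical block size
link_gbps = 1.0                        # typical node link, not a miner backbone
verify_s = 5.0                         # hypothetical full-verification time
transfer_s = block_gb * 8 / link_gbps  # 8 s to move the block one hop
hops = 6                               # multi-hop path across the P2P overlay
print(hops * (transfer_s + verify_s))  # 78.0 s to cross the network
```

This is why a multi-hop P2P overlay is minutes slower than a directly meshed backbone, even with identical link speeds.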

As I said earlier, this kind of argument only starts to play a role when the network is already dead. If a significant fraction of the block time (10 minutes in Bitcoin) is what it takes for miners amongst themselves to propagate blocks and get them orphaned, no "normal node user" can ever keep the block chain up to date, because normal users have a worse network connection to the miners (the source of the block chain) than the miners have amongst themselves. Especially if network quality seriously impacts their revenues, miners will build the best possible links between them: it is mutually advantageous, and much less costly than the mining itself (a 10 Gb/s link to Joe MiningPool is less expensive than your mining gear).

You are correct that as the network becomes centralized into a few pools, then a hub-and-spoke topology is sufficient, but then we don't have decentralization any more.

But the larger point you are not mentioning is that larger blocks are a weapon against decentralization: for the reason stated above, and because those with direct fast links (and collectively more than 33% of the systemic hashrate) get disproportionately more reward than their hashrate share. That is the famous selfish-mining paper, and more recently the optimal mining strategies paper, which adds more strategies.

If you really want a solution to this problem, then "block length" is not the right parameter, but block income is:

one should cap the "block reward + fees" to, say, 20 BTC. As such, miners can make all the blocks they want, long or short, but their TOTAL INCOME (reward + fees) is capped at 20 BTC FOREVER (as part of the protocol). A block with a total reward larger than 20 BTC is simply invalid.
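A minimal sketch of that proposed validity rule (the 20 BTC cap comes from the post; the function and parameter names are hypothetical):

```python
MAX_BLOCK_INCOME_BTC = 20.0  # protocol constant proposed above

def block_valid(subsidy_btc: float, total_fees_btc: float) -> bool:
    """A block whose total income (reward + fees) exceeds the cap is invalid."""
    return subsidy_btc + total_fees_btc <= MAX_BLOCK_INCOME_BTC

print(block_valid(12.5, 5.0))   # True: 17.5 BTC total income
print(block_valid(12.5, 10.0))  # False: 22.5 BTC exceeds the cap
```

Note that such a rule only caps income visible in-band; it cannot see fees paid out of band.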

Miners will simply take their fees as pseudo-anonymous transactions with 0 or low fee.  Tongue



None of this is going to happen. Miners want small blocks and a fighting fee market. The cartel you are talking about only sets in with such incredibly large blocks that they are out of the question in the next few years; if you want a mining cartel, the telephone between mining-pool bosses is a much more useful device than multi-GB blocks that saturate small miners' network links and most of the user nodes.

You had a mistake in your concept of propagation through the network: you are assuming a decentralized network is a fully connected mesh topology.

You don't need much larger blocks to cause an amplification of propagation delay to the smallest miners, destroying decentralization and enabling 51% control. Also, it only requires a very small advantage in relative orphan rate to slowly accumulate more hashrate than the opposition, so it doesn't require the 100 GB blocks you are computing.
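To illustrate how a small relative orphan-rate advantage compounds, here is a toy simulation (the full-reinvestment model and all numbers are hypothetical assumptions, not a claim about actual miner economics):

```python
# Two miners start at 50% hashrate each; A's orphan rate is slightly lower.
# Each round, revenue (hashrate share x non-orphan rate) is fully
# reinvested into hashrate, so the advantaged miner's share drifts upward.
share_a = 0.5
orphan_a, orphan_b = 0.010, 0.012
for _ in range(50):
    rev_a = share_a * (1 - orphan_a)
    rev_b = (1 - share_a) * (1 - orphan_b)
    share_a = rev_a / (rev_a + rev_b)
print(round(share_a, 3))  # ~0.525 after 50 rounds, and still climbing
```

A 0.2-percentage-point orphan advantage is enough to drive a monotonic drift toward majority hashrate; no 100 GB blocks required.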

Bitcoin is already centralized

Yes, in that case they don't need huge blocks to destroy decentralization, because it is already destroyed. They only need to be able to increase block sizes to whatever level of transaction fees they think maximizes their revenue (volume × transaction fees).

In either case, I am showing that big blocks are a cartelization paradigm. I am being thorough. Please don't fault me for being thorough, just because the network is already centralized.
sr. member
Activity: 336
Merit: 265

That sort of startled me, because it is catchy and in the vein of "byteball"-type geekcool phonetics.

Amorphous was on my original brainstorming list (I thought of Nebulous in the process of thinking of Amorphous, and mentioned Nebulous as a negative three times on that page), but I think you are correct that Nebula is better than Amorphous or Nebulous.

But is that the meaning we want for a programming language? The language is targeted to programmers. Nebula does have a nice sound to it. Are you thinking the programming language is a feature marketed to the speculators also and thus the reason for the geekcool name?
sr. member
Activity: 336
Merit: 265
Byteball could be the killer of BTC. The end of BTC's dominant position.

Unless Tony changed it since I discussed it with him in his official thread in Q4 2016, in my opinion Byteball has at least four major flaws:

1. Afair, the transaction-fee mechanism doesn't scale properly with the appreciation in the price of bytes. I don't remember all the details, but at the time this seemed to totally break the design going forward.

2. My analysis is that the 12 witnesses can't realistically be changed without a hard fork if 50% of them stop functioning or start colluding (the algorithm for doing so won't work in reality unless the whales coordinate to make it so, i.e. it is not decentralized). In other words, another CartelCoin, or hard-forking chaos in the making.

3. Afaik, the transaction confirmations are fast but not sub-second (because you have to wait for 7 of the 12 witnesses to sign a new stability-point tip), thus not optimal for apps that need very low latency for on-chain actions. Steem's (BitShares') Graphene (DPoS) has, I believe, faster 1-confirmations (because whales can monitor and replace witnesses at will, and because each witness produces its own block), but also not sub-second (especially if security against orphans requires more confirmations). Graphene has no asynchrony, so I am not arguing it is better than Byteball; neither of them met my stringent design requirements. Byteball is interesting, which is why I dedicated an entire section of my whitepaper to discussing its stability-point algorithm. I applaud Tony on his clever DAG consensus algorithm.

4. Afaics, his distribution model totally ruined any chance for a funding model to onboard the app developers, content providers, and users. This is really the killer mistake.

Also, I don't think the use of JSON as a smart-contract language is any tremendous innovation. He appears to be putting a lot of energy into that, which is an entirely different focus from mine. One of the experiments I am working on is trying to create a better programming language for programming apps (a statically typed derivative of JavaScript that initially transpiles to TypeScript), not just smart contracts.


You don't see me going around in every Byteball thread bashing it. I will not do that. Good luck with it.

Btw, the Byteball logo/avatar is quite unique: Spartan (à la Google) and it grabs the eye (personally it makes me think of HAL's camera eye in 2001: A Space Odyssey).
