Author

Topic: Gold collapsing. Bitcoin UP. - page 144. (Read 2032270 times)

sr. member
Activity: 384
Merit: 258
July 06, 2015, 06:29:54 PM
for each block in the Blockchain, which will help answer Q1.  Does anyone know where I can get comprehensive data on the typical node's mempool size versus time to help answer Q2?
statoshi.info might help !
EDIT: Export feature is in the "wheel" entry of the menu
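If statoshi.info doesn't have exactly the series you need, you can log your own from a node you run. A minimal sketch, assuming a running bitcoind with `bitcoin-cli` on the PATH (interval and output format are my own choices):

```python
# Log mempool size over time from a local node.  Assumes bitcoind is
# running and bitcoin-cli is on the PATH; adjust the interval as needed.
import json
import subprocess
import time

def read_mempool_info(raw_json):
    """Parse getmempoolinfo output into (tx_count, total_bytes)."""
    info = json.loads(raw_json)
    return info["size"], info["bytes"]

def poll(interval_seconds=60):
    """Print one CSV row (unix_time,tx_count,bytes) per interval, forever."""
    while True:
        raw = subprocess.check_output(["bitcoin-cli", "getmempoolinfo"])
        count, total = read_mempool_info(raw)
        print("%d,%d,%d" % (time.time(), count, total))
        time.sleep(interval_seconds)
```

Redirect the output to a file and you have a crude mempool-size-versus-time dataset for your own node; it won't tell you about the "typical" node, but running it on a few nodes with different relay policies gives you a spread.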
sr. member
Activity: 420
Merit: 262
July 06, 2015, 06:22:42 PM
The coin needs to be the first legitimate instance of its kind, had a fair start/emission, and a market niche
-----------------------------------------------------------------------------------------------------------------
Litecoin FAIL (not the first of its kind)
Peercoin FAIL (no market niche)
Bytecoin FAIL (not fair start)
Boolberry FAIL (not the first of its kind)
Ethereum FAIL (questionable start)
All shitcoins FAIL (2-3 counts)

Only BTC and XMR fulfill all conditions, so it makes sense to invest in them (and them alone). To be fully hedged, you can keep 99.8% in BTC and set 0.2% aside in XMR. Going over this ratio is overinvesting in XMR.

It is not hard to come up with these conclusions after a generous overview of the top 50 altcoins, which is why I'm as unimpressed with the LTC market as with its innovative features (none).

+1 on rpietila's logic.

I would only add it needs a reasonable shot of attaining critical mass, so the niche needs to be evaluated for that probability.

I am thinking Trapqoin has a potentially large market  Tongue

I'm thinking about maybe making Trapqoin by Shetty One Eye.
legendary
Activity: 4760
Merit: 1283
July 06, 2015, 06:16:23 PM

It behoves him to continue posting and proving his thread has the largest readership on bitcointalk by far because it assures he will win his case...

...the more you guys fight him and post here, the more you help him retain 3000 BTC.

Don't you realize he is making the technical errors on purpose!


Good catch.  I had not thought of that.

sr. member
Activity: 420
Merit: 262
July 06, 2015, 06:14:42 PM
I favor Adam Backamoto's

stop equating Adam to Satoshi.  no contest.

you have a serious Daddy problem.

Nowhere near as serious as those who consider cypherdoc to be some sort of daddy figure.  There are probably vastly fewer who consider you to be 'the LeBron James of Bitcoin' than you and your attorney might imagine.  Probably there are a handful though, which is pretty sad.

The LeBron assertion is hilariously funny though one way or another.  Whether it was you or your attorney who came up with that one, kudos for the comic relief.

It behoves him to continue posting and proving his thread has the largest readership on bitcointalk by far because it assures he will win his case...

...the more you guys fight him and post here, the more you help him retain 3000 BTC (perhaps in collusion with HF if they aren't just derelict and who knows  Huh perhaps even the judge via his well connected Obama legal counsel  Huh that being wild speculation Huh  not an accusation ...).

Don't you realize he is either making the technical errors on purpose or it is a strategy he inherited by dumbdorc luck!

If they wanted to win, they wouldn't argue that DorkyDoc didn't do adequate promotion (because there is an entire thread on this forum showing he did, and now we have a core Dev admitting he invested 100 BTC based on Dorc's thread which adds validity to his promotional value). Rather they would...

P.S. Gmax you committed a category error. It doesn't matter to his case if he slobbers on the technology (and because so many people can't understand the technology including some of the readers here, the attorneys, and the judge); it only matters that he has a huge following.
legendary
Activity: 4760
Merit: 1283
July 06, 2015, 05:38:34 PM

no, memory is not just used for 1MB blocks.  it's also used to store the mempools plus the UTXO set.  large block attacks

Again, you're wrong on the technology. The UTXO set is not held in RAM. (There is caching, but it's arbitrary in size, controlled by the dbcache argument.)

Quote
have the potential to collapse a full node by overloading the memory.  at least, that's what they've been arguing.

"They" in that case is sketchy nutballs advocating these "stress tests", and _you_ arguing that unconfirmed transactions are the real danger.

Super weird that you're arguing that the Bitcoin network is overloaded at the average level of space usage in blocks, while you're calling your own system "under-utilized" when you're using a similar proportion of your disk and enough of your RAM to push you deeply into swap.
...

Thanks for this tid-bit about the UTXO database.  This is the kind of info that someone who is mildly familiar with database technology, but doesn't really want to make a life's work of studying the technicals, finds cumbersome to pick out.  Especially since modern Bitcoin is already past what is realistic to run behind my ($80/mo) connectivity, so unless/until I set up a VM somewhere it's kind of a textbook exercise.

Just from knowing a little about database tuning and RAM vs. disk-backed memory, I have always wondered whether people have made projections about the performance of the validation process under different scenarios and whether they can/will become problematic.  One thing I've always wondered is whether it would be possible to structure transactions such that they would load validation processes too heavily on cue, and particularly whether it is the common case to push more and more data out of the dbcache.  Any thoughts on this that can be quickly conveyed?
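For what it's worth, here is the kind of projection I have in mind, as a toy model (all costs are my own made-up placeholders, not measured, and not how Bitcoin Core actually accounts for this):

```python
# Toy model of block validation cost.  All per-operation costs are invented
# placeholders; the point is the shape: UTXO lookups that miss the dbcache
# fall through to disk and cost orders of magnitude more than cached ones.
def validation_time_ms(n_inputs, cache_hit_rate,
                       sig_cost_ms=0.1,    # per-input signature check (guess)
                       hit_cost_ms=0.01,   # cached UTXO lookup (guess)
                       miss_cost_ms=1.0):  # disk-backed UTXO lookup (guess)
    hits = n_inputs * cache_hit_rate
    misses = n_inputs * (1.0 - cache_hit_rate)
    return n_inputs * sig_cost_ms + hits * hit_cost_ms + misses * miss_cost_ms

# A block with ~4000 inputs: a warm cache vs. one an attacker has flushed.
warm = validation_time_ms(4000, cache_hit_rate=0.95)
cold = validation_time_ms(4000, cache_hit_rate=0.10)
```

Even with made-up constants the cold-cache case comes out several times slower, which is exactly the scenario I'm asking about: can transactions be structured to force the cold case?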

legendary
Activity: 1260
Merit: 1116
July 06, 2015, 05:37:54 PM
cypherdoc, who is paying you now ? (KNC ?)

Here bro. I heard you liked tactical pitchforks Grin

legendary
Activity: 1414
Merit: 1000
July 06, 2015, 05:33:29 PM
cypherdoc, who is paying you now ? (KNC ?)

LOL, no one.

Then you are losing money. :-)
legendary
Activity: 1764
Merit: 1002
July 06, 2015, 05:29:47 PM
cypherdoc, who is paying you now ? (KNC ?)

LOL, no one.
legendary
Activity: 1414
Merit: 1000
July 06, 2015, 05:27:20 PM
cypherdoc, who is paying you now ? (KNC ?)
legendary
Activity: 4760
Merit: 1283
July 06, 2015, 05:23:20 PM

... what should we say about the correlation between finding out that you've taken hundreds of thousands of dollars in payments for paid shilling and finding out how loud and opinionated you are on this blocksize subject?

I will not be surprised if this is true. Only I'd expect a higher price ... a few million. He is fighting hard.

A bit too hard, I'd say.  He's losing his support base.  The more technical people first, but eventually most of those who can be dazzled by technobabble word-salad that cypherdoc himself doesn't really understand will fall away as well.

If I were hiring cypherdoc to shill for me he would have been fired about a month ago when he reached an inflection point of doing more harm than good.

Rather amazing that he still is posting.  For starters, his counsel should have advised him to stop posting here.

Yup.  He had an opportunity to save face and bow out that way but he's blown that one.  Now he'll have to think of a different way, or hopefully stick around and continue to show the world the Gavinista's true colors.

sr. member
Activity: 280
Merit: 250
July 06, 2015, 05:19:35 PM

... what should we say about the correlation between finding out that you've taken hundreds of thousands of dollars in payments for paid shilling and finding out how loud and opinionated you are on this blocksize subject?

I will not be surprised if this is true. Only I'd expect a higher price ... a few million. He is fighting hard.

A bit too hard, I'd say.  He's losing his support base.  The more technical people first, but eventually most of those who can be dazzled by technobabble word-salad that cypherdoc himself doesn't really understand will fall away as well.

If I were hiring cypherdoc to shill for me he would have been fired about a month ago when he reached an inflection point of doing more harm than good.



Rather amazing that he still is posting.  For starters, his counsel should have advised him to stop posting here.
legendary
Activity: 4760
Merit: 1283
July 06, 2015, 05:11:55 PM

... what should we say about the correlation between finding out that you've taken hundreds of thousands of dollars in payments for paid shilling and finding out how loud and opinionated you are on this blocksize subject?

I will not be surprised if this is true. Only I'd expect a higher price ... a few million. He is fighting hard.

A bit too hard, I'd say.  He's losing his support base.  The more technical people first, but eventually most of those who can be dazzled by technobabble word-salad that cypherdoc himself doesn't really understand will fall away as well.

If I were hiring cypherdoc to shill for me he would have been fired about a month ago when he reached an inflection point of doing more harm than good.

legendary
Activity: 1764
Merit: 1002
July 06, 2015, 05:10:48 PM
no, memory is not just used for 1MB blocks.  it's also used to store the mempools plus the UTXO set.  large block attacks
Again, you're wrong on the technology. The UTXO set is not held in RAM. (There is caching, but it's arbitrary in size, controlled by the dbcache argument.)

Quote
There is no requirement that mempools be in sync -- in fact, they're not, and the whole purpose of the blockchain is to synchronize nodes.  The mempools of nodes with identical fee and filtering policies and who are similarly positioned on the network will be similar, but any change in their policies will make them quite different.
well, that was precisely Peter's mathematical point the other day that you summarily dismissed.  f2pool and Antminer are NOT in a similar position on the network, as they are behind the GFC.  they have in fact changed their verification policies in response to what they deem are large, full blocks as a defensive measure.  that's why their average validation times are 16-37 sec long and NOT the 80ms you claim.  thus, their validation times of large blocks will go up and so will their number of 0 tx SPV defensive blocks. and that's why they've stated that they will continue to mine SPV blocks.  thanks for making his point.
PeterR wasn't saying anything about mempools, and-- in fact-- he responded expressing doubt about your claim that mempool size had anything to do with this.  Moreover, I gave instructions that allow _anyone_ to measure verification times for themselves.  Your argument was that miners would be burned by unconfirmed transactions, I responded that this isn't true-- in part because they can keep whatever mempool size they want.

To further make the point about mempools, here is what the mempool looks like on a node with mintxfee=0.0005 / minrelaytxfee=0.0005 set:


$ ~/bitcoin/src/bitcoin-cli  getmempoolinfo
{
    "size" : 301,
    "bytes" : 271464
}


Quote
it also is a clear sign that miners do have the ability and financial self interest to restrict block sizes and prevent bloat in the absence of a block limit.
Their response was not to use smaller blocks, their response was to stop validating entirely.  (And, as I pointed out-- other miners are apparently mining without validating and still including transactions).

Quote
these SPV related forks have only occurred, for the first time ever, now during this time period where spammers are filling up blocks and jacking up the mempool.  full blocks have been recognizable as 950+ and 720+kB.  this is undeniable.
If we're going to accept that every correlation means causation, what should we say about the correlation between finding out that you've taken hundreds of thousands of dollars in payments for paid shilling and finding out how loud and opinionated you are on this blocksize subject?

In this case, these forks are only visible by someone mining an invalid block, which no one had previously done for over a year.

Quote
if they are seeing inc orphans, why haven't they retracted their support of Gavin's proposal
They are no longer seeing any orphans at all, they "solved" them by skipping validation entirely. They opposed that initial proposal, in fact, and suggested they could at most handle 8MB, which brought about a new proposal which used 8MB instead of 20MB though only for a limited time. Even there the 8MB was predicated on their ability to do verification free mining, which they may be rethinking now.

Quote
i don't believe that.
I am glad to explain things to people who don't understand, but you've been so dogmatically grinding your view that it's clear that every piece of data you see will only "confirm" things for you; in light of that I don't really have unbounded time to waste trying. Perhaps someone else will.


On my phone now so this is going to be hard to respond.

First, nice try pretending UTXO is not potentially a memory problem. We've had long debates about this on this thread so you are just being contrary.

Second, my reference to Peter's argument above said nothing about mempools; I was talking about block verification times. You're obfuscating again.

Third, SPV mining of 0 tx blocks like now doesn't mean they would do the same without a limit. Perhaps they would pare down block sizes to an efficient level if other larger miners were allowed to clear out the unconfirmed TX set.

Fourth, you have no shame with the ad hominems, do you? No, I'm not endorsing for any company now; with HF, I told everyone ahead of time what I was doing.
legendary
Activity: 1414
Merit: 1000
July 06, 2015, 04:55:52 PM
... what should we say about the correlation between finding out that you've taken hundreds of thousands of dollars in payments for paid shilling and finding out how loud and opinionated you are on this blocksize subject?


I will not be surprised if this is true. Only I'd expect a higher price ... a few million. He is fighting hard.
staff
Activity: 4284
Merit: 8808
July 06, 2015, 04:38:13 PM
no, memory is not just used for 1MB blocks.  it's also used to store the mempools plus the UTXO set.  large block attacks
Again, you're wrong on the technology. The UTXO set is not held in RAM. (There is caching, but it's arbitrary in size, controlled by the dbcache argument.)

Quote
There is no requirement that mempools be in sync -- in fact, they're not, and the whole purpose of the blockchain is to synchronize nodes.  The mempools of nodes with identical fee and filtering policies and who are similarly positioned on the network will be similar, but any change in their policies will make them quite different.
well, that was precisely Peter's mathematical point the other day that you summarily dismissed.  f2pool and Antminer are NOT in a similar position on the network, as they are behind the GFC.  they have in fact changed their verification policies in response to what they deem are large, full blocks as a defensive measure.  that's why their average validation times are 16-37 sec long and NOT the 80ms you claim.  thus, their validation times of large blocks will go up and so will their number of 0 tx SPV defensive blocks. and that's why they've stated that they will continue to mine SPV blocks.  thanks for making his point.
PeterR wasn't saying anything about mempools, and-- in fact-- he responded expressing doubt about your claim that mempool size had anything to do with this.  Moreover, I gave instructions that allow _anyone_ to measure verification times for themselves.  Your argument was that miners would be burned by unconfirmed transactions, I responded that this isn't true-- in part because they can keep whatever mempool size they want.

To further make the point about mempools, here is what the mempool looks like on a node with mintxfee=0.0005 / minrelaytxfee=0.0005 set:


$ ~/bitcoin/src/bitcoin-cli  getmempoolinfo
{
    "size" : 301,
    "bytes" : 271464
}


Quote
it also is a clear sign that miners do have the ability and financial self interest to restrict block sizes and prevent bloat in the absence of a block limit.
Their response was not to use smaller blocks, their response was to stop validating entirely.  (And, as I pointed out-- other miners are apparently mining without validating and still including transactions).

Quote
these SPV related forks have only occurred, for the first time ever, now during this time period where spammers are filling up blocks and jacking up the mempool.  full blocks have been recognizable as 950+ and 720+kB.  this is undeniable.
If we're going to accept that every correlation means causation, what should we say about the correlation between finding out that you've taken hundreds of thousands of dollars in payments for paid shilling and finding out how loud and opinionated you are on this blocksize subject?

In this case, these forks are only visible by someone mining an invalid block, which no one had previously done for over a year.

Quote
if they are seeing inc orphans, why haven't they retracted their support of Gavin's proposal
They are no longer seeing any orphans at all, they "solved" them by skipping validation entirely. They opposed that initial proposal, in fact, and suggested they could at most handle 8MB, which brought about a new proposal which used 8MB instead of 20MB though only for a limited time. Even there the 8MB was predicated on their ability to do verification free mining, which they may be rethinking now.

Quote
i don't believe that.
I am glad to explain things to people who don't understand, but you've been so dogmatically grinding your view that it's clear that every piece of data you see will only "confirm" things for you; in light of that I don't really have unbounded time to waste trying. Perhaps someone else will.
sr. member
Activity: 280
Merit: 250
July 06, 2015, 03:43:41 PM
Even if mining pools set higher fees, aren't the unconfirmed TX's still added to their mempools?
No.

In other words, cypherdoc is clueless about how Bitcoin works.

Moar like the LeBron of Lasik, am I right?   Cheesy

Doubtful.  Dr. Lowelife spends every waking moment on bitcointalk and reddit, so it seems.  A malpractice suit which stuck is one hypothesis which springs to mind, and could explain an early and intense interest in asset protection.  A clinic full of subordinate eye-frying minions would be another.  Neither these nor countless others are really interesting enough for me to spend much time on, but perhaps others who are more into such things will ferret out this mystery.

Many of my co-workers kept a stock market ticker within easy visual reach and referred to them about 60 times per hour as they performed the tasks of the day.  That's fine for code monkeys, but for someone doing eye surgery with laser beams it's an entirely different ball of wax.



So now we know what his "highly successful business" entailed.   Roll Eyes
legendary
Activity: 1372
Merit: 1000
July 06, 2015, 03:43:03 PM
since small pools can also connect to the relay network, and i assume they do, there is no reason to believe that large miners can attack small miners with large blocks.  in fact, we've seen the top 5 chinese miners disadvantaged by the GFC, making it clear they CANNOT perform this attack despite what several guys have FUD'd.
Basic misunderstanding there--- Being a larger miner has two effects: One is throughput not latency related: Being larger creates a greater revenue stream which can be used to pay for better resources.   E.g. if the income levels support one i7 CPU per 10TH/s of mining, then a 10x larger pool can afford 10x more cpus to keep up with the overall throughput of the network, which they share perfectly (relay network is about latency not so much about throughput-- its at best a 2x throughput improvement, assuming you were bandwidth limited);    the other is latency related,   imagine you have a small amount of hashpower-- say 0.01% of the network-- and are a lightsecond away on the moon.  Any time there is a block race, you will lose because all of the earth is mining against you because they all heard your block 1+ seconds later.  Now imagine you have 60% of the hashpower on the moon, in that case you will usually win because even though the earth will be mining another chain, you have more hashpower. For latency, the size of miner matters a lot, and the size of the block only matters  to the extent that it adds delay.

When it comes to orphaning races, miner size matters, by an amount related to the product of the size of the miner and the time it takes to validate a block.

Then why don't we decrease the block time from 10 min down to, let's say, 2 min? This way we can also have more transactions/second without touching the blocksize.
Ouch, the latency related issues are made much worse by smaller interblock gaps once they are 'too small' relative to the network radius. When another block shows up on the network faster than you can communicate about your last, you get orphaned.  And for throughput related bottlenecks it doesn't matter if X transactions come in the form of a 10MB block or 10 1MB blocks.
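The moon example above can be put into a toy formula (my own simplification, not anything from the protocol or the post being quoted): while your block propagates and is validated for tau seconds, the rest of the network may find a competitor, and bigger miners survive the resulting race more often:

```python
# Toy orphan-risk model.  p = your share of network hashpower, tau = total
# delay in seconds (propagation + validation), T = mean block interval.
# Simplification: if a competing block appears during tau, you keep the
# race roughly with probability p (your own hashpower extends your chain).
import math

def orphan_probability(p, tau, block_interval=600.0):
    competitor_found = 1.0 - math.exp(-(1.0 - p) * tau / block_interval)
    return competitor_found * (1.0 - p)

tiny_miner = orphan_probability(0.0001, tau=6.0)  # 0.01% hashpower
huge_miner = orphan_probability(0.60, tau=6.0)    # 60% hashpower
```

Note that tau is the only place block size enters: a bigger block hurts only insofar as it adds propagation or validation delay, while a larger p shrinks the damage, which is the centralization pressure being described.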



I have highlighted in red what should be considered external capital investment costs and part of the business decisions of miners; how these issues are resolved is not part of the Bitcoin protocol. It's not up to the developers to optimize for miners in China or on the moon, or on earth for that matter.

If someone was to build 60% of the hashing power, all the power to them; the Bitcoin protocol manages only so much, and then there is game theory and the Nash equilibrium to manage extreme circumstances like 60% of the hashing power coming from a single facility in China or on the Moon.

When it comes to orphaning races, miner size matters, as does optimizing your business to leverage the inherent limitations in technology and the guidelines of the protocol. Managing orphans is an essential function that keeps the incentives in the Bitcoin protocol distributed and functioning; reporting a 4% orphan rate from memory when your pool was small and starting out is very different than publishing verifiable numbers. The protocol was designed around the fact that we don't live in a harmonious world where resources are optimally distributed.

Why should we change it?
legendary
Activity: 1764
Merit: 1002
July 06, 2015, 03:37:02 PM
meanwhile, my full nodes sit here totally unstressed and under-utilized.  Roll Eyes



i thought gmax et al said "large" blocks were going to collapse my nodes?

I've been reading this thread for a long time, and mostly enjoyed the economic insights it used to be about.  However, I can only agree with those who see cypherdoc's reputation fading with his supposedly technical comments like the one above.  It has been pointed out repeatedly and should be clear as day - currently we have the 1 MB limit you are complaining about.  That's precisely why your nodes are "unstressed and under-utilized".  From the current stress on your nodes, you can at best guess very vaguely at what they would be doing with larger blocks.  I don't see why that's an argument you make in favour of increasing the blocksize.  (Same as your comments about "full" blocks that were debunked by others above.)

no, memory is not just used for 1MB blocks.  it's also used to store the mempools plus the UTXO set.  large block attacks have the potential to collapse a full node by overloading the memory.  at least, that's what they've been arguing.
legendary
Activity: 1764
Merit: 1002
July 06, 2015, 03:33:00 PM
since small pools can also connect to the relay network, and i assume they do, there is no reason to believe that large miners can attack small miners with large blocks.  in fact, we've seen the top 5 chinese miners disadvantaged by the GFC, making it clear they CANNOT perform this attack despite what several guys have FUD'd.
Basic misunderstanding there--- Being a larger miner has two effects: One is throughput not latency related: Being larger creates a greater revenue stream which can be used to pay for better resources.   E.g. if the income levels support one i7 CPU per 10TH/s of mining, then a 10x larger pool can afford 10x more cpus to keep up with the overall throughput of the network, which they share perfectly (relay network is about latency not so much about throughput-- its at best a 2x throughput improvement, assuming you were bandwidth limited);    the other is latency related,   imagine you have a small amount of hashpower-- say 0.01% of the network-- and are a lightsecond away on the moon.  Any time there is a block race, you will lose because all of the earth is mining against you because they all heard your block 1+ seconds later.  Now imagine you have 60% of the hashpower on the moon, in that case you will usually win because even though the earth will be mining another chain, you have more hashpower. For latency, the size of miner matters a lot, and the size of the block only matters  to the extent that it adds delay.

i don't believe that.  when i ran my small solo mining pool, i'll bet that the quality of my resources and bandwidth was superior to that of the large mining pools i've seen in the videos.  furthermore, if my small pool is connected to the same relay network as a large miner, then the transmission of our respective blocks on the moon should reach earth at the same time; thus, our respective chances to find the next block simply go back to our respective % hashrates compared to the network.
Then why don't we decrease the block time from 10 min down to, let's say, 2 min? This way we can also have more transactions/second without touching the blocksize.
Ouch, the latency related issues are made much worse by smaller interblock gaps once they are 'too small' relative to the network radius. When another block shows up on the network faster than you can communicate about your last, you get orphaned.  And for throughput related bottlenecks it doesn't matter if X transactions come in the form of a 10MB block or 10 1MB blocks.




legendary
Activity: 1135
Merit: 1166
July 06, 2015, 02:33:19 PM
meanwhile, my full nodes sit here totally unstressed and under-utilized.  Roll Eyes



i thought gmax et al said "large" blocks were going to collapse my nodes?

I've been reading this thread for a long time, and mostly enjoyed the economic insights it used to be about.  However, I can only agree with those who see cypherdoc's reputation fading with his supposedly technical comments like the one above.  It has been pointed out repeatedly and should be clear as day - currently we have the 1 MB limit you are complaining about.  That's precisely why your nodes are "unstressed and under-utilized".  From the current stress on your nodes, you can at best guess very vaguely at what they would be doing with larger blocks.  I don't see why that's an argument you make in favour of increasing the blocksize.  (Same as your comments about "full" blocks that were debunked by others above.)