
Topic: Gold collapsing. Bitcoin UP. - page 139. (Read 2032248 times)

legendary
Activity: 1764
Merit: 1002
July 07, 2015, 05:29:14 PM
great Reddit thread about how the network of full nodes is VERY capable of handling much more than the 2-3 TPS FUD being pushed.  it looks like it can handle anywhere from 221-442 TPS at peak.  no wonder my full nodes have been sitting around unstressed and underutilized:

https://www.reddit.com/r/Bitcoin/comments/3cgyiv/new_transaction_record_442_txs_the_nodes_endured/
legendary
Activity: 4690
Merit: 1276
July 07, 2015, 05:17:02 PM
Clean and synched mempools make for a cleaner blockchain; otherwise, garbage in, garbage out. Most mempools are synched because node owners don't usually mess with tx policy: they accept the defaults.
The blockchain itself contains substantial counter-evidence. Any block over 750k was produced with changed settings, as were a substantial chunk of the transactions.  I think this is all well and good, but it's not the case that it's all consistent.

IBLT doesn't currently exist, and other mechanisms like the relay network protocol don't care about mempool synchronization levels.

IBLT does exist as it has been prototyped by Kalle and Rusty. It is just nowhere near ready for a pull request.
It has never relayed a _single_ block, not in a lab, not anywhere. It does _not_ exist. It certainly can and will exist-- though it's not yet clear how useful it will be over the relay network-- Gavin, for example, doesn't believe it will be useful "until blocks are hundreds of megabytes".

While you are bumming around here, Greg, perhaps you could comment on the first thing which hit me when I looked into IBLT.  Nobody else has.  The thought hit me partially because of who seemed to be promoting it.

1) w/ question

Solex posits the 'garbage in/garbage out' principle, which IBLT is supposed to be able to address by helping everyone's mempool be synchronized.  It immediately struck me that some (if not most) people would consider blacklisted or non-whitelisted UTXOs as being 'garbage'.  It struck me that IBLT could be used as an efficient way for miners to be sure that they had 'properly' cleared their transaction list of 'garbage' transactions so they didn't 'waste their time' mining the 'wrong' transactions.  Has that danger/potential been explored by the techies?
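For concreteness, here is a minimal sketch in Python of the data structure under discussion. An IBLT lets two peers recover the small *difference* between two large sets (e.g. mempools) from a constant-size summary. All names and parameters are illustrative; this is not the Kalle/Rusty prototype, and it ignores hash-collision corner cases.

    import hashlib

    NUM_HASHES = 3
    NUM_CELLS = 120   # must comfortably exceed the expected set difference

    def _indexes(key):
        # Map a key (e.g. a truncated txid, as an int) to NUM_HASHES cells.
        return [int.from_bytes(hashlib.sha256(b"%d:%d" % (i, key)).digest()[:4],
                               "big") % NUM_CELLS for i in range(NUM_HASHES)]

    def _checksum(key):
        return int.from_bytes(hashlib.sha256(b"c:%d" % key).digest()[:4], "big")

    class IBLT:
        def __init__(self):
            self.count = [0] * NUM_CELLS
            self.key_sum = [0] * NUM_CELLS
            self.chk_sum = [0] * NUM_CELLS

        def insert(self, key):
            for j in _indexes(key):
                self.count[j] += 1
                self.key_sum[j] ^= key
                self.chk_sum[j] ^= _checksum(key)

        def subtract(self, other):
            # Cell-wise difference of two summaries.
            d = IBLT()
            for j in range(NUM_CELLS):
                d.count[j] = self.count[j] - other.count[j]
                d.key_sum[j] = self.key_sum[j] ^ other.key_sum[j]
                d.chk_sum[j] = self.chk_sum[j] ^ other.chk_sum[j]
            return d

        def peel(self):
            # Extract keys from "pure" cells (count == +/-1, checksum matches),
            # un-inserting each recovered key until nothing is left (success)
            # or no pure cell remains (failure: the difference was too big).
            mine, yours = set(), set()
            progress = True
            while progress:
                progress = False
                for j in range(NUM_CELLS):
                    if abs(self.count[j]) == 1 and \
                            self.chk_sum[j] == _checksum(self.key_sum[j]):
                        key, sign = self.key_sum[j], self.count[j]
                        (mine if sign == 1 else yours).add(key)
                        for k in _indexes(key):
                            self.count[k] -= sign
                            self.key_sum[k] ^= key
                            self.chk_sum[k] ^= _checksum(key)
                        progress = True
            return mine, yours

The point relevant to the question: peeling only succeeds when the two mempools already agree on almost everything, which is exactly the "consistency pressure" Greg mentions below. The structure itself is policy-neutral, though; whether a recovered difference gets treated as 'garbage' is entirely up to whoever runs it.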

2) w/o question

From my time as a mechanic, and having a life-long interest in mechanical things, I will state that for optimizing efficiency and power, tight tolerances and precision are the way to go.  The trouble is that fairly small things that go wrong (say, a partial blockage of the radiator) can cause them to seize up.  When I raced motorcycles, the high-performance ones were expected to be re-built after each race, and the performance decline if one did not do this, or if any little thing went wrong, was very noticeable, and always fatal if one was performing near the top of the field (I was not).

For rugged dependability one wants sloppy clearances, way over-designed parts, etc.  They don't run efficiently, but a lot of the time that does not matter.  The most important thing is that the performance is predictable.

Trying valiantly to achieve high-precision synchronization of every miner's mempool in nearly real-time, across a network which is (hopefully) deliberately distributed, seems like a bad idea.  Loose "blocked" tolerances in transaction sequencing, leaving construction completely up to individual miners, just somehow 'feels' to me to be the way to go from a defensibility standpoint.  (Of course I don't give two shits about 0-conf transactions and consider them a negative, so it's easier for me to feel this way.)

But don't think that I'm saying anything bad about it-- I'm not. Cypherdoc was arguing that mempools were (and had to be) the same, and cited IBLT as a reason-- but it cannot currently be a reason, because it doesn't exist.  Be careful about assigning virtue to the common-fate aspect of it-- as it can make censorship much worse. (OTOH, rusty's latest optimizations reduce the need for consistency; and my network block coding idea-- which is what inspired IBLT, but is more complex-- basically eliminates consistency pressure entirely)
...

Praise God!


legendary
Activity: 1764
Merit: 1002
July 07, 2015, 04:52:26 PM
On the topic of block verification times, people on Reddit are saying this block (filled with one huge TX) took up to 25 seconds to verify:
yes, they're actually quoting Pieter and me from #bitcoin-dev (telling the miner in advance that the transaction he was creating would take a _LONG_ time to verify). They created a huge non-standard 1MB transaction, and part of the verification time is quadratic (in the number of inputs).

It's actually possible to create a block that would take many minutes to verify, though not with standard transactions-- only something contrived.


and these have to be self-mined, correct?
staff
Activity: 4284
Merit: 8808
July 07, 2015, 04:12:51 PM
On the topic of block verification times, people on Reddit are saying this block (filled with one huge TX) took up to 25 seconds to verify:
yes, they're actually quoting Pieter and me from #bitcoin-dev (telling the miner in advance that the transaction he was creating would take a _LONG_ time to verify). They created a huge non-standard 1MB transaction, and part of the verification time is quadratic (in the number of inputs).

It's actually possible to create a block that would take many minutes to verify, though not with standard transactions-- only something contrived.
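The quadratic blow-up is easy to see with a back-of-the-envelope calculation: under the legacy signature-hashing scheme, each input's signature commits to a serialization of (nearly) the whole transaction, so every one of n inputs re-hashes something proportional to n. A rough sketch in Python -- the sizes are approximations, not measured values:

    # Rough model of legacy sighash cost. Each input's signature check
    # hashes a serialization of the whole transaction (with scriptSigs
    # blanked), so total bytes hashed grow ~quadratically in inputs.
    INPUT_SIZE = 41   # approx. bytes per input in the sighash serialization
    OVERHEAD = 60     # approx. version, counts, outputs, locktime

    def bytes_hashed(n_inputs):
        tx_size = OVERHEAD + n_inputs * INPUT_SIZE
        return n_inputs * tx_size  # one near-full-tx hash per input

    for n in (100, 1000, 5500):
        print("%5d inputs -> ~%.1f MB hashed" % (n, bytes_hashed(n) / 1e6))
    # 100 inputs:  ~0.4 MB; 5500 inputs: ~1240 MB.  55x the inputs,
    # roughly 3000x the hashing -- which is how a single contrived ~1 MB
    # transaction can get into the tens of seconds to verify.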
legendary
Activity: 1078
Merit: 1006
100 satoshis -> ISO code
July 07, 2015, 04:02:08 PM
Quote from: Luke-Jr
Eligius does not do any SPV mining. Empty blocks are generated only after the previous block has been fully verified, but before the next block's transaction set has been calculated.


It may come down to how you define SPV mining.  I guess he is saying they try to mine on top of valid blocks and not empty ones.  But if you get lucky, you may have empty blocks in a row?

Would that be SPV mining?

Let's call it "empty block mining" instead.  He's right that it's not strictly SPV mining if you've indeed verified the previous block, but I think people are interested in the behaviour of Miners in the time between when a miner could begin hashing on an empty block, and when the hashers are actually working on a new non-empty block.  

So then there's:

   (1) empty block mining (previous block verified but new transaction set not built).
   (2) empty block mining (previous block not verified).  

EDIT: it would be nice to make a diagram to visualize all the steps that take place in the mining process (a rough timeline sketch follows below).

Tier Nolan has provided a logical explanation of how SPV mining should be done, which of course is just part of the end-to-end process to cut over to a new working block.
https://bitcointalksearch.org/topic/m.11799081
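Re the EDIT above, here is a rough timeline sketch of the window in question (the t0-t3 labels and the placement of the two cases are one framing of the posts here, not anyone's measurements):

    t0: previous block's header first seen
    t1: previous block fully downloaded and verified
    t2: new non-empty template built (CreateNewBlock)
    t3: hashers actually switched onto the non-empty work

    t0 ----------------- t1 ----------------- t2 ----------------- t3
     |<-- (2) empty,     |<-- (1) empty,      |<-- work-update      |
     |  previous block   |  verified, but no  |  latency down to    |
     |  unverified       |  transaction set   |  the hardware       |
     |  ("SPV mining")   |  built yet         |                     |

The 15 sec / 30 sec empirical estimates span t0 to t3; the counter-argument further down the page is that t0-t2 is milliseconds on a healthy node, leaving most of the gap in t2-t3.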
legendary
Activity: 3836
Merit: 4969
Doomed to see the future and unable to prevent it
July 07, 2015, 03:47:24 PM
Really off topic, but I so rarely can add to the discussions. Tongue

Actually, in order to address memory outside the address range of the CPU bus, a "page swap" method was used which had been used on mainframes for many years. This was called expanded memory and came in 16k chunks, which was very slow. With the 286 line, extended memory was introduced, and the CPU had to go into extended mode in order to access it. That is, of course, if my memory serves me.
You should maybe serve your memory some fish, so that it could serve you better. Tongue

You've conflated several related concepts:

https://en.wikipedia.org/wiki/Extended_memory
https://en.wikipedia.org/wiki/Expanded_memory
https://en.wikipedia.org/wiki/Protected_mode

OFF topic still
Yeah, those links verify what I said.

https://en.wikipedia.org/wiki/CDC_6600 and search for "ECS" (Extended Core Storage)

Was this the first page-swapping mainframe?
legendary
Activity: 2128
Merit: 1073
July 07, 2015, 03:08:37 PM
Really off topic, but I so rarely can add to the discussions. Tongue

Actually, in order to address memory outside the address range of the CPU bus, a "page swap" method was used which had been used on mainframes for many years. This was called expanded memory and came in 16k chunks, which was very slow. With the 286 line, extended memory was introduced, and the CPU had to go into extended mode in order to access it. That is, of course, if my memory serves me.
You should maybe serve your memory some fish, so that it could serve you better. Tongue

You've conflated several related concepts:

https://en.wikipedia.org/wiki/Extended_memory
https://en.wikipedia.org/wiki/Expanded_memory
https://en.wikipedia.org/wiki/Protected_mode
https://en.wikipedia.org/wiki/CDC_6600 and search for "ECS" (Extended Core Storage)
legendary
Activity: 1764
Merit: 1002
July 07, 2015, 03:06:27 PM
If we talk about fluctuations in the price of gold, those have always happened in the economy. What hasn't happened is people who are comfortable with gold investing their bucks into equity or Bitcoin, for that matter. Gold is the safe, secure return and supports the traditional mindset. That needs to change.

until or unless Bitcoin can manage to get into the hands of ppl worldwide, esp. that African kid mining for gold, it won't become digital gold.
full member
Activity: 182
Merit: 100
July 07, 2015, 02:47:19 PM
If we talk about fluctuations in the price of gold, those have always happened in the economy. What hasn't happened is people who are comfortable with gold investing their bucks into equity or Bitcoin, for that matter. Gold is the safe, secure return and supports the traditional mindset. That needs to change.
pa
hero member
Activity: 528
Merit: 501
legendary
Activity: 1764
Merit: 1002
July 07, 2015, 02:28:31 PM
Cypherdoc, what are the latest updates regarding the block size debate?

i'm not aware of any progress really.  you can follow the bitcoin-dev mailing list for the details, but everything i've seen there hasn't shown much progress.

maybe someone else who is really following it closely can comment.
legendary
Activity: 1764
Merit: 1002
July 07, 2015, 02:20:35 PM
as you know, even Gavin talks about this memory problem from UTXO.  and yes, i read the Reddit thread that resulted, in which you participated, and i'm aware that UTXO can be dynamically cached according to needs.
http://gavinandresen.ninja/utxo-uhoh

Gavin was insufficiently precise. There is a reddit thread full of people calling Gavin a fool (Sad) for saying "memory" when he should have been saying fast storage.  https://twitter.com/petertoddbtc/status/596710423094788097

Why do you think it's prudent to argue this with me?

Okay, let's take a bet. Since you're so confident, surely you'll grant me 1000:1 odds?-- I'll give my side away to a public cause.

The question is "Is the entire UTXO set kept in RAM in any Bitcoin Core ever released?"

I will bet 3 BTC and, with the 1000:1 odds, if you lose you'll pay 3000 BTC (which I will give to the hashfast liquidators, to return it to the forum members it was taken from; which will also save you some money in the ongoing lawsuit against you).

Sounds good?  How will we adjudicate?  If not, what is your counter-offer for the terms?

Let me chime in here quickly, because I think Greg and I are talking about slightly different things.  My model was considering the time between the first moment that a pool could begin hashing on a blockheader, and when the previous block had been processed, a new non-empty block template constructed, and the hashers re-assigned to work on this non-empty block.

It looks like this time, empirically, is 15 sec (F2Pool) and 30 sec (AntPool), based on these estimates.  

Here I suspect you're suffering from an excess of empiricism without adequately delving into the mechanism.  You can directly measure the time from input to minable on an actual node under your control, and will observe the time is hundreds of times faster than your estimate. Why?  Miners don't magically know when their pool has new work; they'll get work in the first milliseconds and then grind on it some time before returning work.  Even if the pool long-polls them, it takes time to replace work. So what I suspect you're actually measuring there is the latency of the mining process... which is consistent with what we've experienced with P2Pool (5-20 second latencies from ASIC miners are common).
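The direct measurement described here can be scripted against any node you control; a sketch along these lines (it assumes bitcoin-cli is on your PATH with working RPC credentials, and that getblocktemplate is the stock BIP 22 call, which on a 2015-era node takes no arguments):

    # Poll the local node; when the tip changes, time how long it takes
    # to get a fresh (non-empty) block template. This measures the node
    # ("input to minable"); pool and ASIC work-update latency comes on top.
    import json, subprocess, time

    def cli(*args):
        return subprocess.check_output(("bitcoin-cli",) + args).decode().strip()

    tip = cli("getbestblockhash")
    while True:
        new_tip = cli("getbestblockhash")
        if new_tip != tip:
            t0 = time.time()
            tmpl = json.loads(cli("getblocktemplate"))
            dt = time.time() - t0
            print("new tip %s... -> template with %d txs in %.3f s"
                  % (new_tip[:12], len(tmpl["transactions"]), dt))
            tip = new_tip
        time.sleep(0.05)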

I noted you posted a result of a classification; did you run the same data through a simple logistic regression with prior size as the treatment? The intercept in the model would be interesting.
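For what it's worth, the suggested regression is a few lines with statsmodels; the data below is synthetic, just to show the shape of the analysis -- the real per-block observations would be substituted in:

    # Logistic regression: P(next block empty) vs size of the prior block.
    # Synthetic data only -- replace with the real per-block observations.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    prior_kb = rng.uniform(0, 1000, 2000)             # size of block N-1, kB
    p = 1 / (1 + np.exp(-(-3.0 + 0.004 * prior_kb)))  # assumed true relation
    empty = rng.binomial(1, p)                        # 1 if block N had 0 txs

    fit = sm.Logit(empty, sm.add_constant(prior_kb)).fit()
    print(fit.summary())
    # The intercept is the log-odds of an empty block following a ~0 kB
    # block. Verification time should scale with prior size, so a large
    # intercept points at fixed delays (template build, work push) rather
    # than verification of the previous block.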

But indeed, these conversations have been conflating several separate issues (latency vs throughput, etc.). Tricky to avoid that since they're all relevant.

but you haven't verified that f2pool or Antpool have increased their minrelaytxfee to minimize their mempools, have you?
I have; they'd previously cranked it down, were producing small blocks, and were flamed in public.  They've since turned it back up.

Quote
remember, this whole mempool discussion was based off you responding to Peter's mathematics post the other day, where you argued that block verification times were only 80 ms for a 250 kB block b/c tx's had been pre-verified after being passed around to all nodes across the network and didn't require re-verification by miners on the relay network, which you offered as a refutation of his hypothesis of increasing block verification times (16-37 sec on avg) leading to SPV mining.
As PeterR points out, they only need to wait for verification to actually verify (which they're not doing today), though they may have to wait longer to include transactions-- though I point out that's not fundamental: e.g., no matter how big the backlog is, you can produce a template sufficient to completely fill a block while doing no more work than handling a mempool of twice the maximum block size (by using a tiered mempool, though no one has bothered to implement this yet-- no one has even been complaining about how long createnewblock takes, due to the ability to produce empty blocks without skipping transactions).
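For concreteness, a sketch of how such a tiered mempool could bound template-construction work. The structure and names here are a guess at the idea; as stated above, nothing like this has actually been implemented:

    # Keep only the best ~2x MAX_BLOCK bytes of txs (by feerate) in a hot
    # tier that template construction ever looks at; everything else
    # overflows to a cold tier. Work per template is bounded by the hot
    # tier, not by the total backlog.
    import heapq

    MAX_BLOCK = 1_000_000        # bytes
    HOT_LIMIT = 2 * MAX_BLOCK    # hot tier never exceeds two blocks' worth

    class TieredMempool:
        def __init__(self):
            self.hot = []      # min-heap of (feerate, txid): worst on top
            self.meta = {}     # txid -> (feerate, size) for hot-tier txs
            self.hot_bytes = 0
            self.cold = {}     # overflow tier; only consulted if hot drains

        def add(self, txid, fee, size):
            feerate = fee / size
            heapq.heappush(self.hot, (feerate, txid))
            self.meta[txid] = (feerate, size)
            self.hot_bytes += size
            # Evict worst-feerate txs once past the cap, so work stays
            # bounded no matter how large the total backlog grows.
            while self.hot_bytes > HOT_LIMIT:
                rate, worst = heapq.heappop(self.hot)
                self.cold[worst] = (rate, self.meta.pop(worst)[1])
                self.hot_bytes -= self.cold[worst][1]

        def build_template(self):
            # CreateNewBlock analogue: scan only the hot tier, best
            # feerate first. (Ignores tx dependencies/chains for brevity.)
            block, used = [], 0
            for feerate, txid in sorted(self.hot, reverse=True):
                size = self.meta[txid][1]
                if used + size <= MAX_BLOCK:
                    block.append(txid)
                    used += size
            return block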


well, you did claim some of it was in RAM; https://www.reddit.com/r/Bitcoin/comments/35asg6/gavin_andresen_utxo_uhoh/cr2za45
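For reference on what the bet turns on: Bitcoin Core keeps the chainstate (the UTXO set) in LevelDB on disk, with a bounded in-memory cache controlled by -dbcache. A quick way to compare the set against your cache limit on your own node -- a sketch; RPC field names vary between versions:

    import json, subprocess

    info = json.loads(subprocess.check_output(
        ["bitcoin-cli", "gettxoutsetinfo"]))
    print("utxo entries:    %d" % info["txouts"])
    print("serialized size: %.1f MB" % (info["bytes_serialized"] / 1e6))
    # Compare against the cache you start the node with, e.g.:
    #   bitcoind -dbcache=100
    # If the serialized set exceeds the cache limit, the whole set cannot
    # be resident in RAM at once -- only the working set is.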
legendary
Activity: 1386
Merit: 1027
Permabull Bitcoin Investor
July 07, 2015, 02:17:30 PM
Cypherdoc, what are the latest updates regarding the block size debate?
legendary
Activity: 1162
Merit: 1007
July 07, 2015, 02:09:38 PM
On the topic of block verification times, people on Reddit are saying this block (filled with one huge TX) took up to 25 seconds to verify:

https://www.reddit.com/r/Bitcoin/comments/3cgft7/largest_transaction_ever_mined_999657_kb_consumes/
legendary
Activity: 3836
Merit: 4969
Doomed to see the future and unable to prevent it
July 07, 2015, 01:53:44 PM
Whether 1MB is ideal or not, it's what we have.

As it happens, 1MB seems to have been at least quite fortuitous for us, and I wonder if it was not somewhat well considered when Satoshi made the setting, as opposed to the perception promulgated by some that he pulled a random number out of his ass.

In retrospect, 1MB seems like a pretty ideal setting for the past history of Bitcoin and some distance into the future.  To me.

Exactly this.  Intentionally or not, 1MB turned out to be a serendipitous choice.  Now it has ossified and is ready for the next layers to be built on its solid foundation.

I favor Adam Backamoto's extension block proposal.

The 1MB blocksize limit reminds me of the old 640k limit in DOS.

Rather than destroy Windows' interoperability with the rich/valuable legacy of 8088-based software, that limit was extended via various hacks, er, sublime software engineering.

Before resorting to the nuclear option of a contentious hard fork, we should attempt to achieve the desired result with soft forks.

Really off topic, but I so rarely can add to the discussions. Tongue

Actually, in order to address memory outside the address range of the CPU bus, a "page swap" method was used which had been used on mainframes for many years. This was called expanded memory and came in 16k chunks, which was very slow. With the 286 line, extended memory was introduced, and the CPU had to go into extended mode in order to access it. That is, of course, if my memory serves me.
legendary
Activity: 1135
Merit: 1166
July 07, 2015, 01:46:05 PM
there's a real battle going on at Bitstamp.  aren't they close to Greece?  i wouldn't be too pessimistic.

They are in Slovenia.  Not really that close to Greece from a European point of view.  I never hear talk about Slovenia (which is part of the Eurozone), so it is probably at least better off than Italy and Spain.
legendary
Activity: 1036
Merit: 1000
July 07, 2015, 01:17:14 PM
From the devs' perspective, they have to deal with a lot of non-technical people, so dismissing arguments based on a technicality becomes a go-to tactic. They might justify it based on the fact that the person is clearly completely off-base and has nothing useful to say. This tactic is easier than explaining why.

The problem comes when they use that to justify dismissing someone even when they can't really be sure if they're completely off-base. It becomes a kind of last-ditch "panic button" for when they want to avoid addressing something.
legendary
Activity: 1162
Merit: 1007
July 07, 2015, 11:53:55 AM
Quote from: Luke-Jr
Eligius does not do any SPV mining. Empty blocks are generated only after the previous block has been fully verified, but before the next block's transaction set has been calculated.


It may come down to how you define SPV mining.  I guess he is saying they try to mine on top of valid blocks and not empty ones.  But if you get lucky, you may have empty blocks in a row?

Would that be SPV mining?

Let's call it "empty block mining" instead.  He's right that it's not strictly SPV mining if you've indeed verified the previous block, but I think people are interested in the behaviour of miners in the time between when a miner could begin hashing on an empty block and when the hashers are actually working on a new non-empty block.

So then there's:

   (1) empty block mining (previous block verified but new transaction set not built).
   (2) empty block mining (previous block not verified).  

EDIT: it would be nice to make a diagram to visualize all the steps that take place in the mining process.
legendary
Activity: 1764
Merit: 1002
July 07, 2015, 11:42:07 AM
I was chatting with Luke-jr last night on Reddit.  Eligius mines lots of 0-tx blocks, but he said they don't SPV mine, so it's just as Kano is saying: they are just slow to transition.

Why would a pool ever mine an empty block if they weren't SPV mining (at least for a short amount of time while they validate the previous block and before they re-assign the hashers to a new non-empty block to work on)?

Personally, I find it difficult to communicate with Luke-Jr because he will often argue based on a technicality due to his own personal definitions of some word, rather than talking about the thing everyone is actually trying to talk about.  It looks like he does this in other sub-reddits too; for example, he claims that the Pope is not the Pope.

Here is his answer to: "I've noticed Eligius mining multiple empty blocks in a row; is that because they are SPV mining too, and if you know and it's open, what is the SPV mining policy?"


Quote from: Luke-Jr
Eligius does not do any SPV mining. Empty blocks are generated only after the previous block has been fully verified, but before the next block's transaction set has been calculated.



It may come down to how you define SPV mining.  I guess he is saying they try to mine on top of valid blocks and not empty ones.  But if you get lucky, you may have empty blocks in a row?

Would that be SPV mining?

yes, the definition is important.  all these guys are trying to shave milliseconds off the time to start hashing the next block after arrival of a block.

before Luke, we had been talking about starting hashing of a 0tx blk immediately upon receipt of a "large" block (however that's defined by the pool), to attempt to save the time of validating that large blk.  once validated, they sub out the 0tx blk for a blk with tx's and resume hashing.

Luke apparently gets a blk, large or small, validates it, then routinely sends out a 0tx blk to start hashing, just b/c he wants to save the milliseconds it takes to construct the tx template for a blk with tx's.  once constructed, he resends that blk out with the tx's.
legendary
Activity: 1372
Merit: 1000
July 07, 2015, 11:35:18 AM
I was chatting with Luke-jr last night on Reddit.  Eligius mines lots of 0-tx blocks, but he said they don't SPV mine, so it's just as Kano is saying: they are just slow to transition.

Why would a pool ever mine an empty block if they weren't SPV mining (at least for a short amount of time while they validate the previous block and before they re-assign the hashers to a new non-empty block to work on)?

Personally, I find it difficult to communicate with Luke-Jr because he will often argue based on a technicality due to his own personal definitions of some word, rather than talking about the thing everyone is actually trying to talk about.  It looks like he does this in other sub-reddits too; for example, he claims that the Pope is not the Pope.

Here is his answer to: "I've noticed Eligius mining multiple empty blocks in a row; is that because they are SPV mining too, and if you know and it's open, what is the SPV mining policy?"


Quote from: Luke-Jr
Eligius does not do any SPV mining. Empty blocks are generated only after the previous block has been fully verified, but before the next block's transaction set has been calculated.



It may come down to how you define SPV mining.  I guess he is saying they try to mine on top of valid blocks and not empty ones.  But if you get lucky, you may have empty blocks in a row?

Would that be SPV mining?