
Topic: A block chain for real-time confirmations - page 2. (Read 9040 times)

legendary
Activity: 1526
Merit: 1134
I think the best way to make progress with this issue would be for somebody to look at the impact of using much faster block generation times.

10 minutes was chosen on the assumption of really large networks, really large blocks and thus potentially large propagation times. It's a tradeoff between convenience and avoiding wastage of mining effort.

But 10 minutes for block propagation is pretty huge. BGP updates propagate across the whole internet in less than one minute, as far as I can tell. And Bitcoin is nowhere near large enough to see 10-minute propagation times today. It'd probably be possible to have one-minute confirmations on today's network.

The questions to ask are:
  • Is 1 minute really better than 10? I suspect there are really only two speeds that matter: "<5 seconds" and ">5 seconds".
  • Can floating transactions be made low-risk enough that it doesn't matter if they aren't included in a block right away?

legendary
Activity: 1372
Merit: 1002
So I have to ask, what is the problem trying to be solved?

I think the problem that this system tries to solve is in-place transactions without the need of a trusted third party.
administrator
Activity: 5222
Merit: 13032
The "insurance" provided by each confirmation increases with the total network computational power. It might be useful to wait only a half-confirmation or less if the network becomes so massive that one confirmation can protect even very high-value transactions. It's not very useful now.

I could see a separate chain running with a target time of perhaps as low as one minute, but a target time of seconds will be impossible due to latency. I don't think a separate chain is the answer, though: there would be no incentive to create blocks on it, and there are probably other methods that will work even better. The network might never be large enough to support this, though, and no changes are necessary to Bitcoin now, so it's not worth working on. By the time it becomes useful, solutions for dealing with fast transaction acceptance will already have been established.
legendary
Activity: 1708
Merit: 1010
So I have to ask, what is the problem trying to be solved?

I have to agree with this sentiment.  Each merchant can set their own thresholds, requiring X confirmations for any transaction over a given amount.  For example, anything below 1 BTC isn't worth anyone's time waiting for 6 or more confirmations.  As soon as the transaction shows up in the client, it can be considered valid.  Perhaps 2-5 BTC would require 2 confirmations, 5-50 BTC would require 4, etc.  It's not worth an attacker's computing power to try to double spend 1 BTC.  Anyone making a large transaction surely will have the patience to wait an hour or so for 6 confirmations.
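Such a sliding policy is trivial to implement; a sketch using the illustrative amounts above (the thresholds are examples, not recommendations):

```python
def required_confirmations(amount_btc: float) -> int:
    """Illustrative merchant policy: more valuable payments wait for
    more confirmations.  The thresholds are the hypothetical figures
    from the discussion above, not a recommendation."""
    if amount_btc < 1:
        return 0   # accept as soon as the transaction shows up in the client
    elif amount_btc < 5:
        return 2
    elif amount_btc < 50:
        return 4
    else:
        return 6   # roughly an hour at one block per ten minutes
```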

Credit card charges less than $50 are not normally even checked at a Point-of-sale, such as a gas station, within a reasonable distance of the card owner's home.  Bitcoin permits a great deal better verification than the credit card companies provide the gas station owners.

And if the vendor is an online shopping vendor, there is no good reason to not wait for 6 confirmations before releasing an order for shipping.
hero member
Activity: 726
Merit: 500
So I have to ask, what is the problem trying to be solved?

I have to agree with this sentiment.  Each merchant can set their own thresholds, requiring X confirmations for any transaction over a given amount.  For example, anything below 1 BTC isn't worth anyone's time waiting for 6 or more confirmations.  As soon as the transaction shows up in the client, it can be considered valid.  Perhaps 2-5 BTC would require 2 confirmations, 5-50 BTC would require 4, etc.  It's not worth an attacker's computing power to try to double spend 1 BTC.  Anyone making a large transaction surely will have the patience to wait an hour or so for 6 confirmations.
legendary
Activity: 1708
Merit: 1010
There are many issues with this idea, most notably the lack of miners, the continued vulnerability to the 'finney attack', and the added bloat. However, this has inspired me. My suggestion is an optional extension to the protocol, which may or may not require a protocol switch if extensibility wasn't initially added to the block headers. In short: my idea is to add a parallel block chain to the existing one, but with combined headers.


There are some serious issues with such an idea, not the least of which is network latency, which is the main reason that the target interval is ten minutes to start with.  Another is the complexity added to the protocol.  And then I can't even see how it's necessary.  Transactions complete in milliseconds; it's the confirmations that take ten minutes, and that is still far faster than just about any other online payment method.  Credit card transactions are, likewise, nearly instant, but it can take up to 60 days before they are final.  So I have to ask, what is the problem trying to be solved?
hero member
Activity: 714
Merit: 500
I think I see what you're saying. It would be analogous to keeping a delta index in something like Lucene, right?
legendary
Activity: 1204
Merit: 1015
There are many issues with this idea, most notably the lack of miners, the continued vulnerability to the 'finney attack', and the added bloat. However, this has inspired me. My suggestion is an optional extension to the protocol, which may or may not require a protocol switch if extensibility wasn't initially added to the block headers. In short: my idea is to add a parallel block chain to the existing one, but with combined headers.

First off, we need to make sure that the "trading block" (hereafter referred to as "hyperchain") gets mined. One of the best ways to do this is to take a leaf from the mining pool book and make it so that the hyperchain is formed from low-difficulty full blocks, possibly with a target time of somewhere between 5 and 10 seconds. The only difference between normal blocks and hyperchain-enabled blocks would be that hyperchain blocks would add a "previous hyperblock" field to the existing block headers.

When I said that hyperblocks would just be low-difficulty blocks, I meant it. Transactions would be added to the block as they are now, and the transactions in the block would not be removed when a hyperblock is formed, as if hyperblocks never existed. The only difference is that hyperblocks would be transmitted to hyperchain-enabled clients at lower difficulties than normal blocks. As with any other block-chain, when new hyperchain blocks come in, regardless of whether it's a full block with a hyperchain header or just a regular hyperblock, the "previous hyperblock" field would be updated to reflect the end of the longest hyperblock chain, relative to the last full block with a hyperchain header. Hyperchain conflicts would be resolved much like any other block-chain.
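In other words, a hyperchain-enabled header is just an ordinary header plus one extra field. A rough Python sketch (the field names and structure here are my own invention, not any client's):

```python
from dataclasses import dataclass

@dataclass
class BlockHeader:
    # Simplified stand-in for the real 80-byte header.
    prev_block: str    # hash of the previous full block
    merkle_root: str   # root of the transaction Merkle tree
    nonce: int

@dataclass
class HyperBlockHeader(BlockHeader):
    # The single proposed addition: a commitment to the tip
    # of the hyperblock chain.
    prev_hyperblock: str

def meets_target(block_hash: int, target: int) -> bool:
    """Proof-of-work check: a hash at or below the target is valid.
    A hyperblock just uses a much larger (easier) target than a full
    block, so every full block is automatically a valid hyperblock."""
    return block_hash <= target
```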

It is my impression that the main reasons that we have the difficulty target so high is because of the bandwidth and storage that would otherwise be required. This most certainly will be high-bandwidth, but only people who can handle it would enable it. As for storage, purge old hyperblocks the same way that the transaction cache is purged right now.

Finally, here comes the magic of the hyperchain:
The hyperchain would be what is used to resolve double-spends. Once a coin is spent and confirmed on the hyperchain, it cannot be re-spent on the block-chain. For this to work, most miners (>50% computational power) would need to have a hyperchain-aware client, although the hyperchain feature need not be enabled. For hyperchain-aware clients that don't have the hyperchain feature enabled, upon receiving a full block containing a coin already thought to be spent according to the client's transaction cache (or even just getting a block that contains a transaction that hasn't been received normally), the client would go back and download the relevant portion of the hyperchain, much like the future "simple" clients would do for the normal block-chain, to resolve the double-spend. If the full block is in conflict with the hyperchain, the full block is rejected (unless, of course, the majority overrule and continue that block-chain).

At this point, the 'finney attack' is only effective against the hyperchain. However, the hyperchain would move so quickly that the merchant would be able to wait for the first hyper-confirmation.

Speaking of confirmations, the confirmation number for a transaction on the hyperchain should be represented as the greatest fraction of normal difficulty among the hyperblocks seen after the transaction entered the hyperchain. For example, if a transaction has been seen in a hyperblock of .5 normal difficulty, and a hyperblock of .6 normal difficulty came, its number of confirmations would show ".60". As for transactions that were chained behind a full block that also had hyperblock headers but weren't actually confirmed in the block, show something like ".99 confirmations".
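That display rule is simple to state in code; a tiny sketch (the function name and representation are hypothetical):

```python
def hyper_confirmations(difficulty_fractions):
    """Fractional confirmation count for a hyperchain transaction,
    per the rule above: report the greatest fraction of normal
    difficulty among hyperblocks mined after the transaction was
    seen.  `difficulty_fractions` lists each such hyperblock's
    difficulty as a fraction of normal block difficulty."""
    if not difficulty_fractions:
        return 0.0
    return max(difficulty_fractions)

# The example from the text: seen in a .5-difficulty hyperblock,
# then a .6-difficulty hyperblock arrives, so it shows ".60".
assert hyper_confirmations([0.5, 0.6]) == 0.6
```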

I really hope this made some sense to you guys, since I'm clearly trying to explain something way over my head.
member
Activity: 98
Merit: 20
Normal "blocks" wouldn't generate coins anymore, since there is no target difficulty.  You could still have coin-generating blocks, but they would only contain the generating transaction, and have to meet a target difficulty.  Two generating transactions in parallel streams would conflict.  See my post re: generating transactions. 

It is a somewhat different system in that it shifts the burden of proof of work from a few block-computing machines, to anyone who wishes to record a transaction. 

If I get you... In your proposed system generating normal blocks would not be computationally intensive, but generating transactions would be.
No, I believe it's the other way around.

I think the problem is you and rfugger have different definitions of "normal" :-)

I think this won't hurt scalability if the 'headers only' protocol gets implemented. I think this idea might be worth exploring a bit more - I've been thinking along the same lines myself.

The proposal, as I understand it, is to have a "trading block" and a "mining block." The mining block would be the equivalent of what we currently have: it requires proof of work to be accepted by other peers, and it generates new bitcoins out of nothing, and claims any transaction fees for transactions included in the block.

The trading block, on the other hand, would consist of nothing but transactions (most likely a single transaction from the originator). Since it does not generate any new bitcoins, there is no work involved, and therefore no proof of work is required. And, since no work is required, it is not eligible to claim any transaction fees.

Think of the new trading block as an interim, semi-official confirmation. Kind of like a temporary driver's license.

The miners would still generate mining blocks, as usual. The mining block still includes transactions, and it is the mining block that is considered the official confirmation of the transaction.

To generate a trading block, the software first checks to see if it has received any trading blocks from its peers. If so, and if the root of the trading block chain is your node's highest mining block, then it uses the trading block as its previous block, otherwise it uses the highest mining block. It then creates the trading block and broadcasts it to connected nodes.
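The selection rule in the paragraph above might look something like this in Python (the block representations and field names are made up purely for illustration):

```python
def choose_previous(highest_mining_block, received_trading_blocks):
    """Pick the block a new trading block should extend, per the rule
    above: prefer the tip of a trading chain rooted at our highest
    mining block; otherwise fall back to the mining block itself.
    Blocks are modelled as plain dicts for the sketch."""
    candidates = [tb for tb in received_trading_blocks
                  if tb["root_mining_block"] == highest_mining_block["hash"]]
    if candidates:
        # Extend the deepest trading chain we know of.
        return max(candidates, key=lambda tb: tb["depth"])
    return highest_mining_block
```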

Sooner or later - most likely well before the next mining block appears - another peer will generate a transaction, and append its trading block to the one your node just sent out. When you receive that trading block, you have a confirmation - your temporary driver's license.

Multiple branches off the same mining block will occur, but that's OK because they won't (shouldn't) last more than 20 or 30 minutes. But, we'd have to figure out a way to encourage peers to use existing trading chains, or combine multiple trading block chains of depth 1 into a single chain.

Now, if yet another peer happens to receive the second trading block before it receives yours, then it broadcasts a request for your trading block and/or transaction.

As today, miners put incoming transactions in their memory pool, gather them into a block, then start working on the block, giving preference to transactions in trading blocks, and claiming any transaction fees associated with the transactions.

When nodes receive a new mining block, then they discard trading blocks where the transaction is in the mining block.

The incentive to use this system is to get quicker confirmation of transactions.

There are still a lot of gaps, and I'm sure there are problems I'm overlooking, but as I say, I think this is worth thinking through.
newbie
Activity: 27
Merit: 0
If I get you... In your proposed system generating normal blocks would not be computationally intensive, but generating transactions would be.
Isn't this going the wrong way? This places the weight of the system on merchants with the rest of the market participants simply acting as consumers.

Perhaps, but merchants will always pass along the cost of the system to their customers...

Quote
And what's to keep a couple of nodes from making dummy purchases from each other simply to create coin? That sounds more like banking as it exists today IMO.

You can't generate coins by making a dummy purchase.  You have to hold a coin to make normal transaction.  You would only be able to generate coins by creating a coin-generating block, which would require the same target difficulty as the current system.
newbie
Activity: 27
Merit: 0
0.01 of a confirmation on your network is actually worse than 0 confirmations on the Bitcoin network. With 0 confirmations you are protected somewhat by the TCP-level network, but anyone can reverse a transaction with 0.01 confirmation, overruling the TCP-level network.

Yes, as a payment recipient you would still have to wait until the cost of creating a conflicting parallel stream with a greater proof of work than your stream was greater than the value of the payment.  But for small payments, that might not be very long at all.  Right now, a 10-minute block is worth somewhere in the neighbourhood of $45.
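To put rough numbers on that waiting rule: assuming the ~$45-per-block figure above (50 BTC reward at roughly $0.90/BTC, era-specific assumptions), a sketch of how many blocks' worth of conflicting work a payment must be buried under:

```python
import math

def blocks_to_outweigh(payment_usd, block_reward_btc=50, btc_usd=0.90):
    """Roughly how many blocks' worth of honest work an attacker must
    redo before forging a conflicting parallel stream costs more than
    the payment is worth.  The 50 BTC / $0.90 defaults match the
    ~$45-per-block estimate above and are assumptions, not facts."""
    block_value = block_reward_btc * btc_usd  # ~$45
    return max(1, math.ceil(payment_usd / block_value))

# A $5 payment needs only one block's worth of buried work;
# a $450 payment needs about ten.
```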

It's a lot more data that lightweight clients will have to download. There's an 80-byte header per block that clients need. If this was required for every transaction, then lightweight clients would have to download about 17MB more data currently. This will become a lot more significant as the number of transactions per block increases. Also, if you have multiple "previous block" hashes in each transaction, you'll need headers that are much larger than 80 bytes. Normal clients will quickly lose the ability to send transactions.

Ah, I was only considering full clients.  Fair point.  With my proposal, if you didn't have enough bandwidth to receive all the transactions, you wouldn't be able to verify the proof of work in the chain.

There is no independent rate of block creation or target difficulty anymore.  Just one block per transaction, whenever a transaction happens, with whatever proof of work the submitter can generate.  So "blocks" will get generated continually.  ("Block" isn't a suitable word anymore, because it implies a grouping of transactions, which isn't the case here -- it makes more sense to just say "transaction".)

OK then, how do you control the rate of currency inflation? Seems as though you are talking about a fundamentally different system that uses some of the same tools.

Normal "blocks" wouldn't generate coins anymore, since there is no target difficulty.  You could still have coin-generating blocks, but they would only contain the generating transaction, and have to meet a target difficulty.  Two generating transactions in parallel streams would conflict.  See my post re: generating transactions. 

It is a somewhat different system in that it shifts the burden of proof of work from a few block-computing machines, to anyone who wishes to record a transaction. 
administrator
Activity: 5222
Merit: 13032
I would argue that it would be valuable to have finer-grained confirmations for the sake of speed.  In a point-of-sale situation, for example, being able to confirm a small transaction quickly is very important.

0.01 of a confirmation on your network is actually worse than 0 confirmations on the Bitcoin network. With 0 confirmations you are protected somewhat by the TCP-level network, but anyone can reverse a transaction with 0.01 confirmation, overruling the TCP-level network.

Can you elaborate on how it hurts scalability?  Isn't it all the same data being passed around?

It's a lot more data that lightweight clients will have to download. There's an 80-byte header per block that clients need. If this was required for every transaction, then lightweight clients would have to download about 17MB more data currently. This will become a lot more significant as the number of transactions per block increases. Also, if you have multiple "previous block" hashes in each transaction, you'll need headers that are much larger than 80 bytes. Normal clients will quickly lose the ability to send transactions.
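The arithmetic behind the 17MB figure is straightforward; assuming roughly 210,000 transactions at the time (my estimate, consistent with the number quoted):

```python
HEADER_SIZE = 80  # bytes per block header, as noted above

def per_transaction_header_cost(num_transactions, header_size=HEADER_SIZE):
    """Data a lightweight client must download if every transaction
    carries its own header instead of one header per block.  The
    transaction count plugged in below is an assumed round figure."""
    return num_transactions * header_size

# ~210,000 transactions x 80 bytes = 16.8 MB, i.e. about 17 MB.
assert per_transaction_header_cost(210_000) == 16_800_000
```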
newbie
Activity: 27
Merit: 0
Further thoughts:

Anyone would be able to add proofs of work onto this kind of chain, continually making it stronger, by submitting zero-value transactions.  Peers who recently submitted transactions would be motivated to do this to ensure their transactions get sufficiently buried.  They could even pay others to do this for them if they wanted, which would be just like transaction fees in the current system.

You could still have coin-generating transaction blocks with a set target difficulty -- they would just include the one coin-generating transaction, though, but would need to have a reasonably current set of predecessors to be accepted.
newbie
Activity: 27
Merit: 0
That doesn't help much. You need to wait until your transaction is buried deep enough in the chain for an attacker with less than 50% of the network's computational power to be unable to reverse it. If there are many smaller, easier blocks, you'll just have to wait 300 confirmations (or whatever) until that target is reached. In other words, you always must wait for the network to do a lot of work after your transaction.

It would provide more fineness in desired confirmations; you could accept transactions with 2.5 Bitcoin-equivalent confirmations. But it's bad for scalability, and the increased fineness isn't valuable, IMO.

Good point.

I would argue that it would be valuable to have finer-grained confirmations for the sake of speed.  In a point-of-sale situation, for example, being able to confirm a small transaction quickly is very important.

Can you elaborate on how it hurts scalability?  Isn't it all the same data being passed around?

Thanks.
newbie
Activity: 27
Merit: 0
If you want a real-time confirmation, patch the client to check whether the sender really has the coins (it may already do this). Once that's done, you don't really need to wait for inclusion in 6 blocks, or even in 1 block (though a transaction can currently be ignored by all blocks and forgotten, or may depend on another transaction not yet included, so 1 block may be a minimum for security)

Right, you should wait for at least one block to have pretty good reassurance.  That's a few minutes on average, not enough for point-of-sale, for example.

Wouldn't this be the same as having only one transaction per block?

Yes, with the addition of the capability of merging parallel streams.

How that merge would be made?

Instead of including the hash of a single predecessor block, you would include the hashes of all predecessor blocks.  For the proof of work, you would hash the root of a Merkle tree of all the predecessors, just like you do to include multiple transactions in blocks now. 
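That's the same pairwise construction Bitcoin already uses over transactions; a minimal sketch applied to predecessor-block hashes:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Double SHA-256, as Bitcoin uses for its Merkle trees."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(hashes):
    """Merkle root over a list of predecessor hashes, built the same
    way Bitcoin combines transactions into a block: hash adjacent
    pairs repeatedly, duplicating the last entry on odd levels."""
    if not hashes:
        raise ValueError("need at least one predecessor")
    level = list(hashes)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last on an odd count
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```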

But if that were true then transaction volume would be pegged to the rate of block creation. The only way to increase circulation would be to have bigger transactions... Won't work.

There is no independent rate of block creation or target difficulty anymore.  Just one block per transaction, whenever a transaction happens, with whatever proof of work the submitter can generate.  So "blocks" will get generated continually.  ("Block" isn't a suitable word anymore, because it implies a grouping of transactions, which isn't the case here -- it makes more sense to just say "transaction".)
administrator
Activity: 5222
Merit: 13032
That doesn't help much. You need to wait until your transaction is buried deep enough in the chain for an attacker with less than 50% of the network's computational power to be unable to reverse it. If there are many smaller, easier blocks, you'll just have to wait 300 confirmations (or whatever) until that target is reached. In other words, you always must wait for the network to do a lot of work after your transaction.

It would provide more fineness in desired confirmations; you could accept transactions with 2.5 Bitcoin-equivalent confirmations. But it's bad for scalability, and the increased fineness isn't valuable, IMO.
legendary
Activity: 1372
Merit: 1002

Each transaction includes its own proof-of-work.  If there are two parallel streams containing incompatible transactions, the stream with the greatest cumulative proof-of-work in its history becomes valid, and the other is dropped.

Ryan

Wouldn't this be the same as having only one transaction per block?

Yes. That's the way he reduces the delay in confirming transactions. There's less time between blocks/transactions.

My idea is to have transactions added to the block chain individually, rather than in blocks.  What would happen is that multiple transactions would simultaneously be added to the head of the chain, which ought to be fine as long as they are all mutually compatible.  All that's needed is for further transactions to be able to build on multiple compatible transaction streams and merge them, rather than just picking one.  It's possible that a transaction gets merged into several different streams, but that actually helps its odds should it ever end up in a stream that ends up getting rejected.

How that merge would be made?
hero member
Activity: 540
Merit: 500
It's a security protection.

If you want a real-time confirmation, patch the client to check whether the sender really has the coins (it may already do this). Once that's done, you don't really need to wait for inclusion in 6 blocks, or even in 1 block (though a transaction can currently be ignored by all blocks and forgotten, or may depend on another transaction not yet included, so 1 block may be a minimum for security)
newbie
Activity: 27
Merit: 0
I think the block chain is the most brilliant, elegant way I've seen for independent machines to form a consensus.  My main issue with it is the delay in confirming transactions.

My idea is to have transactions added to the block chain individually, rather than in blocks.  What would happen is that multiple transactions would simultaneously be added to the head of the chain, which ought to be fine as long as they are all mutually compatible.  All that's needed is for further transactions to be able to build on multiple compatible transaction streams and merge them, rather than just picking one.  It's possible that a transaction gets merged into several different streams, but that actually helps its odds should it ever end up in a stream that ends up getting rejected.

Each transaction includes its own proof-of-work.  If there are two parallel streams containing incompatible transactions, the stream with the greatest cumulative proof-of-work in its history becomes valid, and the other is dropped.  Therefore peers are motivated to build on as many valid transactions as they know, to decrease the odds of their stream being dropped, meaning they will want to merge all the compatible parallel streams they know of in order to maximize the proof-of-work in their new stream.  Peers will also be motivated to include as much proof-of-work as they can in their own transactions, for the same reason.
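The selection rule is simple to state: sum the work in each stream's history and keep the maximum. A toy sketch, representing each stream as a list of per-transaction work values (a made-up representation, just to show the rule):

```python
def pick_valid_stream(streams):
    """Resolve incompatible parallel streams by the rule above: the
    stream whose history carries the greatest cumulative proof-of-work
    becomes valid, and the others are dropped."""
    return max(streams, key=lambda stream: sum(stream))

# Two conflicting streams: the second carries more total work (6 vs 4)
# and so becomes the valid one.
assert pick_valid_stream([[3, 1], [2, 4]]) == [2, 4]
```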

One benefit of this arrangement is that it doesn't need transaction fees or any other reward to motivate third-party block computations.

Thoughts?

Ryan