
Topic: Removing incentive to mine in pools

sr. member
Activity: 252
Merit: 250
January 19, 2014, 06:53:10 AM
#27
I'm still reading, I just don't think this conversation is progressing with just the two of us.
newbie
Activity: 27
Merit: 2
January 18, 2014, 10:28:34 AM
#26
You're kidding me right? You can't read through a few paragraphs of text?

Sounds like you actually aren't grasping my points. But that's OK; nothing we've been discussing is related to the original topic anyway (which was removing the incentive to pool). I don't know why you didn't just start a new thread with this idea.

Anyway, it still sounds like you're relying on people being able to verify that the block they received came from a "long way away" from them, or a "long way away" from the last block's origin.

There are three ways I can think of to do that:
1. Have everyone on the network ID-able (maybe with some way of signing blocks, which requires trust in signatures built over time, providing incentive to pool so you have consistent nodes).
2. Provide accurate time-stamps of when the block was generated, plus a uniform-bandwidth network, so you know the velocity of information over the network.
3. Have everyone hold a complete map of the network, so they know how many hops away the block was generated.

So assuming we're able to use some way to find out accurately how old a block is or how far away it was generated:
What if blocks are released at the same time at opposite ends of the network? The bias of the clients on *both* sides of the network would be to accept the block that came from the other side of the network, and you would end up with the block just propagating in exactly the same way it does now, just on the other side of the network. We'd still have clients deciding on which block to take based on wider acceptance, only it wouldn't originate from where the block was generated.
And still how does this make it harder to develop multiple blocks in a row? I develop a block, it's released, the other side of the network picks it up. I develop another block, it's released, the other side of the network picks it up because it still looks "far away". Just because it's not picked up locally doesn't mean the network doesn't have to fight over it in the same way. I see no mechanism making it harder for someone to release multiple blocks in a row without IDs on the network, which I think would be a terrible idea.
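Here's a toy sketch of what I mean (entirely hypothetical - a ring network with a "prefer the more distant origin" rule, nothing from any real protocol):

Code:
# Toy model: N nodes on a ring; blocks A and B are released at the
# same moment from opposite ends. Each node adopts whichever block
# originated farther away (the "prefer distant" rule).
N = 12
origin_a, origin_b = 0, N // 2

def hops(a, b):
    """Shortest hop distance between two nodes on the ring."""
    d = abs(a - b)
    return min(d, N - d)

adopted = {}
for node in range(N):
    da, db = hops(node, origin_a), hops(node, origin_b)
    adopted[node] = "A" if da > db else ("B" if db > da else "tie")

print(adopted)
# Each half of the ring adopts the block from the *other* half, with
# ties at the midpoints - the same two-way split that plain first-seen
# propagation produces, just mirrored.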

So if what you're doing is encouraging people to accept older-looking blocks, well, nice. That's pretty much exactly how the current system works - the older block is more likely to have network acceptance and therefore more likely to have another block built off of it, making it the heavier version of the block-chain. Only you don't need to throw your block across the network.

And if what you're saying is you have to only connect to nodes that are across the network from you, that seems like a bad idea too. How do you discover those nodes? From the nodes that are closer to you? Can you verify those far-away nodes are far away? Then what about scalability - not up but down. When you're first starting out, the network is small; not the whole world is involved.

Anyway, I guess if you're not reading any of my responses then these concerns will go unanswered.
sr. member
Activity: 252
Merit: 250
January 18, 2014, 09:14:37 AM
#25
To be honest, your replies are almost too long to bother reading. It's a solid wall of text.

And it doesn't sound like you understand much about the technology, AND you're actually ignoring every point I'm making.

- You don't need expensive hardware to produce accurate time-stamps. Strict time-stamping would not need to be enforced.
- Ping-ponging blocks between two linked servers would incur a massive 'local difficulty' penalty, increasing their block time to such a point that a remote node would have more of a chance of elongating the true chain.
- I'm not punishing people who don't release blocks quickly enough. I'm actually encouraging nodes to pick up and start processing more distant blocks over the ones released closer to home (i.e. penalising lower-latency connections to try to balance difficulty against connectivity). This is what you haven't seemed to grasp throughout this entire conversation.

So without trying to be too rude, I'm not going to reply to you anymore. Instead I'm going to wait until there's more than just the two of us in this thread throwing this issue back and forth.

newbie
Activity: 27
Merit: 2
January 18, 2014, 08:17:07 AM
#24
Alright, now how does the rest of the network measure that a block has been passed "a long distance"?

What's stopping me from setting up two separate-looking nodes on the same hardware and having it pass the block back and forth between those two nodes? Do you have to accumulate signatures from many different nodes to prove block validity? How does the network know when you have enough? Does it have to be signed by 51% of the network? You're opening yourself up to Sybil attacks like crazy, it seems to me.

How does the rest of the network measure the "distance" between two nodes to verify they are far enough apart? Does it measure their connectivity in the network? What if those two nodes, running on the same hardware, just choose to connect to different other nodes - one connects only to nodes around the world and one only to local nodes.

And once again, how does this provide incentive to de-pool, or remove the pool's mining power from the process of writing transaction history? The pool that is performing the 51% attack doesn't need to release the blocks in a way that looks like they're coming from that pool - just making 51% attacks harder to find. They can find a block, decide not to release it to anyone, and just shuttle it to a colluding server, saying "hey, look at this nonce I found that works if you publish it with these transactions"; then that server publishes the P-o-W and it looks like it comes from the remote server, not the pool. The shuttling doesn't have to occur over the bitcoin protocol. The remote server releases it to the network instead - to the rest of the network it just looks like that server was a little (milliseconds) slow to publish, which could be due to literally anything: perhaps the router it's behind got backed up, they have a slow internet connection, etc.

So if you then enforce that blocks have to hit the first node within a certain amount of time after their time-stamp, then:
A) You have to make sure the network has synced clocks, which is really, really much more difficult than most people imagine. Getting even second-accuracy on the clocks can be a challenge when you have to deal with competing traffic between servers (how accurate is your clock: http://time.is/ - mine was 16 seconds behind; is that an acceptable difference in time? More than enough to shuttle a block around the world for release - see the sketch below).
B) You are punishing people who have slow internet connections because they simply can't publish the blocks they find quickly enough - leading to people buying massive hardware and massive connections, and encouraging pooling.
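For what it's worth, here's a minimal sketch of how a node could measure its own clock offset (this assumes the third-party ntplib package, and pool.ntp.org is just an example server - a consensus network couldn't trust a single NTP source anyway):

Code:
# Minimal clock-offset check via NTP (requires: pip install ntplib).
import ntplib

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)

# response.offset estimates the difference, in seconds, between the
# local clock and the server's clock - my 16-second error would show
# up right here.
print("local clock offset: %+.3f s" % response.offset)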

So what if you include a signature with the timestamp, and say that that has to hash out correctly too (so you include that signature in the proof of work)? Then you're back to trying to verify identities. Which could work, over a long time - you could use something like a Fawkes signature protocol inside the bitcoin protocol that built trust in signatures over time: the more a signature is used and verified to come from the same place, the more it's trusted not to be used in multiple places. But this actually encourages the same servers to publish blocks over time; otherwise everyone is at zero trust.

So, I think that your solution is actually encouraging big/pooled mining:
1. You need expensive hardware to produce accurate time-stamps, which the lay-person can't afford.
2. You need expensive hardware and contracts to have fast internet, so that the network doesn't accuse you of purposefully holding a block back or attempting to shuttle it elsewhere for release - which the lay-person can't afford either.
3. You need repeat blocks from the same person to build up trust in signatures over time, which makes it harder for someone who is just solo mining a block every now and then to get his block accepted by the network.
4. You still can't stop someone from setting up two servers (or N, for that matter) on the same hardware which look very different to the outside world, and using those to collude against the network - something a pool operator would have the power to do.
sr. member
Activity: 252
Merit: 250
January 18, 2014, 05:56:08 AM
#23
Quote
because delaying the release of blocks to the network to reduce local difficulty per block guarantees they will exceed the valid time limit.

How? What mechanism causes this? Are you arguing you want blocks to be released "quickly enough"? Are you arguing they should be released "slowly enough"? How can you be sure that a block will be found in your window?

Are you saying that miners have to release the block within a certain amount of time of the latest time-stamped transaction in their block? What about the fact that people generating transactions (on their smart-phones, etc., which I can walk up to anyone and ask for the time and see up to a minute or two difference with) may not have very well-synced clocks? What about the fact that miners *don't* have to include the latest transactions in their block? Transactions accumulate when they don't have fees; I had a transaction that took multiple days to receive its first confirmation just because I forgot to include the fees, while other transactions were verified no problem. Now you're saying the miners have to include the absolute latest transactions in their blocks so that they can verify to the rest of the network they're not spoofing time-stamps?

The bitcoin protocol already contains checks for block validity, one of which specifically relates to the block time stamps. This has nothing to do with including the latest transactions or not. It has to do with the time the block was picked up by the local node and the time it was relayed from the previous node.
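For reference, the two timestamp checks as I understand them (values from the reference client; this is just a sketch, not the actual code):

Code:
# Sketch of Bitcoin's block-timestamp rules as I understand them:
# the stamp must exceed the median of the previous 11 blocks, and
# must not run more than 2 hours ahead of network-adjusted time.
import statistics

MAX_FUTURE_SECONDS = 2 * 60 * 60  # 7200 s

def timestamp_is_valid(block_time, prev_11_times, network_adjusted_now):
    # Rule 1: greater than the median-time-past of the last 11 blocks.
    if block_time <= statistics.median(prev_11_times):
        return False
    # Rule 2: not more than 2 hours ahead of network-adjusted time.
    if block_time > network_adjusted_now + MAX_FUTURE_SECONDS:
        return False
    return True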

If a pool wants to attempt a 51% attack, they need to release 51% of all the blocks over a given time period.

My 'proof-of-connectivity' would integrate checks and balances for 32 consecutive time stamps and bring in 'local difficulty' modifications that discourage consecutive blocks from local nodes and favour those from more distant nodes.

The one problem I haven't gotten around is what would happen if 2 pools colluded to create a fast pipe between each other while located on opposite sides of the world, so that they could 'ping-pong' blocks between each other with low latency but guaranteed inclusion in their chain. Now, I believe that the timestamps for bitcoin are universal rather than local, and nodes would have no way of preferring blocks from a particular peer over another, so this may get around that issue.
newbie
Activity: 27
Merit: 2
January 17, 2014, 09:48:14 PM
#22
Quote
because delaying the release of blocks to the network to reduce local difficulty per block guarantees they will exceed the valid time limit.

How? What mechanism causes this? Are you arguing you want blocks to be released "quickly enough"? Are you arguing they should be released "slowly enough"? How can you be sure that a block will be found in your window?

Are you saying that miners have to release the block within a certain amount of time of the latest time-stamped transaction in their block? What about the fact that people generating transactions (on their smart-phones, etc., which I can walk up to anyone and ask for the time and see up to a minute or two difference with) may not have very well-synced clocks? What about the fact that miners *don't* have to include the latest transactions in their block? Transactions accumulate when they don't have fees; I had a transaction that took multiple days to receive its first confirmation just because I forgot to include the fees, while other transactions were verified no problem. Now you're saying the miners have to include the absolute latest transactions in their blocks so that they can verify to the rest of the network they're not spoofing time-stamps?
sr. member
Activity: 252
Merit: 250
January 17, 2014, 09:34:34 PM
#21
How long does it take to send a block from the US to China?

This is not an implementation aimed at counteracting collections of hashing power in pools, but one specifically addressing the release of long 51%-mined chains.

Net difficulty is already adjusted according to block times. Why can't an extra difficulty be built in and carried along according to inter-node latency?

Requiring a long valid chain of time stamps ensures 51% attacks can't happen because delaying the release of blocks to the network to reduce local difficulty per block guarantees they will exceed the valid time limit.

Even if a pool is selfishly mining and finding 1 block every minute, by the 3rd block their local difficulty will have increased to the point where it no longer takes 1 minute. Holding blocks to increase apparent latency whilst rejecting outside ones makes it more likely that the block chain will grow in the rest of the network instead.
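As a rough sketch of the arithmetic (the penalty factor here is made up purely for illustration):

Code:
# Hypothetical: each consecutive block from the same local node
# multiplies its 'local difficulty', and hence its expected solve
# time, by a penalty factor. The factor would need real tuning.
PENALTY = 2.0          # illustrative per-consecutive-block multiplier
base_solve_time = 1.0  # minutes - the pool's current rate

local_difficulty = 1.0
for n in range(1, 4):
    expected = base_solve_time * local_difficulty
    print(f"consecutive block {n}: expected solve time {expected:.0f} min")
    local_difficulty *= PENALTY
# Block 1: 1 min, block 2: 2 min, block 3: 4 min - by the 3rd block
# the pool's speed advantage at home is gone, and a distant node's
# block becomes the cheaper one to build on.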

And continued up-and-down spoofing of the time stamps would get caught by clock drift relative to the rest of the network and by time hashes not matching.

All we need to stop 51% attacks is to prevent sequential blocks being found by single pools. This would achieve that.
newbie
Activity: 27
Merit: 2
January 17, 2014, 08:41:37 PM
#20
How would this stop anyone?

So your argument is stopping sequential blocks from the same node. Assuming you could stop them from spoofing timestamps (your example is them passing blocks back and forth, but really they could just hold blocks instead; they don't have to pass them back and forth to allow time to accrue), it *still* relies on people being able to see who generated the block. There is pretty much no way to enforce an identity check on the network - buying a VPS costs something like 20 euro a month?! I can get 5 IP addresses for 20 euro a month, on a machine MUCH more than capable of delaying blocks a pre-calculated amount of time, then releasing them to random nodes in the network. Just take your pool, isolate it from the network (they are isolated anyway; the individual miners in the pool can't tell the difference), and send the generated blocks through a different server that you control every time you get one.

You're trying to say it's hashed by the sender and the receiver - sure, the receiver wouldn't verify a hash that they saw as "too low difficulty", but the sender could just withhold the block and, after a certain amount of time, when the time-delay has added enough pseudo-difficulty to the block, release it - no one in the pool has to know, no one in the network, ONLY the malicious pool-operator node.

I think the error comes in when you say that an error accrues over the 32-block check. What exactly is the mechanism for that? Blocks are found at random times, there is no way to predict when the next one will be found - you can only look at statistics and say "well, in the past it's been once every 10 minutes *on average*, so probably in the next 10 minutes with a certainty of 68%" or something. So your fake chain of 32 blocks * 11 minutes (because of the delay from the mean) doesn't and can't look weird to the network - that looks perfectly fine - perfectly random means that 50 heads in a row can and will come up in a game of flipping coins.
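You can check this with a quick simulation (nothing protocol-specific here - just sampling exponential inter-block times, which is roughly what a constant-hashrate Poisson process produces):

Code:
# How suspicious is a 32-block stretch averaging 11 minutes when the
# true mean is 10? Sample and see.
import random

TRIALS, CHAIN, TRUE_MEAN = 100_000, 32, 10.0
hits = 0
for _ in range(TRIALS):
    mean = sum(random.expovariate(1 / TRUE_MEAN) for _ in range(CHAIN)) / CHAIN
    if mean >= 11.0:
        hits += 1

print(f"P(32-block mean >= 11 min) ~ {hits / TRIALS:.1%}")
# Comes out around 28% - more than a quarter of honest 32-block
# windows look this "slow" by pure chance, so the network can't
# flag the delay as spoofing.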

So you have the selfish miner finding ALL the blocks in an attempt to re-write transaction history. He finds one - the network accepts, no problem (it's his first). He finds the next, waits just a minute, and for some reason the rest of the network doesn't find one in that time (which is completely possible; block times are all over the place), then releases that block. The network accepts.

What mechanism forces him to add the extra time he waited for the previous block into his next block? If you say his ID does, then we can agree there is no mechanism, seeing as how he can easily spoof an ID. So he mines his next block, based on the hash of the previous block, which was released at time X+x (X being the time he found it, x being the delay before he released it - and therefore the time he time-stamped it at) - but the network doesn't see X+x; all they see is Y, which is the sum X+x. He never releases the info that he delayed the block. So he doesn't have to delay his next block 2x; he can just delay it x again, and to the network they both look like legit blocks. The time doesn't add up - it's the same delay each time.

So if you had someone with absolutely massive mining power on the network, who on average could discover blocks 2x faster than a normal miner (which we're already seeing quite a bit of, GHash.io reaching the 40%-ish mark recently), he could find a block super quick, then delay it as much as possible to the point where another miner could potentially mine a competing block, then release his block with the correct difficulty (which includes the difficulty accrued in that delay).
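To make that concrete (illustrative numbers only):

Code:
# The network only ever sees release times Y = X + x. A *constant*
# hold-back x shifts every stamp equally, so the visible gaps are
# identical to the honest gaps - nothing accrues to detect.
find_times = [0, 7, 19, 26]   # X: when the pool really found each block (min)
DELAY = 1                     # x: fixed hold-back, in minutes

release_times = [t + DELAY for t in find_times]   # Y: all the network sees
honest_gaps  = [b - a for a, b in zip(find_times, find_times[1:])]
visible_gaps = [b - a for a, b in zip(release_times, release_times[1:])]

print(honest_gaps)   # [7, 12, 7]
print(visible_gaps)  # [7, 12, 7] - indistinguishable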

Now, if what you're talking about is that a miner must generate a block, release it to the network, wait for a certain number of other miners to sign it, then bring it back in and sign it again before he releases it and it's validated by the network - that's also easily spoof-able. He just sends it to a colluding server (one he owns; in fact it doesn't even have to be a separate server - it could just be another node running on the same hardware as the original pool operator, and you could have several such nodes running on that server). The colluding server holds the data for a while (however long it takes), signs the block at the later time and "sends" it back - the data actually only travels a few micrometers onto a different part of the same hard drive. There is nothing forcing the colluding server to send the block back as quickly as it can, and there is nothing that can enforce the identity of the colluding server.

The way I see it, there really can't be much benefit from encouraging lazy or slow transmission of data. Slowing down data is all too easy - delaying and holding are completely figured out. Speeding up data? That's a hard problem, a problem that takes work, actual work, to solve. And that's what mining needs to be based on - actual work, or at least actual proof that *something* has been sacrificed; you have to make it worth it that someone will sacrifice themselves for the network (the block rewards, in bitcoin's case). Nothing about delaying data is difficult.
sr. member
Activity: 252
Merit: 250
January 17, 2014, 06:54:56 PM
#19
To address your questions in reverse order:

1. A pool would still be able to solve blocks very rapidly behind a single node; it would just reduce the likelihood of them solving sequential blocks, which is what leads to 51% attacks. By encouraging them to solve a block served from a more distant node next, and discouraging them from solving a more locally generated block, good network health is promoted.

Local block --> Remote block --> Local block --> Remote block --> etc.

If they try to 'game' this system with a nearby server running a purposely faulty clock, just serving out locally generated blocks that have been stamped with a fake release time (because they can't re-stamp a block that's already been released without this being detected, even if it is one of their own), the error will accrue over the 32-block check.

Example:

Block released from pool at t=0, received by 'fake lag node' at t=0 (they're right next to each other), fake delay introduced to add +1 and block is sent back to the pool. In order:

Sent by pool at t=0
Received by lag node at t=1 (fake lag of 1 added by server but they're right next to each other so really t=0)
Received back at pool at t<1 (the pool server keeps the same original time as before, which now falls before that of the lag node).

And because this is all hashed and encoded by the receiver and sender, along with the hash of the 'local difficulty' level of the previous 32 blocks to confirm the timestamps/purported connectivity, this won't be successful - they would have to shuttle blocks back and forth between the pool and lag node with ever increasing time delays.

Because they only profit by getting their blocks accepted by the rest of the network, they would have to release them for external checking at some point before their time stamp exceeds the 70-minute median limit or 90-minute heartbeat, but the spoof requires them to successfully solve and fake the timestamps on a chain of 32 blocks. At 10 minutes per block, they can't do this, even with a local difficulty adjustment of 0.

The accrued time difference through spoofing would therefore be seen as fake - a fake chain of 32 blocks * 10 mins/block PLUS the additional spoof time of a few minutes exceeds the limits of acceptance for the rest of the network (it exceeds the 70-minute median limit and the 90-minute heartbeat). We could perhaps even just require a chain of 10 timestamps in this case.
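Roughly, I imagine the check working something like this (a sketch only - the exact windows and how the drift is measured are my guesses at an implementation, not settled details):

Code:
# Hypothetical 32-stamp check: claimed send times must move forward,
# and drift between claimed send time and local receipt time must
# stay inside the 70-minute median limit and 90-minute heartbeat.
import statistics

MEDIAN_LIMIT = 70 * 60   # seconds
HARD_LIMIT   = 90 * 60   # seconds

def chain_is_valid(sent_stamps, receive_times):
    # Any 'time reversal' in the claimed send times marks a spoof.
    if any(b <= a for a, b in zip(sent_stamps, sent_stamps[1:])):
        return False
    drifts = [r - s for s, r in zip(sent_stamps, receive_times)]
    if abs(statistics.median(drifts)) > MEDIAN_LIMIT:
        return False     # accrued spoof time blows the median limit
    if any(abs(d) > HARD_LIMIT for d in drifts):
        return False     # a single stamp beyond the heartbeat
    return True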

2. If they don't accept transactions from the rest of the network, or have their blocks accepted by the rest of the block chain, they're not actually mining bitcoin. They can sit whirring away at their own blocks all they want, but they won't accomplish anything.
newbie
Activity: 27
Merit: 2
January 17, 2014, 04:31:47 PM
#18
Hmmmm, I'm liking the idea more, using the network and previously generated timestamps to validate the timestamps.....


But still, I don't think it's usable. What if you just decide to not include transactions past a certain point in your block? It encourages miners to not include newer transactions in their blocks so they have more chance of being accepted. Also, what if you generate a block then just wait? Sure, it's a game you could lose, but it encourages miners to think about the benefits of not supporting the network. All the current systems encourage miners to benefit the system as much as possible.


Even that way, how does this discourage pooled mining? How does this make people not gang up their computing power against the network?
sr. member
Activity: 252
Merit: 250
January 17, 2014, 04:11:01 PM
#17

Quote
Hmmmmm.....It's an interesting idea, but I still don't think it's enforceable. People would spoof timestamps by making them look *earlier*, not later, so it looks like they are less connected. And it's not relative to any block, because there is NO WAY you can enforce not spoofing a miner's identity, so each time they can just make it a set amount of time before the block generation that they spoof; they don't have to get further and further into the past.

Quote
...I thought you said that you're rewarded for lower connectivity? So large differences = high 'local difficulty' added to the apparent difficulty?


It is enforceable because each block would have to contain the plaintext, hash values, and checksum hash for the timestamps from the last 32 blocks - these can be checked when a block is received, and if the timestamps are incorrect (i.e. spoofed) the block is rejected. I chose 32 because it makes a 51% attack against the timestamping very unlikely - they'd need to produce a chain of 32 consecutive blocks in order to spoof the stamps.

Furthermore, the previous 'local difficulty' can be hashed and included in the block header, and this is then compared to the list of stamps and the block's contents to check its validity when first received.
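As a sketch of that commitment (names and layout are mine, purely illustrative):

Code:
# The header carries the last-32 timestamps in plaintext plus a
# checksum hash over them, and a hash of the previous 'local
# difficulty'; a receiver re-derives both and rejects mismatches.
import hashlib

def commit(timestamps, prev_local_difficulty):
    stamp_hash = hashlib.sha256(
        ",".join(str(t) for t in timestamps).encode()).hexdigest()
    diff_hash = hashlib.sha256(
        str(prev_local_difficulty).encode()).hexdigest()
    return stamp_hash, diff_hash

def header_is_consistent(timestamps, prev_local_difficulty,
                         stamp_hash, diff_hash):
    # Any altered stamp changes stamp_hash, so spoofing is detected.
    return (stamp_hash, diff_hash) == commit(timestamps,
                                             prev_local_difficulty)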

And rewarding larger differences between the 'sent' and 'received' stamps is the same as rewarding lower connectivity. Of course, these stamps are compared to the local node time, which is synchronised to the rest of the network anyway, so it all evens out. We're not talking milliseconds of latency here, but more likely tenths of a second or even whole-second differences.
newbie
Activity: 27
Merit: 2
January 17, 2014, 01:26:42 PM
#16
Ok, I understand what you're saying.

Assuming you could enforce time-stamping accuracy (which I think you can to within a few milliseconds, good enough), what would this idea get us? How would it stop pooling?

Also, you have to decide ahead of time (before block generation) whether a block is difficult enough or not. So does the server generate a block that they "think" will be difficult enough then send it out? What if it's not?

Also, network loads change all the time; how can this be a consistent measure to base difficulty on? And how different is the latency for two massive nodes vs. the latency for two small nodes? Is there enough difference to base this difference in difficulty on?

** I just re-read your post, understand it better (don't believe much in erasing ideas though, so I left the last two paragraphs). **

It looks like you're saying enforce it on the receiving side, as in a server gets a block, looks at the time-stamp and then decides if they received it late enough to make the "added wait difficulty" enough to accept the block. Is this right?

Hmmmmm.....It's an interesting idea, but I still don't think it's enforceable. People would spoof timestamps by making them look *earlier*, not later, so it looks like they are less connected. And it's not relative to any block, because there is NO WAY you can enforce not spoofing a miner's identity, so each time they can just make it a set amount of time before the block generation that they spoof; they don't have to get further and further into the past.

Quote
Summary: Using timestamps, create a hash of local node ID and local node time, compare this with the time the recent block was received. Large differences = low 'local difficulty' to add to background difficulty, small differences = high 'local difficulty' to add to background difficulty. This encourages block sharing with more distant nodes.

...I thought you said that you're rewarded for lower connectivity? So large differences = high 'local difficulty' added to the apparent difficulty?

I like the idea of using the network itself as proof of work or proof of something, but I just don't see it as enforceable. With proof-of-work we have a definite "you did this much *stuff*, here's a reward", and it's mathematically enforceable, whereas with networks it's too easy to spoof. Maybe if we found a way that forced you to connect to unique nodes every time to get larger rewards? But then you have to somehow stop spoofing, which I just don't think is possible.
sr. member
Activity: 252
Merit: 250
January 17, 2014, 12:25:00 PM
#15
Quote
Right, but the thing is, I think it would be really easy to fake, whereas you can't fake things like one-way-function proof-of-work.

How do you measure connectivity in a way that can't be faked? Does the whole network try to ping that server? They could just delay the response to make it look like they're high-latency.

Besides, the two massive co-located servers could pool up really easily and present just one exit node to the network - then they only have to worry about making one connection look high-latency through faking.

Perhaps you could do something where if the same entity wins two blocks in a row the reward is halved for each successive block, then brought back up to full reward once someone else wins a block - but that's not enforceable at all either. It's too easy to look like anyone else in a network; they could just relay the block to a colluding miner and have that miner release the block for a small reward.

If the network was de-pseudonymized it would be possible, but then you lose a lot of the advantages of Bitcoin.

I haven't read your follow-up in detail yet, but again I don't think you've fully understood what I'm proposing.

No pinging required. It would just use a combination of the node ID, the time stamps for block received and transmitted, and the fact that time moves in a forward direction.

2 fast co-located servers would share a local node. Their timestamps would be near-identical. You could sign the block with an irreversible stamp of 'node ID & timestamp'. Any 'reversal of time' to try to spoof the system would be picked up.

The local connectivity biases the difficulty further as a 'local difficulty factor' ADDED to the network's general 'difficulty' factor which is a function of hash rate.

I.e. 'local difficulty' is a function of node ID and timestamps, compared to 'difficulty' in general, which is a function of hash rate.

Following from this, nodes are rewarded with lower difficulty for blocks with higher latencies (larger differences between the local node timestamp and the stamp of when the distant node sent the block out) than those with lower latencies.

Summary: Using timestamps, create a hash of local node ID and local node time, compare this with the time the recent block was received. Large differences = low 'local difficulty' to add to background difficulty, small differences = high 'local difficulty' to add to background difficulty. This encourages block sharing with more distant nodes.
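A sketch of that mapping (the constants are placeholders, not proposed values):

Code:
# Latency for the previous block = difference between its 'sent'
# stamp and the local 'received' stamp. Small difference (nearby
# node) -> large local-difficulty add-on; large difference (distant
# node) -> add-on falls toward zero.
MAX_LOCAL = 0.5   # illustrative: at most +50% difficulty
SCALE = 1.0       # illustrative: 1-second latency scale

def local_difficulty(sent_stamp, received_stamp):
    latency = max(0.0, received_stamp - sent_stamp)   # seconds
    return MAX_LOCAL * SCALE / (SCALE + latency)

print(local_difficulty(0.0, 0.01))  # ~0.495: near-local block, penalised
print(local_difficulty(0.0, 2.0))   # ~0.167: distant block, favoured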

Spoofing of local stamps to get low local difficulty can be detected because at some point the local node time will drift forwards out of the window for block validity, as the spoofing node would have to keep making its timestamps further and further into the future (local 'time reversal' would be detected otherwise). Time reversals or timestamp volatility at the local node would also be flagged as suspicious.

There could still be shenanigans by changing the local node ID AND the timestamp simultaneously, but this would be risky because other nodes would still be relaying valid blocks between each other which then get processed and enter the chain in a normal fashion with valid times.
newbie
Activity: 27
Merit: 2
January 17, 2014, 04:58:10 AM
#14
Had another idea. Tell me if this should just be an edit to the previous post, not 100% on forum etiquette here yet (mainly I'm afraid this will be seen as self-bumping, when really I just have another idea to throw out there, but I also want the idea to be read by people who have read the previous ones already).

Why not make it so that the difficulty of the block you find has to scale with how many transactions you include in that block? Then small miners can work on generating 1-transaction blocks, and bigger miners can develop larger blocks with more transactions but have to meet higher difficulty requirements - then they get more transaction fees, as well as a larger block reward (base the block reward on how many transactions you include as well). Everything else about verifying transactions is the same; this just allows even more granularity in the mining power required to mine the main chain.

We still want a target minimum difficulty to generate a block with zero transactions in it (would we even allow a zero-transaction block? I suppose it still adds weight and thus value to the chain, but the block reward should be relatively small), and that's the main difficulty that is adjusted up and down every so often. Maybe take the quickest quartile of blocks generated over the last difficulty period and make it so that they average out to our minimum block time of 10-30 seconds or so. Or we could remove the time element a bit and make it transaction-based: we have a target number of transactions verified per block, or a target block size (in kB), and adjust the difficulty so that more or fewer transactions are included in each block. But then every transaction you include in your block increases the difficulty ever so slightly while increasing the block reward too.

EDIT: With an average transaction number per block target instead of an average time per block target, we could make it even more difficult to scoop up tons of transactions and throw it all in a big block by making it so that adding transactions to your block gets exponentially harder with the knee of the exponential right about at 1 standard deviation away from the target block-size. Then big miners would have even more incentive to move away from the main-chain blocks and mine on the super-chain, because they can make much more money there.
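A sketch of that scaling (all constants made up for illustration - see how quickly it becomes impractical past the knee):

Code:
# Difficulty grows gently with transaction count, then exponentially
# past a knee placed one standard deviation above the target.
import math

TARGET_TX, SIGMA_TX = 500, 100      # illustrative target and std dev
KNEE = TARGET_TX + SIGMA_TX

def block_difficulty(base, n_tx, per_tx=0.001, steepness=25):
    linear = base * (1 + per_tx * n_tx)                  # gentle growth
    penalty = math.exp(max(0, n_tx - KNEE) / steepness)  # knee at +1 sigma
    return linear * penalty

for n in (100, 500, 600, 700):
    print(n, round(block_difficulty(1.0, n), 2))
# 100 -> 1.1, 500 -> 1.5, 600 -> 1.6 (the knee), 700 -> ~93:
# far past the knee, big blocks stop being worth mining here.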

It would be the same idea as with the super-blocks, where a super-block has to have a difficulty that scales with the summed difficulty of the sub-blocks they combine, but with transactions instead. The parameters for how much the difficulty scales with the number of transactions would have to be figured out - but then we're rewarding people more for supporting the network more. EDIT: Perhaps we would have two difficulty parameters adjusted each time - the main difficulty scales with the average transaction number per block, and the sub-difficulty scales with the standard deviation of the transaction number per block. We could even keep a time component, by letting it weakly influence the target average transaction number (so the average transaction number affects the difficulty, and the target time for blocks affects the target average transaction number - we could have just a simple oscillator function that draws the target block time back with more and more force as it starts to deviate more).

It would also effectively FORCE big miners out of the smaller main-chain group of miners - either that or they would have to generate lots of really small blocks which we could make less profitable. If the big miner comes down to mine the main chain, they'll be building up one of their really big blocks, then suddenly some of the transactions they were using to build that block would be swept out from under them, making them "start over". If someone had so much more mining power that they could generate a much larger block FASTER than some small miner could generate the small block, that would be a problem. Maybe make the difficulty scale up a ton after a certain number of transactions (say like, 1000 or something?) to where it becomes impractical to mine blocks with that many transactions when you could just be mining on the super-block chain?

A similar idea could be used to set super-block difficulties - the base difficulty (of a zero-sub-block super-block) would be set by the maximum difficulty of any one sub-block in the sub-chain. But then adding sub-blocks to your super-block requires adding difficulty to that base difficulty. The problem is if mining power starts to desert the network - suddenly you don't have enough mining power to mine super-blocks because just that one sub-block is too big. Though it wouldn't stop the main chain, so I guess it's not actually that bad of a problem. And if mining power deserts the network then people will just go back to mining the sub-blocks and forget about mining the supers for a while, though they are still there. Probably fine to happen; they only add security and don't detract any by not being there.
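A sketch of that rule (the add-on rate is a placeholder):

Code:
# Super-block difficulty: base = hardest single sub-block it covers,
# plus an add-on that grows with the sub-blocks included.
def super_block_difficulty(sub_difficulties, per_sub=0.05):
    base = max(sub_difficulties)              # the one big sub-block
    extra = per_sub * sum(sub_difficulties)   # grows as blocks are added
    return base + extra

print(super_block_difficulty([1.0, 1.2, 0.9]))  # 1.2 + 0.155 = 1.355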

Really it seems that the problem boils down to block rewards. It's a way to slowly release currency into circulation, but it's the largest problem with the mining algorithms it seems. If instead it was all based on transaction fees (as it will be eventually), I think it would be much simpler to program in incentives and keep a consistent system over time.
newbie
Activity: 27
Merit: 2
January 17, 2014, 02:08:58 AM
#13
Right, but the thing is, I think it would be really easy to fake, whereas you can't fake things like one-way-function proof-of-work.

How do you measure connectivity in a way that can't be faked? Does the whole network try to ping that server? They could just delay the response to make it look like they're high-latency.

Besides, the two massive co-located servers could pool up really easily and present just one exit node to the network - then they only have to worry about making one connection look high-latency through faking.

Perhaps you could do something where if the same entity wins two blocks in a row the reward is halved for each successive block, then brought back up to full reward once someone else wins a block - but that's not enforceable at all either. It's too easy to look like anyone else in a network; they could just relay the block to a colluding miner and have that miner release the block for a small reward.

If the network was de-pseudonymized it would be possible, but then you lose a lot of the advantages of Bitcoin.
sr. member
Activity: 252
Merit: 250
January 16, 2014, 06:34:16 PM
#12
Exbuhe27 - thanks for considering my 'proof of connectivity' issue, but I think you're getting the wrong end of the stick. The network wouldn't REWARD low latency connections but actually PUNISH them, in a small way.

Two massive co-located mining servers would be rewarded less than the isolated miner in Mongolia who successfully releases a block.
newbie
Activity: 27
Merit: 2
January 16, 2014, 02:19:58 PM
#11
Damn newbie limits. Frustrating. But understandable. They should at least save my reply so that I don't have to type it again.

I see what you're saying now though.

I don't think it's enforceable. People in a mining pool don't have to be connected to the bitcoin network at all, just to the pool "exit node". The bitcoin network only sees the exit node, I think. The rest of the network just sees the measure of hashing power, and not how it's being generated. p2pool is different, but it's also not possible to force people to use p2pool, so something like that would actually *discourage* p2pool even more, as the people in the p2pool would be "see-able" by the network, right? Not sure, gotta look at the protocols more.
newbie
Activity: 27
Merit: 2
January 16, 2014, 02:13:43 PM
#10
I thought about a proof-of-connectivity thing or something where you had to connect to X servers to learn a secret (Shamir's secret sharing), or something.
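For reference, a minimal sketch of the Shamir split/recover step (the standard textbook construction; needs Python 3.8+ for the modular inverse):

Code:
# Split a secret into n shares so that any k of them recover it,
# using a random degree-(k-1) polynomial over a prime field.
import random

PRIME = 2**127 - 1  # a Mersenne prime comfortably above small secrets

def make_shares(secret, k, n):
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):      # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 gives back the constant term.
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(123456789, k=3, n=5)
print(recover(shares[:3]))   # 123456789 - any 3 of the 5 shares work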

But it would put heavy strain on the network. Really heavy strain I think. Which is good in the sense that it makes it "difficult", but it also slows down the rest of the internet, which sucks. It would encourage building high-speed links between places, but most people would abuse the crap out of it - build two miners super close, run virtual machines on them, hard-wired ethernet connections, etc......

But you could also say they have to have a "proof of volume" of flow, like they have to prove they provided a certain amount of connectivity for lots of people - effectively changing how people are using ISPs, they get money just for connecting, maybe per-kB fees, not the shitty ways they are doing it now. We could also say that they only get credit for some really advanced super secure encrypted protocol we develop so that it encourages people to only use encrypted communications?

What about servers then just sending random messages back and forth to each other to make it look like they sent lots of traffic? It could probably be rejected by the network on a "unique connection" basis - like every time a packet between the same two people is sent within the same 10 seconds, its reward is cut in half? But then our protocol wouldn't be very good, because the mid-points would know the end-points. Interesting ideas.....

Anyway, how does it fit specifically into what we've been talking about?
newbie
Activity: 27
Merit: 2
January 16, 2014, 02:03:55 PM
#9
Alright, first thanks for reading my posts, I know they're long.

I think you're right that I need to look into how the full-nodes/clients/miners all interact with each other more. I think writing an alt-coin would be a good way to do that.

Quote
Changing the protocol means changing the reference client.

In my view, they should split the client into a server (bitcoind) and a client mode.

The server would just verify transactions and blocks, and wouldn't be able to create new ones.  You pass it blocks and transactions and it tells you what is valid.

It simply defines what counts as valid transactions.

This would be a much simpler piece of software.

Making changes is hard though.  It has been described as redesigning a plane while in flight.

They want to keep risks as low as possible (which is reasonable).

A formal/official p2p mining pool system means that they don't have to update the official client.

Yeah, I figured this is pretty much what it would come down to. By separating the reference client into the two modes, do you mean that you can, as a participant in the network, run the client (which doesn't store blocks and only verifies the ones relevant to you) OR run the server (which stores and verifies blocks)? Hmmm, that seems like it would be a good split to have - the advantage being that you can update one without tinkering with the other as much, yes?
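Something like this is how I picture the verify-only side (completely hypothetical names - just the "pass it things, it tells you what's valid" shape from your description):

Code:
# A pure verifier: holds validity rules, creates nothing.
class VerifyOnlyServer:
    def __init__(self, rules):
        self.rules = rules            # list of (name, predicate) checks

    def check(self, obj):
        failures = [name for name, ok in self.rules if not ok(obj)]
        return (not failures, failures)

# Toy usage with stand-in rules (real consensus rules would go here):
rules = [
    ("has_inputs",  lambda tx: bool(tx.get("inputs"))),
    ("has_outputs", lambda tx: bool(tx.get("outputs"))),
]
server = VerifyOnlyServer(rules)
print(server.check({"inputs": ["a"], "outputs": ["b"]}))  # (True, [])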

Quote
So, you need to understand the protocol.  Maybe what you want needs a hard-fork change (fails backwards compatibility).  But, you should try to find a soft-fork way of getting your ideas accepted.

The interesting thing is this doesn't really *conflict* with the existing protocol, in that it doesn't change how current blocks are generated other than the target block-time (which maybe we don't have to adjust, though it could be better to do so I think). This just adds another layer of blocks on top of those blocks, then another, then another. So maybe the old clients (people who don't update) wouldn't actually notice the difference - except for the fact that we're now awarding block rewards for the super-blocks, which would look unspendable to the old clients - is there any way around that?

If my interpretation of splitting the reference client and server is correct, then that's all it would take to make my idea implementable, yes? You could have people running the client, who just participate in transactions; they don't need to know about the super-blocks that are giving their transactions more security - they just need to check that their current transaction is valid in the main chain (the transaction chain), which is what the whole bitcoin setup already does. Then people running the servers/full-nodes are the only ones who would have to update their software to support super-blocks (or other protocol changes that don't affect how transactions are viewed)?

Maybe we could make it so that the super-blocks generate a different coin? Not directly spendable as bitcoin, but an alt-coin on top of bitcoin with a different value system? Then it may as well be an alt-coin that quite literally only harvests the bitcoin block-chain, then its own block-chain. But we would still have to make ties between them which force them to depend on each other's success; otherwise the bitcoin blockchain could say "screw this alt, I'm going to forget that part of the protocol", effectively reducing tons of hashing power and proof of work to nothing. It doesn't seem like a good idea to completely separate them.

Anyway, I think the benefit of this idea is that it provides incentive for really big miners to remove themselves from writing transaction history, but allows them to still solidify the written transaction history. This way smaller more distributed miners are writing the transactions (less chance of a 51% at that level if any entry level miner can consistently get rewards and doesn't need to pool), and the big miners then come in and harden the transactions. The levels above the first one would be pretty ok to pool up in even - and that's probably what we would see is entry level miners coming in, difficulty going up, then a bunch of them forming a pool and moving to the next level so that they can make more money, then more entry level miners coming in.

As far as referencing a money value of hash power - that was just an ideal. Instead you would have to base it on the hashing power of the network at each level I think (which would mean referencing the difficulty itself) - which may be a close approximation to dollar value in short time-scales. Ideally the difficulty to mine at the first level would scale with how much hashing power an entry level miner can buy, but that's unrealistic to calculate - so if we just shoot for adjusting the rewards so that the difficulty at this level stays fairly constant over time or follows some expected hashing power ease-of-access curve, maybe that's good enough. Which of course brings in the idea of playing around with rewards in the same way we play around with difficulty to provide incentive for benevolent action.

The next big issue is the inflation that this system brings. Is it too much? Inflation in this system would pretty much follow the fall in hardware prices - which maybe is a good metric, but the more I think about it, the more it seems that hardware improves much faster than other sectors of the economy. Maybe that will slow down? (*needs research*) Either way, that means we're building economic policies into the coin itself (which bitcoin does a bit already by picking a deflationary track), and perhaps we need to think about what the *best* economic policy to take is. Having currency supply scale with industry doesn't seem bad. Should we be making these decisions though?

And finally, what about when we "run out of transactions" and 1st level blocks? The first super-chain will catch up really fast, probably within a couple of days, the next one will take longer, the next one longer. How long will it take until we are at a block so massive that it encompasses the entire blockchain? Can we play with how much more difficult super-chains are to prevent that from happening very quick? How much faster will transactions be in the future? How much faster will mining be? If we're trying to future proof a coin these seem to be questions to consider.


Edit: Damn, just realized I take way too long to write these things. I guess I had it open for like 2 hours.
sr. member
Activity: 252
Merit: 250
January 16, 2014, 12:58:20 PM
#8
What if hashing difficulty was made to be a function of some measure of 'proof of connectivity' - the best-connected clients are slightly penalised vs. those with higher latencies?