Topic: Blink - The most scalable alternative to blockchain - page 2. (Read 1058 times)

newbie
Activity: 1
Merit: 0
What do you have with the site?
RNC
newbie
Activity: 42
Merit: 0
That's hardly an analysis. This is the core of your consensus protocol; you can't gloss over it with a throwaway paragraph.

With a horizontal consensus like this (as opposed to a vertical chain of consensus, like a PoW chain) it is possible for the consensus to stall completely due to inability to reach an agreement. You've allowed a maximum of two rounds of consensus to occur, so what happens when both of these fail to reach an agreement? Why can't that happen?

Hi, Much respect and I like many of your posts here!

I think what he is calling "lockers" is exactly what's needed: a cluster of coordinator nodes acting as a single unit,
without any one of them gaining total control.

Without this, the network chatter becomes far too high in scalable systems, so you have to make a compromise.
Sending one transaction to 20,000 nodes so they can argue about it is pure madness anyway; it should be
limited down to something like a hundred nodes based on the public address of the wallet, so in effect the address
itself becomes like an IP range within the network, if you follow what I am saying.
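If it helps make the idea concrete, here is a minimal Python sketch of that address-range scheme. Everything in it is hypothetical: the node count, the cluster size, and the hash-prefix mapping are illustrative choices, not anything from Blink's paper.

```python
import hashlib

# Hypothetical sketch: hash the account address so that each address
# deterministically maps to a small contiguous "cluster" of the node
# list, much like an IP range. NODE_COUNT and CLUSTER_SIZE are
# illustrative values, not parameters from the paper.
NODE_COUNT = 20_000
CLUSTER_SIZE = 100

def locker_cluster(address: str, node_count: int = NODE_COUNT,
                   cluster_size: int = CLUSTER_SIZE) -> list[int]:
    """Return the indices of the nodes responsible for this address."""
    digest = hashlib.sha256(address.encode()).digest()
    start = int.from_bytes(digest[:8], "big") % node_count
    # Take a contiguous range of node indices, wrapping around the ring.
    return [(start + i) % node_count for i in range(cluster_size)]

cluster = locker_cluster("wallet-abc123")
assert len(cluster) == CLUSTER_SIZE
```

The point is only that the mapping is deterministic: any node can compute, from the address alone, which hundred nodes are responsible for it, with no coordination traffic.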

I award you one credit for your critical thinking
newbie
Activity: 43
Merit: 0
This project is definitely on my watchlist. Good luck to you guys!
newbie
Activity: 37
Merit: 0
Having to resort to an ICO already shows some technical flaws in my opinion.

The supply should be put out there automatically, in a more elegant way than some ICO, which is why I think Bitcoin is already superior to this project.

Also, a project doesn't work until it has been attacked by governments numerous times and they have failed at each attempt (every hard-fork attempt on Bitcoin has been governments trying to screw it up). It must also have held a couple billion dollars' worth for a couple of years without any hacks.

Until then, no amount of whitepapers is going to convince me to sell my BTC to buy any other so-called "better alternative to Bitcoin". Good luck with your project in any case.
Hey, I agree with you that there's no reason for most people to put money in a project just because of a whitepaper which very few people can actually go through. That's why we just want people to try to understand the algorithm and give us feedback now.

Most of the Bitcoin supply has already been distributed to less than 0.1% of the world's population; I don't think that's a very elegant delivery mechanism. Nor would an ICO solve this issue, which is why we first want to get some seed investment from qualified investors (that's why we're not posting here to raise anything). In a competitive market, having the best tech is not enough to win; you still need marketing and PR. Bitcoin is not that complicated to implement compared to our protocol, and it came at a time when you didn't need marketing budgets to get adoption.

After that, we want to do a long-term coin distribution where as many people as possible have access to it, while allowing us to raise capital for development. Calling it an ICO is a bit of a stretch, but there's no great terminology, since everyone seems to just try to raise as much money as possible quickly. Frankly, we don't know the details of this phase yet, but it's a separate problem from the consensus protocol, one we can provide details on later.
legendary
Activity: 1372
Merit: 1252
When is the ICO out?
Our intention is to have an ICO as quickly as possible, but first we need to raise seed capital, build a team of great advisors, and build a community around the project. We believe this project has top-10 potential, so we don't want to rush things.

Having to resort to an ICO already shows some technical flaws in my opinion.

The supply should be put out there automatically, in a more elegant way than some ICO, which is why I think Bitcoin is already superior to this project.

Also, a project doesn't work until it has been attacked by governments numerous times and they have failed at each attempt (every hard-fork attempt on Bitcoin has been governments trying to screw it up). It must also have held a couple billion dollars' worth for a couple of years without any hacks.

Until then, no amount of whitepapers is going to convince me to sell my BTC to buy any other so-called "better alternative to Bitcoin".

Good luck with your project in any case.
full member
Activity: 351
Merit: 134
Agree that the paper is hard to read; that's partially unavoidable due to the complexity of the algorithm and the way all the parts actually need to work together. What exactly would you say is babble?

We haven't seen anyone understand the way the system works in under a day (including any of us), and you started making comments about an hour after we posted, so pretty sure you just skimmed through the paper.
It's understandable that you may not have the time to do a rigorous analysis, we're just some guys asking for advice on a forum, but what's the point in asking for attack vectors of a thing you don't understand?

We've actually pointed you to page 31 in a previous post, where we explicitly state that a majority of the network needs to agree on the state within a certain number of rounds, or else the round is considered void, essentially penalizing nodes. Can you explain why that doesn't describe consensus freezing in the worst case?

There are a lot of other bits of analysis we could have added to the paper, but it would have made it over 100 pages long without adding anything essential to understanding the core. We've seen these concerns naturally clear up for people once they understood the core protocol, realizing that those issues apply to blockchain-based consensus and not to our algorithm.

I know you may be used to people posting crap algorithms that are easily dismissed by pointing out their obvious holes, and it's understandable why you'd fire off some automatic arguments; they're usually valid. Still, I hope you can see that's not the case with us, and that we're working from different assumptions than blockchain consensus developers.

Most of the paper is spent talking about trivial things. Consensus is the hard bit; IMO the entire paper should be about that: the possible attack vectors, how it responds, etc. You've made a lot of wild claims in there which rang alarm bells for me, such as solving the NaS problem, and even the claim about Byzantine tolerance is hard to verify because of the way the paper is written.

Take a look at Ripple's last ledger close paper; that is clear and concise, and your design shares a lot of commonalities with it.
newbie
Activity: 37
Merit: 0
Nothing the lockers say finalizes the state. A locker signed transaction needs to be confirmed by a majority of nodes to be accepted.
As previously stated, consensus is delayed until approved by the majority, even at the risk of voiding the round. The fact that you keep mentioning a fork means that you're not very clear on how the algorithm works. Did you understand this from the consensus section? It could be that the paper is not clear on it, but please try reading it again.

To be honest, the paper is very hard to read. A lot of babble, and hardly anything about the attack vectors or failure modes.

I'd like to see a description of how the consensus will freeze in the worst case, rather than forking in the case of a network partition, for example.

Agree that the paper is hard to read; that's partially unavoidable due to the complexity of the algorithm and the way all the parts actually need to work together. What exactly would you say is babble?

We haven't seen anyone understand the way the system works in under a day (including any of us), and you started making comments about an hour after we posted, so pretty sure you just skimmed through the paper.
It's understandable that you may not have the time to do a rigorous analysis, we're just some guys asking for advice on a forum, but what's the point in asking for attack vectors of a thing you don't understand?

We've actually pointed you to page 31 in a previous post, where we explicitly state that a majority of the network needs to agree on the state within a certain number of rounds, or else the round is considered void, essentially penalizing nodes. Can you explain why that doesn't describe consensus freezing in the worst case?

There are a lot of other bits of analysis we could have added to the paper, but it would have made it over 100 pages long without adding anything essential to understanding the core. We've seen these concerns naturally clear up for people once they understood the core protocol, realizing that those issues apply to blockchain-based consensus and not to our algorithm.

I know you may be used to people posting crap algorithms that are easily dismissed by pointing out their obvious holes, and it's understandable why you'd fire off some automatic arguments; they're usually valid. Still, I hope you can see that's not the case with us, and that we're working from different assumptions than blockchain consensus developers.
full member
Activity: 351
Merit: 134
The problem that you're facing is that of network partition combined with the fact that you allow a subset of lockers to participate in the sequence of processes that finalise the state.

You can end up with a fork if you allow a subset of all allocated lockers to finalise the state, due to network partition. If you require a majority to participate, you will get a stalled consensus until the partition resolves itself.

Nothing the lockers say finalizes the state. A locker signed transaction needs to be confirmed by a majority of nodes to be accepted.
As previously stated, consensus is delayed until approved by the majority, even at the risk of voiding the round. The fact that you keep mentioning a fork means that you're not very clear on how the algorithm works. Did you understand this from the consensus section? It could be that the paper is not clear on it, but please try reading it again.

To be honest, the paper is very hard to read. A lot of babble, and hardly anything about the attack vectors or failure modes.

I'd like to see a description of how the consensus will freeze in the worst case, rather than forking in the case of a network partition, for example.
newbie
Activity: 37
Merit: 0
The problem that you're facing is that of network partition combined with the fact that you allow a subset of lockers to participate in the sequence of processes that finalise the state.

You can end up with a fork if you allow a subset of all allocated lockers to finalise the state, due to network partition. If you require a majority to participate, you will get a stalled consensus until the partition resolves itself.

Nothing the lockers say finalizes the state. A locker signed transaction needs to be confirmed by a majority of nodes to be accepted.
As previously stated, consensus is delayed until approved by the majority, even at the risk of voiding the round. The fact that you keep mentioning a fork means that you're not very clear on how the algorithm works. Did you understand this from the consensus section? It could be that the paper is not clear on it, but please try reading it again.
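To make the "delay until majority, else void" rule concrete, here is a rough Python sketch. The two-attempt bound comes from earlier in the thread; the vote-counting shapes are invented for illustration, so treat this as a toy model rather than the protocol itself.

```python
# Toy model of the rule described above: a locker-signed transaction set
# only finalizes once a majority of nodes confirms it; if no majority is
# reached within MAX_ROUNDS attempts, the round is voided, never forked.
MAX_ROUNDS = 2  # assumed bound on consensus attempts

def resolve_round(votes_per_attempt: list[int], node_count: int) -> str:
    """votes_per_attempt[i] = confirmations received in attempt i."""
    majority = node_count // 2 + 1
    for attempt, votes in enumerate(votes_per_attempt[:MAX_ROUNDS]):
        if votes >= majority:
            return f"committed in attempt {attempt + 1}"
    # No majority within the allowed attempts: void the round.
    return "void"

assert resolve_round([40, 60], node_count=100) == "committed in attempt 2"
assert resolve_round([40, 45], node_count=100) == "void"
```

The key property the sketch shows is that there are only two outcomes, commit or void; there is no path that produces two competing committed states for the same round.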
full member
Activity: 351
Merit: 134
In the case you presented, if 10/20 lockers appear offline and later they try to broadcast the transactions for which they were the assigned lockers, the other nodes would simply ignore those transactions. Honest nodes always ignore transactions that were signed more than 2 rounds in the past. But indeed, there could be a conflict if the lockers come back online in the middle of the following round.

After the transaction broadcasting phase follows the sync & commitment. During this second phase nodes solve any kind of inconsistencies, e.g. revert double spends. Only after syncing there is a commitment vote on the global state some X rounds in the past. Anything that happened during those last X rounds is still subject to change through the sync process. A fork can only happen if the network doesn't agree on the commitment state.
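A toy Python sketch of those two rules may help: the two-round staleness window is from the text above, but the commitment lag X and all the data shapes are assumptions made for the example.

```python
# Sketch of the rules above: (1) honest nodes drop locker-signed
# transactions older than two rounds, and (2) the commitment vote
# targets the global state X rounds in the past, so everything newer
# stays revisable during the sync phase.
STALE_ROUNDS = 2   # from the text: ignore signatures > 2 rounds old
COMMIT_LAG = 5     # the paper's "X"; the value here is an assumption

def accept_transaction(signed_round: int, current_round: int) -> bool:
    """Ignore transactions signed more than STALE_ROUNDS in the past."""
    return current_round - signed_round <= STALE_ROUNDS

def commit_target(current_round: int) -> int:
    """The round whose state the commitment vote would finalize."""
    return current_round - COMMIT_LAG

assert accept_transaction(signed_round=8, current_round=10)
assert not accept_transaction(signed_round=7, current_round=10)
assert commit_target(10) == 5
```

Under these rules, lockers resurfacing mid-round can at worst create a conflict inside the revisable window, which the sync phase then resolves before the commitment vote.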

As for the propagation times, check out this screenshot from the paper you linked: https://imgur.com/a/PcwwT. Our transactions are currently ~300 bytes; that's why the propagation times of blockchain-based protocols don't apply to us.

The problem that you're facing is that of network partition combined with the fact that you allow a subset of lockers to participate in the sequence of processes that finalise the state.

You can end up with a fork if you allow a subset of all allocated lockers to finalise the state, due to network partition. If you require a majority to participate, you will get a stalled consensus until the partition resolves itself.
newbie
Activity: 37
Merit: 0
When is the ICO out?
Our intention is to have an ICO as quickly as possible, but first we need to raise seed capital, build a team of great advisors, and build a community around the project. We believe this project has top-10 potential, so we don't want to rush things.
full member
Activity: 406
Merit: 174
When is the ICO out?
newbie
Activity: 37
Merit: 0
What about in the case I've just presented, though? 10/20 lockers appear off-line, but really they're just delayed such that the round closes with the first 10/20, but then the other 10/20 also publish a round close for the same round, leading to a fork?

Here's your reference for propagation times: http://www.tik.ee.ethz.ch/file/49318d3f56c1d525aabf7fda78b23fc0/P2P2013_041.pdf

In the case you presented, if 10/20 lockers appear offline and later they try to broadcast the transactions for which they were the assigned lockers, the other nodes would simply ignore those transactions. Honest nodes always ignore transactions that were signed more than 2 rounds in the past. But indeed, there could be a conflict if the lockers come back online in the middle of the following round.

After the transaction broadcasting phase follows the sync & commitment. During this second phase nodes solve any kind of inconsistencies, e.g. revert double spends. Only after syncing there is a commitment vote on the global state some X rounds in the past. Anything that happened during those last X rounds is still subject to change through the sync process. A fork can only happen if the network doesn't agree on the commitment state.

As for the propagation times, check out this screenshot from the paper you linked: https://imgur.com/a/PcwwT. Our transactions are currently ~300 bytes; that's why the propagation times of blockchain-based protocols don't apply to us.
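For a rough sense of the numbers, here is a back-of-the-envelope Python sketch of gossip propagation. The fanout and per-hop latency are assumptions for the example, not measurements from any network.

```python
import math

# Back-of-the-envelope model: with gossip fanout f, a message reaches N
# nodes in roughly ceil(log_f(N)) hops. For a ~300-byte payload each hop
# is dominated by latency, not transfer time, so total propagation is
# hops * per-hop latency. All numbers below are illustrative assumptions.
def gossip_hops(nodes: int, fanout: int = 8) -> int:
    return math.ceil(math.log(nodes, fanout))

def propagation_estimate_ms(nodes: int, hop_latency_ms: float = 300.0) -> float:
    return gossip_hops(nodes) * hop_latency_ms

# ~20,000 nodes: 5 hops at 300 ms each is about 1.5 s, versus tens of
# seconds for megabyte-scale blocks where transfer time dominates a hop.
assert gossip_hops(20_000, fanout=8) == 5
assert propagation_estimate_ms(20_000) == 1500.0
```

This is the disparity being argued about: for full blocks, serialization and transfer time per hop swamp the latency term, which is where figures like 15 seconds come from.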

full member
Activity: 351
Merit: 134
We're trying to prove in the paper that actually there's no such case where a fork happens involuntarily.
That's the purpose of the synchronization phase, to make sure that forks happen only when nodes explicitly want them to happen. Nodes will keep making round proposals until they're sure that they've got a consensus. If nodes can't agree within a certain number of rounds, it's actually in their interest just to void the round.
If you're aware of a specific case when an involuntary fork can happen, please let us know.

Do you have any reference for the 15s gossip time? All of our testing showed much lower numbers, on the order of a few seconds at most, when all you needed to gossip was on the order of a few hundred bytes. Blocks are much larger, though, and that's where I'd imagine the disparity comes from.

What about in the case I've just presented, though? 10/20 lockers appear off-line, but really they're just delayed such that the round closes with the first 10/20, but then the other 10/20 also publish a round close for the same round, leading to a fork?

Here's your reference for propagation times: http://www.tik.ee.ethz.ch/file/49318d3f56c1d525aabf7fda78b23fc0/P2P2013_041.pdf
newbie
Activity: 37
Merit: 0
I understand that, but what about the round itself? Surely there's a case where a given round can be submitted with a separate set of transactions, leading to a fork?

Btw, worldwide network propagation using gossip is on average 15s, that's why ETH's block time is around there.

We're trying to prove in the paper that actually there's no such case where a fork happens involuntarily.
That's the purpose of the synchronization phase, to make sure that forks happen only when nodes explicitly want them to happen. Nodes will keep making round proposals until they're sure that they've got a consensus. If nodes can't agree within a certain number of rounds, it's actually in their interest just to void the round.
If you're aware of a specific case when an involuntary fork can happen, please let us know.

Do you have any reference for the 15s gossip time? All of our testing showed much lower numbers, on the order of a few seconds at most, when all you needed to gossip was on the order of a few hundred bytes. Blocks are much larger, though, and that's where I'd imagine the disparity comes from.
full member
Activity: 351
Merit: 134
There's no way to create a fork on those transactions, since the majority of the network needs to receive them in the next round at most to confirm them. Think of lockers as a pre-validation step, a gatekeeper to the system if you will. Since those transactions were not confirmed by a majority of the network, they will never be accepted, and everyone that applied them will undo them.

Between the time a transaction is signed by a locker and confirmed by the network, it's essentially pending. We expect running a full node to be something that only servers with decent processing power and good internet should do, so delays are rare. The latency between almost any pair of servers with decent internet in the world is less than 300ms. Even if delays happen between some pairs of nodes, there would have to be a global internet problem in order for a majority of the network to have these issues.

In essence, we want to optimize the system as much as possible for the average case, while still being robust in the worst case.

I understand that, but what about the round itself? Surely there's a case where a given round can be submitted with a separate set of transactions, leading to a fork?

Btw, worldwide network propagation using gossip is on average 15s, that's why ETH's block time is around there.
newbie
Activity: 37
Merit: 0
Our entire algorithm is focused around the idea that independent transactions can be treated independently. So in that round all the other transactions will be accepted, except those that were performed on accounts which had unresponsive lockers.

Ok, so, 10/20 go through in this round.

...What happens when the 10 which were offline, weren't actually offline at all, but just delayed due to latency and they produce a fork of the just submitted round with the last 10/20 transaction in it?

There's no way to create a fork on those transactions, since the majority of the network needs to receive them in the next round at most to confirm them. Think of lockers as a pre-validation step, a gatekeeper to the system if you will. Since those transactions were not confirmed by a majority of the network, they will never be accepted, and everyone that applied them will undo them.

Between the time a transaction is signed by a locker and confirmed by the network, it's essentially pending. We expect running a full node to be something that only servers with decent processing power and good internet should do, so delays are rare. The latency between almost any pair of servers with decent internet in the world is less than 300ms. Even if delays happen between some pairs of nodes, there would have to be a global internet problem in order for a majority of the network to have these issues.

In essence, we want to optimize the system as much as possible for the average case, while still being robust in the worst case.
full member
Activity: 351
Merit: 134
Our entire algorithm is focused around the idea that independent transactions can be treated independently. So in that round all the other transactions will be accepted, except those that were performed on accounts which had unresponsive lockers.

Ok, so, 10/20 go through in this round.

...What happens when the 10 which were offline, weren't actually offline at all, but just delayed due to latency and they produce a fork of the just submitted round with the last 10/20 transaction in it?
newbie
Activity: 37
Merit: 0
There's no way to create inconsistent transactions like this, the worst that could happen is that a locker signs a valid transaction that doesn't have time to be broadcast to the network.

What happens when lockers aren't available to sign for their allotted accounts?

Say we have 20 transactions from 20 accounts, being allocated to 20 lockers, and 10 of them are offline?

The accounts need to wait for the next round; if the transactions are not signed by the lockers, they are invalid and won't be accepted by the network.

So that round closes empty, or we get 10/20 transactions in the round?
Our entire algorithm is focused around the idea that independent transactions can be treated independently. So in that round all the other transactions will be accepted, except those that were performed on accounts which had unresponsive lockers.
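As a rough illustration of that partial-round behavior, here is a Python sketch; the names and data structures are invented for the example, not taken from the paper.

```python
# Sketch of the independence claim above: transactions touch disjoint
# accounts, so a round keeps every transaction whose assigned locker
# responded and defers only those whose lockers were unresponsive.
def close_round(transactions: dict[str, str],
                responsive_lockers: set[str]) -> tuple[list[str], list[str]]:
    """transactions maps tx_id -> assigned locker id."""
    accepted = [tx for tx, locker in transactions.items()
                if locker in responsive_lockers]
    deferred = [tx for tx, locker in transactions.items()
                if locker not in responsive_lockers]
    return accepted, deferred

# The 20-transaction example from the thread: 10 of 20 lockers respond.
txs = {f"tx{i}": f"locker{i}" for i in range(20)}
online = {f"locker{i}" for i in range(10)}
accepted, deferred = close_round(txs, online)
assert len(accepted) == 10 and len(deferred) == 10
```

The round neither closes empty nor stalls: the independent half goes through, and the deferred half simply waits for the next round's lockers.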
full member
Activity: 351
Merit: 134
There's no way to create inconsistent transactions like this, the worst that could happen is that a locker signs a valid transaction that doesn't have time to be broadcast to the network.

What happens when lockers aren't available to sign for their allotted accounts?

Say we have 20 transactions from 20 accounts, being allocated to 20 lockers, and 10 of them are offline?

The accounts need to wait for the next round; if the transactions are not signed by the lockers, they are invalid and won't be accepted by the network.

So that round closes empty, or we get 10/20 transactions in the round?