PART 2
Fuzzy: Now that the witnesses are not connected to an individual who needs to campaign for any specific reason – it's now a purely technical, politically neutral role. It seems to me that there's no need for us to worry about anonymous witnesses. Is this the case? Would anonymous witnesses, like someone operating behind a VPN, be more beneficial, or would there be downsides?
Bytemaster: I think it's beneficial to have one or two anonymous witnesses. Not enough that they could collude to be a danger, but enough so that there's at least somebody who's still an elected witness who isn't taken out with all the other raids. That person would be able to produce enough blocks to recover the blockchain in a timely manner, versus having everyone go out and then, "Who's in charge?" right? [With one or two anonymous witnesses] we don't need to hold an entirely new election before we begin [again, after such an attack].
One or two is probably a good idea, but other than that [witnesses] should probably be well-known. And this is another issue I'd like to bring up. People say, "If they're publicly known they can be denial-of-service attacked." Just because the person behind the server is publicly known doesn't mean the server or its IP address is publicly known or even directly on the network. Just because a witness is public doesn't mean the server is public. Just because you can take out an individual that was elected doesn't mean that you can take out their server. It's entirely possible for someone to arrange to have their server set up through several other people, with no ties to them directly, but for which they are responsible and which they control. In that particular instance, even if the government raided them, shot them – they still wouldn't know where the server was to shut down the witness. That type of thing is entirely possible, and we need to think about those types of solutions versus the naive approach of, "Let's just add more witnesses because that will make it more robust."
At the end of the day, more witnesses means less voter attention on each witness and lower quality per witness, and it comes down to a basic engineering problem: Do you have 100 unreliable parts or 3 very reliable parts? The probability of failure is based on how those parts all combine. There's also the ability to coordinate and the speed at which you respond. I bring up all these points because, from a technical perspective, 17 witnesses is more than enough redundancy to protect against technical failure, and it's probably sufficient redundancy to protect against government attacks if the witnesses are located in a handful of different jurisdictions.
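[As a rough illustration of the "100 unreliable parts or 3 very reliable parts" trade-off, here is a minimal sketch; the uptime figure and the assumption that node failures are independent are purely illustrative, not measurements of any real network.]

```python
# Back-of-the-envelope sketch of how redundancy saturates.
from math import comb

def p_quorum_lost(n, uptime):
    """Probability that a simple majority of n independent nodes is NOT up."""
    quorum = n // 2 + 1
    return sum(
        comb(n, k) * uptime**k * (1 - uptime)**(n - k)
        for k in range(quorum)  # every outcome with fewer than `quorum` nodes up
    )

for n in (3, 17, 101):
    print(n, p_quorum_lost(n, uptime=0.99))
# The failure probability is already negligible with a handful of reliable,
# independent nodes; pushing n from 17 to 101 buys almost nothing. And the
# independence assumption is exactly what correlated raids or shared
# jurisdictions break, which is why jurisdiction matters more than raw count.
```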
The cost versus risk needs to be measured. People are very, very bad at estimating probabilities. People buy lottery tickets on the mistaken belief that the probability of winning is greater than it actually is. People avoid flying, and yet drive, because they think that driving is inherently safer than flying, when we know that the probabilities of all these things are the opposite [of our intuitions]. We underestimate the extreme cases and overestimate the lower ones. If we keep those types of things in mind, it explains a lot of the irrationality in people's perception of the risks and the costs. An example is insulation in your home: if you have no insulation you have a very inefficient home, you lose a lot of heat. Put in the first little bit of insulation and it makes a huge difference. But eventually you could spend $1,000,000 adding insulation to your home and it would make no difference whatsoever in your home's ability to retain heat. It's the same thing with security: eventually there's a point of diminishing returns, where you're adding cost yet getting no benefit. It gets more and more expensive for less and less benefit. That's what we need to keep in mind in all aspects of the system.
As I say these things, I am not arguing for centralization. I want a robust system that's going to serve the purpose of securing life, liberty and property and not be unnecessarily burdened. That's where I'm coming from. That's what I'm trying to achieve. I hope that I'm not losing people who are big fans of decentralization. I am a huge fan of it. But decentralization is a technique, and I don't want to get lost in a technique. I want to stay focused on the why, the goal. Are we achieving the goal? I think that's what we're trying to do with BitShares, and that's what sets BitShares apart from a lot of other systems.
Fuzzy: During my IT courses we'd talk in terms of project management and IT security; you're basically describing the risk assessment matrices taught to IT students. You have to find that fine line. You can always overdo security. The question is, "Is it worth it?" There might be some instances where, yes, the benefits outweigh the costs, but others where the opposite is true.
Another question, "What are the best countries in the world for liberty and network speed?" I don't know if you've done any research on this.
Bytemaster: I'm too swamped with technical stuff to do research into all the political stuff. I'm kind of stuck in the United States. I hope someone else will do that research.
I would like to bring up another point: Just because you have a product that meets all the technical specifications and [makes] all the proper risk-reward trade-offs to maximize the value of the system doesn't mean that it'll necessarily be the best-selling thing. This is where the debate gets interesting. Why do people buy a car with 300 horsepower when speed limits and reckless driving laws mean that you could get by with a car with a 120 horsepower engine and better gas mileage? [That] type of irrational, feel-good value is something we should contemplate. That impacts how well we market something. A lot of companies do things with their products that have no technical [or functional] benefit, but they cause the product to sell better. An example of this: in the '50s and '60s, magazines that had inappropriate material on their covers would be wrapped in paper. Some companies realized they could sell more of a legitimate magazine that didn't have that type of stuff if they wrapped their own magazines in similar paper. The paper wasn't serving any purpose other than [making it seem] like the content was forbidden, and therefore it drove interest and drove sales. There are other situations where companies do stuff that even has a negative impact on performance simply because it sells better.
I don't know the answers to these questions. I think it's a market-research thing. We all have ideas about what would sell better to us, but we have personal biases making it hard to tell what will sell best to the masses and to different target audiences. We know who the loud [and] vocal people are, but do they actually carry any weight? Or do fundamentals matter, like profitability? If you can make the blockchain profitable, is that more enticing than saying, "Well, we're not profitable, but we're super secure"? Those are the types of [conversations] we need to have.
I mention all this because the people here on this call are going to be the voters. They're going to have to vote on who to hire as witnesses and committee members and as workers. These are things that you need to think about and consider. My job in these mumble sessions is to help provide perspective and help educate so we can all make better decisions and not just vote out of gut-reflex. The more educated the voters are the better the system will be.
This brings me to another point: we've got a fourth role in the system that hasn't really been talked about much, because it's not an explicitly enumerated role. We've got the witnesses, the committee members and the workers – we've talked about those [roles] lots. We also have the proxy voters. In political terms you'd probably call them 'delegates'. These are the people through whom most people have set their accounts to proxy-vote.
You can view these as the mining pools of Delegated Proof of Stake. We want as many of those people as possible. They can meet and make decisions. If we had 100 or 150 people that controlled 85% of the indirect vote, they would be able to quickly discuss policy and make smart choices about who all the other players in the system are. We can have as many of those as we want. We don't have to centralize on a handful of witnesses or committee members. Instead we can pick leaders of communities and businesses and break it up as much as we want to get as many votes as possible concentrated into those hands. And have those people decide how much technical redundancy is necessary. In fact, if you have 150 people that collectively control over 51% of those who vote through proxy and something happens to the network, those people can meet in a mumble session, they can discuss what to do, they can produce a new block that's signed by 51% of the voting stakeholders, [and that block can] appoint new witnesses [and] then the network can continue. I would really, really like to see a robust set of proxy positions – people who decide to take on the responsibility of vetting all the people in the technical positions. We should have as many of them as we want. Of course, this is maximally decentralized: everyone can vote with their own stake or vote in a pool [via proxy], like mining solo or in a pool with Bitcoin. I think that's what we want. We want more than 5 or 6 pools; we probably want 100 pools.
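[A minimal sketch of how proxied stake might resolve into witness approval totals; the account names, stakes and single-level proxy rule below are hypothetical simplifications, not the actual BitShares rules.]

```python
# Toy model of DPOS proxy voting: each account either approves witnesses
# directly or points at a proxy whose approvals its stake follows.
accounts = {
    # name: (stake, proxy or None, witnesses approved directly)
    "alice": (1000, None,    ("wit-a", "wit-b")),
    "bob":   ( 250, "alice", ()),              # bob follows alice's votes
    "carol": (5000, None,    ("wit-b", "wit-c")),
    "dave":  ( 100, "carol", ()),              # dave follows carol's votes
}

def tally_witness_votes(accounts):
    """Add each account's stake to the witnesses chosen by its proxy (or itself)."""
    totals = {}
    for stake, proxy, own_votes in accounts.values():
        votes = accounts[proxy][2] if proxy else own_votes
        for witness in votes:
            totals[witness] = totals.get(witness, 0) + stake
    return totals

print(tally_witness_votes(accounts))
# {'wit-a': 1250, 'wit-b': 6350, 'wit-c': 5100} -- re-pointing bob or dave at a
# different proxy changes these totals in the very next tally.
```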
Fuzzy: These pools would have different dynamics because instead of mining it's voting, so these voting stakes can change quickly, whereas mining pools can't.
Bytemaster: People say, "If all the mining pools get shut down, someone else will just start one up." But the time it takes to start a new mining pool is much longer than it takes to point your vote at a new proxy. Mining pools and mining have costs associated with [them]; the reason mining pools don't work and are ultimately insecure is that if you shut them all down, it's not profitable to mine solo – you need a mining pool to be profitable. With DPOS and voting, it's just as profitable to vote solo as it is to vote through a proxy. There's no extra overhead or cost associated with solo voting. This means you can have 100 proxies and not have to worry about profitability concerns. But if you had 100 mining pools, each mining pool would have very high variance, and that would impact profitability.
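[To make the variance point concrete, here is an illustrative simulation; the hashrate share, block reward and block cadence are all assumed numbers, not real figures.]

```python
# Why "solo vs pool" is a profitability question for mining but not for voting.
import random

random.seed(1)

blocks_per_month = 30 * 24 * 6       # roughly Bitcoin's cadence, illustrative
my_hashrate_share = 1 / 50_000       # a small solo miner, assumed
block_reward = 6.25

def solo_month():
    """One month of solo mining: you either win whole blocks or earn nothing."""
    wins = sum(random.random() < my_hashrate_share for _ in range(blocks_per_month))
    return wins * block_reward

def pooled_month():
    """One month of pooled mining: a steady pro-rata share, tiny variance."""
    return blocks_per_month * my_hashrate_share * block_reward

print([solo_month() for _ in range(12)])  # mostly 0.0, occasionally a full block
print(pooled_month())                     # same expected value, paid smoothly
# Voting has no analogue of this variance: casting your own vote costs nothing
# extra, so 100 proxies don't face the problem 100 small mining pools would.
```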
Fuzzy: Deludo asks, "Does it make sense to pay proxies? Is there going to be [such] a functionality or do you foresee a need for it?"
Bytemaster: I don't think it makes sense to pay them, since they have a financial interest in the system and they volunteered to do it. Generally speaking, it doesn't take a whole lot of time – they already have to vote anyway if they want to vote their own stake, so just allowing other people to follow them makes good sense.
Crypto: Thomas asks, "Is there a way to make your voting records public? If you were to say, 'I would like the job of being a proxy, I'm a member of the community who pays attention,' would there be a way for everyone to verify, every time that you voted, who you voted for?"
Bytemaster: It's on a blockchain, all votes are public and all stake is public.
Fuzzy: Unless it's the blinded stake, but then the voting doesn't matter, correct?
Bytemaster: Unless you're using confidential transactions, in which case you're not voting.
Crypto: Thanks.
Fuzzy: On the collateral bid idea: from what I understand, it's just the witnesses that put in the highest collateral [who get the default votes].
Bytemaster: The idea that's on the table is: if you want to become a witness, you post collateral, and anyone who doesn't vote otherwise votes, by default, for the witnesses with the highest collateral. The danger there is that, due to voter apathy, the highest collateral is going to win. This means you end up with a system that's more similar to how Peercoin or NXT operate, with the proactive voters being the backup plan and having to override all the defaults. It, more or less, means that the system will be ruled by the wealthy rather than ruled by the proactive consensus. I think it's a decent idea by way of filtering people. And it's entirely possible to put money into a vesting account balance, which basically is your commitment to the network that you're not going to withdraw your funds for the next six months. If someone elects you, they know you're pre-committed. You get voted in based upon your commitment. That's a perfectly legitimate way of campaigning. The only reason for someone to do that is if the financial incentive for being a witness is high enough to justify locking up their funds in order to get the job. Which means they're probably going to do a calculation of, "Alright, how much is it going to cost me to run a node? How much time am I going to have to put in? And how much capital will I have to tie up?" The end result being, if you require people to tie up capital, you're going to have to pay them more, because the interest rate on that capital is factored into their pay. So you add a cost to being a witness, but it doesn't necessarily give you any additional security, because the people voting for them should already be vetting them.
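[A hypothetical back-of-the-envelope for the point that required collateral gets priced into witness pay; every figure below is an assumption for illustration.]

```python
# If a witness must lock capital in a vesting balance, the network ends up
# paying for the opportunity cost of that capital on top of operating costs.
collateral_locked       = 100_000   # stake locked in a vesting balance (assumed)
annual_opportunity_rate = 0.05      # return the candidate could earn elsewhere
months_locked           = 6
server_cost_per_month   = 150
hours_per_month         = 20
hourly_rate             = 40

opportunity_cost = collateral_locked * annual_opportunity_rate * months_locked / 12
operating_cost   = months_locked * (server_cost_per_month + hours_per_month * hourly_rate)

print(opportunity_cost)                   # 2500.0 just for tying up the capital
print(opportunity_cost + operating_cost)  # 8200.0 minimum pay to break even
# The collateral requirement adds cost without adding security beyond what the
# voters' own vetting already provides, which is the argument being made above.
```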
We have a lot of witnesses right now that are very technically competent and very honest but who don't have a lot of money. Most whales don't want to run a witness. The assumption that those with money want to do the dirty work of running a witness is a fallacy – a mistake made by a lot of the other proof-of-stake coins. That's the beauty of delegated proof of stake: you can have a wealthy person back you with their vote, and then you can do the job. Getting someone to vote for you is putting something [forth] as collateral; the only difference is you don't have anything to lose other than the vote, your income stream and your reputation. I think people undervalue reputation and its importance. If you elect people that actually value their reputation and have a career and a public face, they won't be able to do future business if they harm the network and earn a bad rap. That reputation is on the line when they do this job, and it's going to follow them around the rest of their lives. That is worth far more than any collateral you could ask them to put up.
One last question from Tuck, "What's the difference between a bridge function and atomic cross-chain transactions?" A bridge means that there is a moment in time in which the bridge could rip you off. [With] atomic cross-chain trading there is no moment at which you can get ripped off. This is sort of getting back to the, well, "How secure do you need to be?" The probability of any particular exchange getting hacked or going down within a given minute is very, very small. But over the course of a year it's pretty high. The reason I think atomic cross-chain transactions are overdoing it is because it's looking at the risk-reward and making something very complicated and difficult to use to reduce that last fraction of a probability that the party that you're using for the bridge is going to turn corrupt and steal your money [during] that fraction of a second [while] you're trusting them.
With a bridge, you send them the money and they send you something else. There's no outstanding debt; it's a real quick transaction. It's sort of like the time between you handing the cashier your dollar and them handing you the drink. There's a moment in time where you no longer have the dollar yet still don't have the drink, but are you worried about them stealing from you during that moment of time? No. But if you mailed someone cash and it took a day, the risks are higher. That's why I think bridges are a better value than atomic cross-chain transactions: atomic cross-chain transactions have a very high cost to reduce a very small risk. 90% of the risk is mitigated simply by reducing your period of exposure to minutes rather than hours or days or months.
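[An illustrative-only calculation of how shrinking the exposure window shrinks bridge risk; the 5% annual failure probability is assumed for the sake of the example.]

```python
# If the chance of a bridge operator failing is roughly constant over time,
# your risk is proportional to how long they hold your funds.
annual_failure_prob = 0.05               # assumed chance the operator fails in a year
minutes_per_year = 365 * 24 * 60

per_minute_hazard = annual_failure_prob / minutes_per_year

for window_minutes in (2, 60, 24 * 60, 30 * 24 * 60):
    print(window_minutes, window_minutes * per_minute_hazard)
# A two-minute bridge exposes you to roughly a one-in-five-million slice of the
# annual risk; leaving a balance with them for a month exposes you to ~0.4%.
# Atomic cross-chain trades remove even that sliver, but at a large cost in
# complexity -- which is the trade-off being argued above.
```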
Fuzzy: Deludo asks, "According to Toast, virtualized smart contracts can be almost as fast as natively implemented ones. How are transaction throughput, settlement speed, or cost affected by the virtualized versus the native way of providing smart contracts?"
Bytemaster: I can boil it down to one thing. Go to any language shootout and ask whether just-in-time compiled languages are faster or slower than native languages like C++. In the vast majority of cases native will be faster, but there are some corner cases in which the virtual, just-in-time compiled code can be faster. The bottom line is, from a technology perspective, you can go with a virtualized approach if your virtual machine is designed with just-in-time compilation in mind.
The challenge with all of these systems is to make them deterministic and to make sure that you can meter the costs. It's the metering of the cost that slows down the virtualization approach. Even if you do just-in-time compiling, you still have to count the instructions, you still have to count your time. It might be possible to do some really advanced techniques with preemptive interruption, where you just let it run for a millisecond and then interrupt it; if it's not done, you can discard it, and you don't have to count operations. There are lots of advanced techniques that can be put into the virtualized stuff. But the money, time and complexity involved in building those systems and then ensuring that they are deterministic in their behavior and bug-free is a very high barrier to entry. What that means is that today's [metered] virtualized systems have very slow performance, because they need to be very methodical and do a lot of extra operations. They're not just doing just-in-time compilation.
With a system like we have in BitShares, where all changes are basically approved, it's not just that we have it compiled, it's that we have a process for reviewing every single piece of code. We can analyze the algorithmic complexity in advance, and we can estimate the costs through benchmarking and set the fees accordingly. If you go to a completely generic system where anyone can submit [and run] code, you have to automate the process of analyzing the algorithmic complexity, of setting the fees, and of making sure that nothing bad happens as a result. That's where most of the complexity is. That's where most of the risk is. It's very much like the Apple app store: they look at all the apps and require a certain level of quality before they get on the chain, versus allowing anyone to put an infinite loop on the chain, or something that has bugs in it. Sure, you might pay for it with gas, but you have to pay the costs of tracking gas consumption and doing the metering. My short answer is: in theory, just-in-time compiled code can be just as fast as native, but there's extra overhead associated with metering and securing these systems that slows them down.
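[A toy sketch of what per-instruction metering looks like; the opcodes and gas costs are made up for illustration and do not correspond to any real virtual machine.]

```python
# Every operation decrements a budget and is checked before it runs -- exactly
# the per-step overhead that makes metered, generic execution slower.
class OutOfGas(Exception):
    pass

COSTS = {"ADD": 1, "MUL": 3, "STORE": 10}

def run_metered(program, gas):
    """Execute a list of (opcode, a, b) steps, charging gas for each one."""
    results = []
    for op, a, b in program:
        gas -= COSTS[op]                 # metering: count before doing the work
        if gas < 0:
            raise OutOfGas("budget exhausted; discard this execution")
        if op == "ADD":
            results.append(a + b)
        elif op == "MUL":
            results.append(a * b)
        elif op == "STORE":
            results.append((a, b))       # stand-in for persisting state
    return results, gas

print(run_metered([("ADD", 2, 3), ("MUL", 4, 5), ("STORE", 0, 20)], gas=20))
# ([5, 20, (0, 20)], 6) -- by contrast, reviewed, natively compiled operations
# can have their cost estimated once, up front, with no per-step accounting.
```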
Someone: [Summarizing the above:] Metering is keeping track of resource use so that people cannot use more than they've paid for. Determinism requires that any given input will always result in the same output. What would be the outcome of allowing indeterminism?
Bytemaster: It'd be like a Bitcoin hard fork. It's an unplanned split in the network based upon which nodes went which way. If you have nondeterministic code in a contract, then the nodes that go one way will be on one fork and the nodes that went the other way will be on a different one. If you start combining lots and lots of things, you might even shatter it such that there are 100 forks. That's the catastrophic failure that results from not having a deterministic means of validating smart contracts.
Someone: You're saying that creating a system that prevents this indeterminism is difficult?
Bytemaster: Yes. The reason we don't use floating point in blockchains is because it's not deterministic behavior, even with just one machine involved. When you create a virtual machine, you're defining everything in terms of integer operations and if statements, which we know will be deterministically evaluated. The more complexity you put into the system, the more ways this can happen: something as stupid as an uninitialized variable that is zero 99.99% of the time, but sometimes not, can cause a break in consensus. My point here is that complexity creates more opportunities for nondeterminism. That is the challenge with it. It's not impossible to create a deterministic, just-in-time compiled, highly-performant, metered language; it's just very difficult, time-consuming, and you really don't know if you've got it right.
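[A small sketch of the floating-point concern and the usual integer fixed-point workaround; the precision constant and amounts are illustrative.]

```python
# Consensus code prefers integer (fixed-point) arithmetic because float results
# can depend on evaluation order, compiler flags and hardware (e.g. FMA use),
# so two correct nodes may disagree on the last bit. Integers cannot.
amounts = [0.1] * 10

print(sum(amounts) == 1.0)         # False: the float total is 0.9999999999999999

# Fixed-point version: represent the asset in its smallest indivisible unit.
PRECISION = 10**8                  # e.g. 1 coin = 10^8 base units (assumed)
int_amounts = [10_000_000] * 10    # ten payments of 0.1 coin, stored as integers
print(sum(int_amounts) == PRECISION)   # True, and identical on every node
# Integer addition is exact and associative, so every node agrees on the result
# regardless of the order in which operations happen to be evaluated.
```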