
Topic: How Bytemaster Thinks - The Rationale Behind BitShares (Read 506 times)

legendary
Activity: 3976
Merit: 1421
Life, Love and Laughter...
Ah ok.  I'm an avid fan of trading first and foremost, and crypto-tard second.  So I guess I'm not worthy...  And I'm really lazy.

But good luck to you guys.  It's good that different groups are pushing the tech forward, trying to outdo each other.
hero member
Activity: 504
Merit: 504
Nope it's TLDR because we only want the truly worthy to discover what's in it.

Smiley

legendary
Activity: 3976
Merit: 1421
Life, Love and Laughter...
^ True.  And it's TLDR.  Mind to post any cliff notes?
hero member
Activity: 504
Merit: 504
Reserved
hero member
Activity: 504
Merit: 504
PART 2

Fuzzy: Now that the witnesses are not connected to an individual who needs to campaign for any specific reason – it's now a purely technical, politically neutral role – it seems there's no need for us to worry about anonymous witnesses. Is this the case? Would anonymous witnesses, like someone using a VPN, be more beneficial, or would there be downsides?

Bytemaster: I think it's beneficial to have one or two anonymous witnesses. Not enough that they could collude to be a danger, but enough that there's at least somebody who's still an elected witness who isn't taken out along with all the others in a raid. That person would be able to produce enough blocks to recover the blockchain in a timely manner, versus having everyone go out and then, "Who's in charge?" right? [With one or two anonymous witnesses] we don't need to hold an entirely new election before we begin [again, after such an attack].

One or two is probably a good idea, but other than that [witnesses] should probably be well-known. And this is another issue I'd like to bring up. People say, "If they're publicly known they can be denial-of-service attacked." Just because the person behind the server is publicly known doesn't mean the server or its IP address is publicly known or even directly on the network. Just because a witness is public doesn't mean the server is public. Just because you can take out an individual that was elected doesn't mean that you can take out their server. It's entirely possible for someone to arrange to have their server set up through several other people, with no ties to them directly, but for which they are responsible and in control. In that particular instance, even if the government raided them, shot them – they still wouldn't know where the server was to shut down the witness. That type of thing is entirely possible, and we need to think about those types of solutions versus the naive approach of, "Let's just add more witnesses because that will make it more robust."

At the end of the day, more witnesses means less voter attention on each witness and lower quality per witness, and it comes down to a basic engineering problem: Do you have 100 unreliable parts or 3 very reliable parts? The probability of failure is based on how those parts all combine. There's also the ability to coordinate, and the speed at which you respond. I bring up all these points because, from a technical perspective, 17 witnesses is more than enough redundancy to protect against technical failure, and it's probably sufficient redundancy to protect against government attacks if the witnesses are located in a handful of different jurisdictions.
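The "100 unreliable parts or 3 very reliable parts" trade-off can be made concrete with a little binomial arithmetic. The uptime figures below (99% for a small, well-vetted witness set; 70% for a large, poorly vetted one) and the assumption of independent failures are illustrative choices, not numbers from the transcript – coordinated raids would make failures correlated and change the picture:

```python
from math import comb

def p_majority_down(n, p_down):
    """Probability that a strict majority of n independent witnesses
    are offline at once, given each is down with probability p_down."""
    need = n // 2 + 1
    return sum(comb(n, k) * p_down**k * (1 - p_down)**(n - k)
               for k in range(need, n + 1))

# 17 well-vetted witnesses at 99% uptime each:
few_reliable = p_majority_down(17, 0.01)
# 100 poorly vetted witnesses at 70% uptime each:
many_unreliable = p_majority_down(100, 0.30)
print(f"{few_reliable:.2e}  {many_unreliable:.2e}")
```

Under these assumptions the small, reliable set loses its majority far less often; the conclusion flips only if per-witness reliability stays high as the set grows, which is exactly why the voter-attention argument matters.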

The cost versus risk needs to be measured. People are very, very bad at estimating probabilities. People buy lottery tickets on the mistaken belief that the probability of winning is greater than it actually is. People avoid flying, and yet drive, because they think that driving is inherently safer than flying, when we know that the probabilities of all these things are the opposite [of our intuitions]. We underestimate the extreme cases and overestimate the lower ones. If we keep those types of things in mind, it explains a lot of the irrationality in people's perception of the risks and the costs. An example is insulation in your home: if you have no insulation you have a very inefficient home; you lose a lot of heat. Put in the first little bit of insulation and it makes a huge difference. But eventually you can spend $1,000,000 adding insulation to your home and it makes no difference whatsoever in the ability of your home to retain heat. It's the same with security: eventually there's a point of diminishing returns, where you're adding cost yet getting no benefit. It gets more and more expensive for less and less benefit. That's what we need to keep in mind in all aspects of the system.

As I say these things, I am not arguing for centralization. I want a robust system that's going to serve the purpose of securing life, liberty and property and not be unnecessarily burdened. That's where I'm coming from. That's what I'm trying to achieve. I hope that I'm not losing people who are big fans of decentralization. I am a huge fan of it. But decentralization is a technique, and I don't want to get lost in a technique. I want to stay focused on the why, the goal. Are we achieving the goal? I think that's what we're trying to do with BitShares, and that's what sets BitShares apart from a lot of other systems.

Fuzzy: In my IT courses we'd talk in terms of project management and IT security; you're basically describing the risk-assessment matrices taught in class to IT students. You have to find that fine line. You can always overdo security. The question is, "Is it worth it?" There might be some instances where, yes, the benefits outweigh the costs, but others where the opposite is true.

Another question, "What are the best countries in the world for liberty and network speed?" I don't know if you've done any research on this.

Bytemaster: I'm too swamped with technical stuff to do research into all the political stuff. I'm kind of stuck in the United States. I hope someone else will do that research.

I would like to bring up another point: Just because you have a product that meets all the technical specifications and [makes] all the proper risk-reward trade-offs to maximize the value of the system doesn't mean that it'll necessarily be the best-selling thing. This is where the debate gets interesting. Why do people buy a car with 300 horsepower when speed limits and reckless-driving laws mean that you could get by with a 120 horsepower engine and better gas mileage? [That] type of irrational feel-good value is something we should contemplate. That impacts how well we market something. A lot of companies do things with their products that have no technical [or functional] benefit, but they cause the product to sell better. An example of this: in the '50s and '60s, magazines that had inappropriate material on their covers would be wrapped in paper. Some companies realized they could sell more of a legitimate magazine that didn't have that type of stuff if they wrapped their own magazines in similar paper. The paper wasn't serving any purpose other than [making it seem] like the magazine was forbidden; therefore it drove interest and drove sales. There are other situations where companies do stuff that even has a negative impact on performance simply because it sells better.

I don't know the answers to these questions. I think it's a market-research thing. We all have ideas about what would sell better to us, but we have personal biases, making it hard to tell what will sell best to the masses and to different target audiences. We know who the loud [and] vocal people are, but do they actually carry any weight? Or do fundamentals, like profitability, matter more? If you can make the blockchain profitable, is that more enticing than saying, "Well, we're not profitable, but we're super secure"? Those are the types of [conversations] we need to have.

I mention all this because the people here on this call are going to be the voters. They're going to have to vote on who to hire as witnesses and committee members and as workers. These are things that you need to think about and consider. My job in these mumble sessions is to help provide perspective and help educate so we can all make better decisions and not just vote out of gut-reflex. The more educated the voters are the better the system will be.

This brings me to another point: we've got a fourth role in the system that hasn't really been talked about much because it's not an explicitly enumerated role. We've got the witnesses, the committee members and the workers – we've talked about those [roles] lots. We also have the proxy voters. In political terms you'd probably call them 'delegates'. These are the people through whom most others have set their accounts to proxy-vote.

You can view these as the mining pools of Delegated Proof of Stake. We want as many of those people as possible. They can meet and make decisions. If we had 100 or 150 people that controlled 85% of the indirect vote, they would be able to quickly discuss policy and make smart choices about who all the other players in the system are. We can have as many of those as we want. We don't have to centralize on a handful of witnesses or committee members. Instead we can pick leaders of communities and businesses and break it up as much as we want to get as many votes concentrated into those hands as possible, and have those people decide how much technical redundancy is necessary. In fact, if you have 150 people that collectively control over 51% of those who vote through proxy and something happens to the network, those people can meet in a mumble session, they can discuss what to do, they can produce a new block that's signed by 51% of the voting stakeholders, [and that block can] appoint new witnesses [and] then the network can continue. I would really, really like to see a robust set of proxy positions – people who decide to take on the responsibility of vetting all the people in the technical positions. We should have as many of them as we want. Of course, this is maximally decentralized: everyone can vote with their own stake or vote in a pool [via proxy], like mining solo or in a pool with Bitcoin. I think that's what we want. We want more than 5 or 6 pools; we probably want 100 pools.
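Tallying proxied votes is mechanically simple: follow each account's proxy pointer to whoever actually casts its vote, then sum the stake. The account table and names below are hypothetical, and real chains typically cap proxy-chain depth; this is only a sketch of the "voting pool" idea:

```python
# Hypothetical account table: name -> (stake, proxy account or None).
accounts = {
    "alice":  (100, "pool_a"),
    "bob":    ( 50, "pool_a"),
    "carol":  ( 70, None),     # votes her own stake
    "pool_a": ( 10, None),     # a proxy, i.e. a "voting pool"
}

def resolve_proxy(name, seen=None):
    """Follow proxy pointers to the account that actually casts the vote.
    Stops at self-voters and guards against proxy cycles."""
    seen = seen if seen is not None else set()
    _, proxy = accounts[name]
    if proxy is None or proxy in seen:
        return name
    seen.add(name)
    return resolve_proxy(proxy, seen)

tally = {}
for name, (stake, _) in accounts.items():
    voter = resolve_proxy(name)
    tally[voter] = tally.get(voter, 0) + stake

print(tally)  # {'pool_a': 160, 'carol': 70}
```

Note how quickly influence can shift: alice re-pointing her proxy is a single transaction, which is the contrast with mining pools drawn below.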

Fuzzy: These pools would have different dynamics, because instead of mining it's voting. So these voting stakes can change quickly, whereas mining pools' can't.

Bytemaster: People say, "If all the mining pools get shut down, someone else will just start one up." But the time it takes to start a new mining pool is much longer than it takes to point your vote at a new proxy. Mining pools and mining have costs associated with [them]. The reason mining pools don't work and are ultimately insecure is that if you shut them all down, solo mining isn't profitable – you need a mining pool to be profitable. With DPOS and voting, it's just as profitable to vote solo as it is to vote through a proxy. There's no extra overhead or cost associated with solo voting. This means you can have 100 proxies and not have to worry about profitability concerns. But if you had 100 mining pools, each mining pool would have a very high variance, and that would impact profitability.

Fuzzy: Deludo asks, "Does it make sense to pay proxies? Is there going to be [such] a functionality or do you foresee a need for it?"

Bytemaster: I don't think it makes sense to pay them, since they have a financial interest in the system and they volunteered to do it. Generally speaking, it doesn't take a whole lot of time – they already have to vote anyway if they want to vote their own stake, so just allowing other people to follow them makes good sense.

Crypto: Thomas asks, "Is there a way to make your voting records public? If you were to say, 'I would like the job of being a proxy; I'm a member of the community who pays attention,' would there be a way for everyone to verify, every time you voted, who you voted for?"

Bytemaster: It's on a blockchain, all votes are public and all stake is public.

Fuzzy: Unless it's the blinded stake, but then the voting doesn't matter, correct?

Bytemaster: Unless you're using confidential transactions, in which case you're not voting.

Crypto: Thanks.

Fuzzy: The collateral bid idea: from what I understand, it's just the witnesses that put in the highest collateral [that get the default votes].

Bytemaster: The idea that's on the table is: if you want to become a witness, you post collateral, and anyone who doesn't vote otherwise votes, by default, for the witnesses with the highest collateral. The danger there is voter apathy: the highest collateral is going to win. This means you end up with a system that operates more like Peercoin or NXT, with the proactive voters being the backup plan and having to override all the defaults. It more or less means that the system will be ruled by the wealthy rather than ruled by the proactive consensus. I think it's a decent idea as a way of filtering people. And it's entirely possible to put money into a vesting account balance, which is basically your commitment to the network that you're not going to withdraw your funds for the next six months. If someone elects you, they know you're pre-committed. You get voted in based upon your commitment. That's a perfectly legitimate way of campaigning. The only reason for someone to do that is if the financial incentive for being a witness is high enough to justify locking up their funds in order to get the job. Which means they're probably going to do a calculation of, "Alright, how much is it going to cost me to run a node? How much time am I going to have to put in there? And how much capital will I have to tie up?" The end result being: if you require people to tie up capital, you're going to have to pay them more, because the interest rate on that capital is factored into their pay. So you add a cost to being a witness, but it doesn't necessarily give you any additional security, because the people voting for them should already be vetting them.
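The "interest rate on tied-up capital is factored into their pay" point is simple arithmetic. The collateral size, opportunity-cost rate and lock-up period below are hypothetical figures, not from the transcript:

```python
def extra_pay_required(collateral, annual_rate, months_locked):
    """Opportunity cost of locked collateral: the minimum extra pay a
    rational candidate would demand on top of node-running costs."""
    return collateral * annual_rate * (months_locked / 12)

# Hypothetical: 100,000 BTS locked for 6 months at a 10%/year opportunity cost
print(f"{extra_pay_required(100_000, 0.10, 6):.0f}")  # 5000
```

That extra pay is pure cost to the network; it buys no security that voter vetting wasn't already providing, which is the objection being made.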

We have a lot of witnesses right now that are very technically competent and very honest but who don't have a lot of money. Most whales don't want to run a witness. The assumption that those with money want to do the dirty work of running a witness is a fallacy – a mistake made by a lot of the other proof-of-stake coins. That's the beauty of delegated proof of stake: you can have a wealthy person back you with their vote, and then you can do the job. Getting someone to vote for you is putting something [forth] as collateral; the only difference is you don't have anything to lose other than the vote, your income stream and your reputation. I think people undervalue reputation and the importance of it. If you elect people that actually value their reputation and have a career and a public face – they won't be able to do future business if they harm the network and earn a bad rap. That reputation is on the line when they do this job, and it's going to follow them around for the rest of their lives. That is worth far more than any collateral you could ask them to put up.

One last question, from Tuck: "What's the difference between a bridge function and atomic cross-chain transactions?" A bridge means that there is a moment in time in which the bridge could rip you off. [With] atomic cross-chain trading there is no moment at which you can get ripped off. This is sort of getting back to, well, "How secure do you need to be?" The probability of any particular exchange getting hacked or going down within a given minute is very, very small, but over the course of a year it's pretty high. The reason I think atomic cross-chain transactions are overdoing it is because, looking at the risk-reward, they make something very complicated and difficult to use in order to reduce that last fraction of a probability that the party you're using as the bridge will turn corrupt and steal your money [during] that fraction of a second [while] you're trusting them.

With a bridge, you send them the money and they send you something else. There's no outstanding debt; it's a real quick transaction. It's sort of like the time between you handing the cashier your dollar and them handing you the drink. There's a moment in time where you don't have the dollar yet still don't have the drink, but are you worried about them stealing from you during that moment? No. But if you mailed someone cash and it took a day, the risks are higher. That's why I think bridges are a better value than atomic cross-chain transactions: atomic cross-chain transactions have a very high cost to reduce a very small risk. 90% of the risk is mitigated simply by reducing your period of exposure to minutes rather than hours or days or months.
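The exposure-window argument can be sketched numerically. The per-minute hazard rate below is a made-up figure; the point is only how the probability of being ripped off scales with how long you trust the counterparty:

```python
def compromise_risk(hazard_per_minute, minutes_exposed):
    """P(the counterparty fails or steals at least once during the window),
    assuming a constant, independent per-minute hazard rate."""
    return 1 - (1 - hazard_per_minute) ** minutes_exposed

per_min = 1e-6                                      # hypothetical hazard rate
bridge  = compromise_risk(per_min, 2)               # ~2-minute bridge trade
custody = compromise_risk(per_min, 365 * 24 * 60)   # funds parked for a year
print(f"bridge: {bridge:.1e}, year-long custody: {custody:.2f}")
```

With these numbers the year-long exposure carries a risk hundreds of thousands of times larger than the two-minute bridge trade, which is the "very small within a given minute, pretty high over a year" observation made quantitative.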

Fuzzy: Deludo asks, "According to Toast, virtualized smart contracts can be almost as fast as natively implemented ones. How are transaction throughput, settlement speed, or cost affected by the virtualized versus native way of providing smart contracts?"

Bytemaster: I can boil it down to one thing. Go to any language shootout and ask whether just-in-time-compiled languages are faster or slower than native languages, like C++. In the vast majority of cases native will be faster, but there are some corner cases in which the virtual, just-in-time-compiled code can be faster. The bottom line is, from a technology perspective, you can go with a virtualized approach if your virtual machine is designed with just-in-time compilation in mind.

The challenge with all of these systems is to make them deterministic and to make sure that you can meter the costs. It's the metering of the cost that slows down the virtualization approach. Even if you do just-in-time compiling, you still have to count the instructions; you still have to count your time. It might be possible to do some really advanced techniques with preemptive interruption, so you just let it run for a millisecond and then interrupt it. If it's not done, you can discard it; you don't care about [the wasted] operations. There are lots of advanced techniques that can be put into the virtualized stuff. But the money and time and complexity involved in building those systems, and then ensuring that they are deterministic in their behavior and bug-free, is a very high barrier to entry. What that means is today's [metered] virtualized systems have very slow performance, because they need to be very methodical and do a lot of extra operations. They're not just doing just-in-time compilation.
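A toy sketch of what metering looks like: charge every instruction against a gas budget and abort, discarding partial work, when it runs out. This is an illustrative stack machine, not BitShares or Ethereum code – the point is that the budget check and decrement happen on every single operation:

```python
class OutOfGas(Exception):
    pass

def run_metered(program, gas_limit):
    """Toy stack machine that charges one unit of gas per instruction –
    the per-operation bookkeeping that makes metered VMs slower."""
    stack, gas = [], gas_limit
    for op, *args in program:
        if gas <= 0:
            raise OutOfGas("budget exhausted; partial work is discarded")
        gas -= 1                    # the metering step itself
        if op == "push":
            stack.append(args[0])
        elif op == "add":           # integer ops only: deterministic
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack, gas

prog = [("push", 2), ("push", 3), ("add",)]
print(run_metered(prog, 10))  # ([5], 7)
```

Running the same program with `gas_limit=2` raises `OutOfGas` before the `add` executes, which is exactly the "let it run, then discard it" behavior described above.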

With a system like we have in BitShares, where all changes are basically approved, it's not just that we have it compiled; it's that we have a process for reviewing every single piece of code. We can analyze the algorithmic complexity in advance, and we can estimate the costs through benchmarking and set the fees accordingly. If you go to a completely generic system where anyone can submit [and run] code, you have to automate the process of analyzing the algorithmic complexity, of setting the fees, and of making sure that nothing bad happens as a result. That's where most of the complexity is. That's where most of the risk is. It's very much like the Apple app store: they look at all the apps and require a certain level of quality before they get on the chain, versus allowing anyone to put an infinite loop on the chain, or something that has bugs in it. Sure, you might pay for it with gas, but you have to pay the costs of tracking gas consumption and doing the metering. My short answer is: in theory, just-in-time-compiled code can be just as fast as native, but there's extra overhead associated with metering and securing these systems that slows them down.

Someone: [Summarization of the above: Metering is keeping track of resource use so that people cannot use more than they've paid for. Determinism requires that any given input always result in the same output.] What would be the outcome of allowing nondeterminism?

Bytemaster: It'd be like a Bitcoin hardfork: an unplanned split in the network based upon which nodes went which way. If you have nondeterministic code in a contract, then the nodes that go one way will be on one fork and the nodes that go the other way will be on a different one. If you start combining lots and lots of things, you might even shatter it such that there are 100 forks. That's the catastrophic failure that results from not having a deterministic means of validating smart contracts.

Someone: You're saying that creating a system that prevents this nondeterminism is difficult?

Bytemaster: Yes. The reason we don't use floating point in blockchains is because its behavior is not deterministic, even with just one machine involved. When you create a virtual machine, you're defining everything in terms of integer operations and if statements, which we know will be deterministically evaluated. The more complexity you put into the system, the more can go wrong: something as stupid as an uninitialized variable – zero 99.99% of the time, but sometimes not – can cause a break in consensus. My point here is that complexity creates more opportunities for nondeterminism. That is the challenge. It's not impossible to create a deterministic, just-in-time-compiled, highly performant, metered language; it's just very difficult and time-consuming, and you really don't know if you've got it right.
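The floating-point problem is easy to demonstrate even within one language on one machine: float addition is not associative, so two nodes that evaluate the same sum in a different order can disagree on state, while integer arithmetic cannot:

```python
# Two nodes summing the same values in a different order can disagree,
# because floating-point addition is not associative:
left_first  = (0.1 + 0.2) + 0.3
right_first = 0.1 + (0.2 + 0.3)
print(left_first == right_first)  # False: 0.6000000000000001 vs 0.6

# Fixed-point integers (amounts in smallest base units) are associative,
# so every node agrees regardless of evaluation order:
assert (1000 + 2000) + 3000 == 1000 + (2000 + 3000)
```

This is why chain balances are kept as integers in the smallest base unit rather than as fractional floats.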
hero member
Activity: 504
Merit: 504
Thisisausername:  Hey all, I haven't been able to make the past few hangouts and couldn't find any transcript of the September 25th one.  So, here you go; lightly edited for clarity. Non-bytemaster content slightly more heavily edited.  []'s indicate editor notes.



Fuzzy: Intro

Bytemaster: It's been another week; another significant step forward in the life of BitShares 2.0 as we march towards releasing on October 13th. For those of you who were here last week we just started a new, and hopefully final, testnet. I'd like to report on how that testnet has gone this week.

So far we have 33 witnesses who have been voted in, and we have 100% witness participation, which is actually better than the current BitShares network's 96%. I'd like to thank all of you testers out there who have helped set up nodes. These are 33 unique servers that have managed to stay in sync despite all the spamming and even attempts this week at double-signing blocks – just trying to mess the network up. We survived the double block signing without a single missed block. With all the fixes that we put in last week, we were able to boost the transaction throughput. The testers were able to achieve several blocks with a couple hundred transactions in them – and these are three-second blocks. We're doing really well as far as throughput goes, far more than a real network will ever need to process in the short term. I am very happy with the results of this test network and am feeling very good about upgrading on October 13th. If any of you had doubts due to bugs or the problems we've had with the networking code in the past month, those issues appear to have been resolved. We have a relatively stable blockchain, at least as stable as BitShares [0.9.3]. My full witness nodes have basically been running on their own without issue for some days now. The general takeaway from all of this is that we're on track for October 13th, the network is extremely stable, and that leaves the long pole in the tent: the user interface.

I'd like to give some updates on the user interface. We have a full node downloadable GUI that some of you were able to test. It hasn't been updated with all the changes since earlier in the week, but we have the build process and infrastructure in place [such] that we will be putting out another release of a full node graphical wallet that seems to work pretty well. We are also planning a light wallet that you can download that's similar to the full node, but instead of connecting to a local witness, it connects to a remote witness. That's sort of an Electrum model. The [light wallet] uses the exact same interface as the website and the full node. One interface, three different ways you can use it, with different levels of security consideration.

Fuzzy: The question I would ask, if you don't mind, for the downloadable light-client: what's the downside to that in terms of security? This seems like a common concern.

Bytemaster: From a security point of view, you are not fetching new, mutable JavaScript from a remote server. That's the biggest security improvement of the light node. You're still trusting the server to accurately report the state of the blockchain to you. The worst it could do is lie to you, but there's actually no incentive for it to lie. You can use the exact same server as the hosted wallet or whatnot. Even if it did lie to you, it's entirely possible to construct transactions that go to whom you want them to go, and no one else. It's really just a matter of whether or not you trust a remote node. You can pick and choose different remote nodes for use with your wallet, and that will give you the middle ground. If you don't trust anyone and want to run your own node, that's the most trustworthy [and secure]. Allowing someone else to run the node while you just run your own GUI – that's the middle ground. Using a hosted wallet is the least secure option, [requiring the most trust of third parties,] but it's not so bad, assuming the server doesn't get hacked and its JavaScript changed to try to steal keys. If the server were hacked, you're only vulnerable if you visit the server and log in while it's compromised. Only active users during the time of the attack are [vulnerable]. {Sound cuts out here.} Of the three, I'd say your biggest risk with using a hosted web wallet is that if you clear your browser cache, your wallet gets deleted. If you don't have a backup of your wallet and you clear your cache, you're SOL, [shit out of luck]. That's one of the big motivators for having the light version and the full desktop version: to make sure that you can clear your browser cache without risking your wallet. There are a lot of people who recommend clearing the browser cache, suggesting everything will be fine. That isn't safe if you have $100,000 worth of BitShares floating around in your wallet. It'd be a very sad day.

Fuzzy: When you first set up your wallet and it asks if you want a BitShares brainkey, say yes, write it down, don't lose it, make a copy of it.

Bytemaster: Technically yes, although brainkeys aren't able to recover all the keys your wallet might have in it. If you import keys to a wallet from BitShares 0.9.3, or you change your brainkey – you might have more than one brainkey over time – it can't. We've concluded that a brainkey is not a general-purpose approach. It only works for the basic case where you have a new account, you never change your brainkey and you never import keys. We didn't want to design a user interface around that assumption. It's not a safe assumption. We want it to be safe for all users. We will have a brainkey, and you'll be able to write it down and [using it] you'll be able to recover keys. But that's not going to be part of the regular workflow. It's going to be more like a condensed, future-proof backup. All new keys that you generate in your wallet will be derived from the brainkey, which means your existing backups are good, and if you have your brainkey then you can recover those keys. The process that we've set up will require you to save a file and keep that file secure and backed up. We're working on coming up with more automated backup solutions, but for now it'll be your responsibility to back up your wallet and all your keys to a file on disk so they can be imported later. Do not rely on your browser cache.
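The "all new keys derive from the brainkey" idea can be sketched as hash-based derivation: hash the phrase together with an incrementing sequence number. The exact scheme below (SHA-256 over SHA-512) is an illustration of the principle, not necessarily the precise algorithm BitShares ships:

```python
import hashlib

def derive_private_key(brainkey, sequence):
    """n-th wallet key as a hash of the brainkey plus a sequence number.
    Illustrative scheme - the shipped BitShares derivation may differ."""
    seed = f"{brainkey} {sequence}".encode()
    return hashlib.sha256(hashlib.sha512(seed).digest()).digest()

bk = "example brainkey phrase only"
k0 = derive_private_key(bk, 0)
k1 = derive_private_key(bk, 1)
assert k0 != k1                          # distinct key per sequence number
assert k0 == derive_private_key(bk, 0)   # fully recoverable from the brainkey
```

This is why the brainkey works as a condensed, future-proof backup for derived keys, and also why it cannot recover keys that were imported from elsewhere: those were never produced by this derivation.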

Fuzzy: Joey asks: "Would there be a code or paper wallet functionality that could help with that problem that you can foresee?"

Bytemaster: Old-style paper wallet functionality is just a matter of generating a public key and private key offline and transferring the public key to a wallet. If you configure the permissions on an account to a public key whose private key is kept cold – has never been on a [ed. networked] computer – then you have cold storage. We don't have any tools in place to generate those keys easily for you. You'd have to use one of the command-line tools offline.

Fuzzy: But they are available for somebody who wants to do that?

Bytemaster: It's easy to create the tools [ed. probably just simple wrappers around already existing API calls,] we just haven't prioritized it.

Bytemaster: Othername asks, "Why doesn't remembering your password for the web wallet suffice in case the cache is deleted?" The reason is the password never goes to the server, and the server doesn't store your wallet. Your wallet is kept only locally [in the browser cache], and you're never authenticated to the server.

Future versions of the web wallet might have server-side storage and backup of your wallet file. In that case your wallet file would be stored on the server encrypted, meaning the server can't read your keys and never gets your password. They can give your [wallet] file back to you, so you can restore from the server. This would be a good way to move [wallets] between devices automatically, though it would require server-side infrastructure, and we haven't been focused on server-side infrastructure at this point in time.

Othername[?]: Isn't such a web wallet, where if you deleted your cache all your funds are gone, a potential source of bad PR? Might a solution be to just release a local light-client [and full-client]?

Bytemaster: Yes, there's risk associated with that particular issue. It's not really a problem once you've done a backup; then you don't have to worry about your cache being cleared anymore. The benefits of the hosted wallet are that you get free, automatic upgrades as we improve things, whereas you have to download new versions of the other wallets each time. Adding server-side storage to make the hosted wallet as reliable as possible – to allow it to function even if you clear your cache – is a desired feature for the future.

Othername[?]: DataSecurityNode just mentioned, "What about an initial forced backup?"

Bytemaster: Our plan is to have a notification in the user interface that indicates whether a backup is required and how long it's been since you last backed up. It's not there now, but we've been engineering the data tracking into the wallet, so we can display a big red warning with a button to back up now. On every page. Until you do it.

Fuzzy: Every user should be backing things up and we should have an easy process for them to do it.

Someone: We're going to want to educate people about this. Eventually it'll be taught in schools, "You've got to be responsible and that means backing-up your cryptocurrency files."

Bytemaster: I think in the future, when cryptocurrency is successful, it's all going to be managed automatically behind the scenes. You'll be able to recover your password and your cryptocurrency funds with similar difficulty to resetting your password on an existing banking system. Regular people out there are not going to magically change and learn how to do all this stuff. We're still at the early-adopter phase. During the early-adopter phase, yes, we can expect people to learn that stuff. Long-term, all of this stuff is going to have to be managed, because the risk of a hard-disk failure, a network failure, or forgetting your password is too great.

There's a greater security risk with cryptocurrency than exists in the current banking system. The probability of you losing your money in the bank is far less than the probability of losing your money with cryptocurrency. Yes, technically, the bank can steal your money or freeze your accounts, but guess what? You forget your password, you lose your wallet, your computer dies: all those things can cause you to lose your funds. The only difference with cryptocurrency is that it's somewhat in your control, whereas with the banking system it's not in your control. For the average person out there, that ability to be in control and be responsible actually makes cryptocurrency less secure, because they're not able to be responsible. We need to create products and services that cause the average person – who knows themselves well enough to know that they're going to forget their password, they're going to do something stupid with their computer, they're going to misplace their backup – [to be confident that their funds will be safe.] Very smart people make those types of mistakes and need those types of services. People don't want to think about their money. They just want it to be there; they want to use it and know that they can always get to it. We need to migrate to systems that are that easy to use, that easy to recover, that automatic – where you're never at risk of getting locked out of your account. I think most people would choose the risk of their funds being stolen over the risk of being locked out for doing something stupid.

Fuzzy: I've
hero member
Activity: 504
Merit: 504
We got lucky last Friday when Bytemaster went into an extended riff on the rationale behind a whole lot of the thinking in BitShares.

Here's the whole delightful transcript of his global press conference (aka hangout) which is even more edifying than usual.

Many thanks to thisisausername for the effort to produce this transcript from the recording at beyondbitcoin.org.
Original Post

WARNING: Only open-minded people should bother to read this impromptu magnum opus. You are sure to have your horizons expanded.
If you already are committed to the groupthink of some other cryptotribe, no worries.  We're from an alternate universe.


Smiley
