TRANSCRIPT:
ADAM B. LEVINE OF LET’S TALK BITCOIN INTERVIEWS BITSHARES CREATOR DAN LARIMER
October 31, 2015
https://letstalkbitcoin.com/blog/post/lets-talk-bitcoin-260-new-growth

LTB: ...on the 13th (of October, 2015), there was a fairly large transition from the bitshares 1.0 project to the bitshares 2.0 project. Why don’t we… remind people what bitshares 1.0 was and how we got from protoshares all the way to here….
DL: Bitshares is a project that aims to use blockchain technology to organize people to solve problems like a decentralized exchange. The creation of a decentralized exchange was the original purpose behind bitshares. It’s had a long life through many transitions dating all the way back to 2013, when the idea was originally conceived. We’ve been through several different iterations, things like protoshares. ...Since then it’s really evolved as we adapt to the market and adapt to lessons learned. We’ve made a lot of mistakes along the way, but we’ve discovered a lot of things and we’ve done a lot of innovating at the same time.
LTB: So the ... recent upgrade basically took bitshares off of the original codebase and put it on an entirely different one… called graphene. Why … abandon the earlier codebase, and what advantages are you getting?
DL: Graphene is the library or the toolkit that we are using behind bitshares. In the process of developing the original version of bitshares, we were under a lot of time pressure to build it… quickly… to get it on the market, to try it out. But it just wasn’t a sustainable foundation upon which we could scale and add new features. In the desire to accelerate future development, and to have an architecture that could expand and scale to the adoption we hoped for, we had to re-architect things. So graphene is designed from the ground up with a whole lot of lessons learned: a whole new permissions system that makes multi-signature transactions, accounts, and user interaction a lot easier. It’s designed with performance in mind. There’s… a host of things we’ve done to improve it, including the very consensus model behind delegated proof of stake. We’ve enhanced that to enable more effective community consensus on complex issues.
LTB: I have always been interested in the features that you guys are putting in, and the ones you just described are perfect examples of that, but the problems I have always had with bitshares have surrounded basic kinds of usability. From what I can tell, there still seem to be usability problems. So can you talk about that... how do you decide how much to focus on making this as easy to use as possible vs. how much on making sure that you have these awesome features?
DL: Ease of use has many different aspects. You’ve got the actual user interface, but the interface often reflects the underlying technology. For example, with bitcoin, the bitcoin addresses and how you send those from person to person end up showing up in almost all user interactions in all bitcoin wallets. So we try to design fundamental features at the low level that allow ease of use at the high level. Our multisignature approach gathers all the signatures on the blockchain, and that makes multisignature easier to use than when you have to sign the transactions offline. So all the features we add at the low level are actually done with the thought of how to make the higher level interface easier to use. And that’s one of the challenges: if you design a lower level protocol that is super generic and flexible, and completely scriptable, you still have to represent what that lower level protocol is doing to the user somehow. We wanted to make the blockchain reflect the user’s actions as closely as possible: have the transactions be something you could almost understand just by looking at them, rather than complex scripts that you need a computer to evaluate and interpret.
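To make the on-chain signature gathering concrete, here is a minimal sketch of the idea: a proposed operation sits on the blockchain and accumulates approvals until a weight threshold is met. The class and field names are illustrative assumptions, not graphene’s actual API.

```python
# Illustrative sketch of on-chain signature gathering for multisig.
# Names and structure are hypothetical, not graphene's actual API:
# the point is that approvals accumulate on the blockchain itself,
# so signers never pass a partially signed transaction around offline.

class Proposal:
    def __init__(self, operation, approvers, threshold):
        self.operation = operation   # the action to execute once approved
        self.weights = approvers     # e.g. {"alice": 1, "bob": 1, "carol": 2}
        self.threshold = threshold   # total weight required to execute
        self.approvals = set()       # accounts that have approved so far

    def approve(self, account):
        """Record an approval, itself an ordinary on-chain transaction."""
        if account not in self.weights:
            raise ValueError(f"{account} is not an authorized approver")
        self.approvals.add(account)
        return self.is_executable()

    def is_executable(self):
        """The operation executes once accumulated weight meets the threshold."""
        return sum(self.weights[a] for a in self.approvals) >= self.threshold


# Usage: a threshold-2 arrangement where carol alone carries enough weight.
p = Proposal("transfer 100 BTS", {"alice": 1, "bob": 1, "carol": 2}, threshold=2)
p.approve("alice")        # weight 1, not yet executable
print(p.approve("bob"))   # weight 2, prints True: ready to execute
```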
LTB: So bitshares is a chain that uses the graphene library. There are other chains that will use the graphene library. Is this a change of … approach? I think I remember early on that the … bitshares chain would be the home to all these other things and perhaps there might be sidechains in the future.
DL: Let’s talk about the scaling issue in particular…. there are different aspects to scaling. This is… very important for the entire industry to contemplate. There are several dimensions. One dimension is throughput. Another dimension is latency. Then there is a third dimension of political scalability. You can have a really fast internet connection, but if you are trying to connect to a website hosted on Mars, it’s going to feel really slow, because the time between when you issue the command and when the response gets back to you is so long. So bitcoin has a 10 minute latency on average before you get your first confirmation. Worst case is even longer than that… it sometimes could be 20 to 40 minutes between blocks. The throughput is how many transactions you can actually get in that time. So in 10 minutes, or 40 minutes, bitcoin can get an average of 7 transactions per second into a block. You can make your blocks bigger, and that will increase your throughput, but it will do nothing to improve the latency of the system. And then there is political scalability, and that has to do with how many people can be involved in the decision-making process. How many people have influence? How does that scale from small community projects to something as big as a nation, or something that goes global? The political scalability of the system is the third dimension.
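A quick back-of-the-envelope calculation illustrates the throughput-versus-latency distinction. The 250-byte average transaction size is an assumption for illustration, not a figure from the interview.

```python
# Back-of-the-envelope: block size affects throughput, not latency.
# The 250-byte average transaction size is an assumed illustrative figure.

block_interval_s = 600   # bitcoin targets one block every ~10 minutes
avg_tx_bytes = 250       # assumed average transaction size

for block_mb in (1, 2, 8):
    tx_per_block = block_mb * 1_000_000 // avg_tx_bytes
    throughput_tps = tx_per_block / block_interval_s
    print(f"{block_mb} MB blocks: ~{throughput_tps:.0f} tx/s, "
          f"first confirmation still ~{block_interval_s // 60} min away")

# 1 MB -> ~7 tx/s (the figure quoted above)
# 8 MB -> ~53 tx/s: throughput scales with block size, latency is unchanged
```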
With bitshares and graphene, we’ve focused a lot on the latency aspect of this. Latency is very important for advanced smart contracts. It’s incredibly important for markets, where if you issue a market order, you want to know as soon as possible whether it executed or not, so you can go do your arbitrage at some other exchange or trade in some other asset. But if you are on a blockchain based upon proof of work, where there is this uncertain period where you don’t know whether it’s confirmed or not, it slows down the ability to sequence events and know that your previous actions are locked in and irreversible. You can think of it as the difference between how many cores you have on a processor and the frequency of the processor.
LTB: And so when you say the frequency of the processor, that means the speed, basically.
DL: Clock speed, yes.
LTB: One of the things about bitshares is blockchain usernames. You talked about multisig being more flexible in the new type of architecture that you have. I think that that comes from deeply integrated usernames within the system itself. Is that correct?
DL: That’s part of it, yes.
LTB: Most blockchains don’t have integrated usernames. You guys do. What are the other advantages of usernames, and are there any downsides to using them in the system?
DL: The primary advantages are that it’s possible to communicate your username over the phone or in conversations, or write it down on a business card. It’s easier for you to remember your account name. Those are the pros. The downside is typos could be more frequent. They could cause you to lose funds that way.
LTB: So just because it’s human readable, basically, that means that humans aren’t going to be copying and pasting them, which means you can have errors creeping in that way.
DL: Yes, with bitcoin addresses, I can’t tell you how many people I’ve talked to, and how many times this has happened to myself, where you’re copying and pasting, it looks like a bunch of noise, and you think you got it right. So there are pros and cons with both. Our feeling is that from a usability perspective, copying random strings or having to use QR codes is a barrier to entry for the masses. The masses are used to bank account names that they know. There might be a routing number under the hood of their bank account, but they log in with a username and password. That’s how people interact with websites. Another benefit of names: they give accounts personality. You’re going to label your private and public keys anyway, just so you can keep track of who you are paying.... It provides some consistency to that, which allows your ledger to be readable and consistent across all your computers. There are a lot of benefits associated with it. One of the downsides to the named account approach: potential loss of privacy. Your name is no longer random, so if you pick a name that is somehow associated with who you are, then people might be able to link that activity to you. All that said, it’s still possible to generate a random account name, use that, and have more or less the same privacy you have on bitcoin.

I would like to add something on the privacy note, something that bitshares 2 has. It’s not available in the web interface, but it is available in the protocol and in the command line wallet for advanced users, and that is Stealth Blinded Transfers. This is a level of privacy that exceeds what is available in the other coins. It’s based on the technology created by blockstream. Both the amount being transferred and who it’s being transferred to are completely obscured, and there are no account names in this system. So it’s the maximum level of privacy, but it comes at the penalty of ease of use. You’ve got to do more coordination between the sender and the receiver to make sure that payment is received properly.
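For readers curious about the mechanics, here is a toy sketch of the Pedersen-commitment idea that underlies blockstream-style blinded amounts. Real Confidential Transactions use elliptic-curve points and range proofs; the plain modular arithmetic and parameters below are illustrative assumptions only, and this is not bitshares’ actual code.

```python
# Toy sketch of the Pedersen commitment idea behind blinded transfers.
# Real Confidential Transactions use elliptic-curve points plus range
# proofs; plain modular arithmetic here is for illustration only and
# is NOT cryptographically secure.

import secrets

P = 2**127 - 1   # prime modulus (toy parameter)
G = 7            # "generator" for blinding factors (toy)
H = 11           # "generator" for amounts (toy)

def commit(amount, blinding):
    """C = blinding*G + amount*H: hides the amount behind the blinding factor."""
    return (blinding * G + amount * H) % P

# Commitments are additively homomorphic, so a validator can check that
# inputs minus outputs commit to zero without ever learning the amounts:
b1, b2 = secrets.randbelow(P), secrets.randbelow(P)
inputs  = commit(42, b1)                     # observers see only this value
outputs = commit(30, b2)
change  = commit(12, (b1 - b2) % P)
assert (outputs + change) % P == inputs      # 30 + 12 = 42, amounts stay hidden
```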
11:24
LTB: Both bitcoin and ethereum support various kinds of smart contracts, and bitcoin is adding more smart contracting capabilities all the time, where you can program transactions to have rich levels of interaction. Where does bitshares or the graphene library fall between those two things? Are you more on the fully capable, you-can-do-anything side, or is it more on the limited language but powerful tools side, like bitcoin?
DL: … we’re more on the curated side. We solve the problem of adding features, of changing the protocol, by making hard forks to add new features easier to implement and easier to get consensus on, which means we don’t have to go for the unchangeable protocol, but an upgradeable protocol. Our belief is that any smart contract that is going to be widely used needs to have a native implementation. It’s possible to do all kinds of things with smart contracts in ethereum or bitcoin, but it’s like tying one hand behind your back... yes, technically we achieved the transfer of control and the behavior we wanted, but it wasn’t in a natural way for the particular application. It wasn’t necessarily done in the most efficient way that could be implemented. I use the comparison between the app store on apple… and the google play store. The bitshares stakeholders approve all the smart contracts that get added to the protocol. That said, one of the things that could be added to the protocol is smart contract type features. We haven’t added that to this point, because we believe that it’s sort of a long tail of innovation. But the big applications… should be done natively. 13:30
LTB: You talked about political scalability earlier. Bitshares 1.0 used DPOS, or Delegated Proof of Stake. I believe that the graphene system uses a more advanced variation of that and has a greater segmentation of roles, with the idea that if you split up the responsibility and the ability to perform these roles amongst many people, none of whom has the ability to perform all the roles, then no one individual or group of individuals is responsible for any one thing that happens. And then the stakeholders, which is to say the people who actually possess the bitshares, vote with those bitshares, and essentially give a mandate to various representatives to act on their behalf. Is that a good description of the system that you designed, and what did I miss? 14:15
DL: That’s a good high level overview. I’d say we take a constitutional style approach of separation of powers. We’ve got the witnesses, which are responsible for witnessing transactions, time stamping them, and putting them into blocks. We’ve got the committee members. They don’t produce blocks, but they do get to change things like fees or network parameters such as block size. But everything that they do is peer reviewed by all individual users and goes through a waiting period where they can get voted out and (garbled) becomes void. So those are two roles. And then we have workers, and workers can get hired by the blockchain to do implementation of new features that the stakeholders want. So if they want a new smart contract, or they want something that is going to make the blockchain more competitive, then the whole blockchain can fund it without having to worry about donations, or some people having to sacrifice so that everyone else can benefit. It really divides up all those roles.
And then lastly, anyone can be a full node and can be a validator, can basically check up on the network to make sure that all the rules are being followed. There is no limit to the number of validators. There’s actually no limit to the number of block producers, other than the economic limits of how much the network is willing to pay and how quickly you want to reach irreversible consensus. 15:50
I’ve done some analysis of these various dimensions of decentralization, of the control of power.
From where I sit, I think bitshares is doing the best in every dimension. We’ve got the most unique individuals actually producing blocks within any given period of time. If you take within a 30 second window how many people have participated in signing off on blocks, bitshares has the most. There is one other role that we have in the bitshares system, and that’s called the proxy, and it’s like proxy voting in a corporation. What we discovered in bitshares 1 was that there’s a lot of voter apathy. Not everyone has time to follow all of the politics, or evaluate all the people that are trying to do things, or evaluate all the ideas. Not everyone wants to think about it, or even knows whether they are able to make the best decision. To solve that, you can simply set your account to proxy to someone else whom you trust to make those decisions. Which means that you can view the proxy voters as sort of like a congress of people, where representation is proportional to the weight behind them. And that can allow the network to respond very quickly to changing conditions. That’s one of the main things that we wanted bitshares to be able to do: respond to dynamic market conditions. If governments decide they want to shut down all the mining pools, how does bitcoin respond? How long does it take them to respond? If there’s a major issue that comes up with block size, how long does it take you to respond? That is the kind of thing we want the bitshares network to respond to quickly. Proxy, or representative, voting is how bitshares achieves that. 17:52
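As an illustration of how proxied, stake-weighted voting can be tallied, here is a minimal sketch. The account names, stakes, and structure are hypothetical, not bitshares’ actual implementation.

```python
# Illustrative sketch of stake-weighted proxy voting (not bitshares' code).
# Each account either votes directly or delegates to a proxy; a choice's
# total weight is all the stake that resolves to the account casting it.

stakes  = {"alice": 100, "bob": 50, "carol": 10, "dave": 5}
proxies = {"carol": "alice", "dave": "carol"}         # dave -> carol -> alice
votes   = {"alice": "witness-1", "bob": "witness-2"}  # direct votes only

def resolve(account, seen=None):
    """Follow the proxy chain to the account that actually casts the vote."""
    seen = seen or set()
    if account in seen:            # guard against proxy cycles
        return account
    seen.add(account)
    nxt = proxies.get(account)
    return resolve(nxt, seen) if nxt else account

def tally():
    totals = {}
    for account, stake in stakes.items():
        choice = votes.get(resolve(account))
        if choice:
            totals[choice] = totals.get(choice, 0) + stake
    return totals

print(tally())   # {'witness-1': 115, 'witness-2': 50}
```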
LTB: It seems like you’ve replicated… almost a political situation… you said the word politics, and that is basically what we’re talking about here. You said that there’s more participation on an individual level. I assume you are talking about relative to mining pools, not individuals who run mining hardware and participate in a pool. Why does that matter, and why is your system better? 18:12
DL: In bitshares we try to make sure that every individual’s influence is directly proportional to their stake. If you have one single bitshare, you can either vote directly on who you want or you can give your vote to someone who can decide for you. You’ve got say proportional to how much you own. Whereas in systems like bitcoin, if you own a bitcoin you have no say. If you’re a bitcoin miner you can pick between one of the pools out there, but economic forces mean that you only have a say if you can mine profitably. In other proof of stake systems, the bell curve of the distribution of funds means that the only people who can profitably virtually mine are those that have a large enough stake. Anyone with a smaller stake technically can run a node and produce blocks whenever their turn comes up, but they won’t be able to do so at a profit, which means they don’t get a say either... because of economic reasons. 19:30
Lastly, there’s the ability of people to participate in the system without having to be technically proficient, without knowing how to run a server, or maintain hardware, or have good uptime. With workers and proxies and committee members you don’t need any technical expertise. All you need is to understand the concepts, to understand the economics, to do whatever job you’re going to do for the blockchain. 19:50
One big thing we haven’t discussed is another role in the network, and that is spreading the word, bringing in new users. That’s why bitshares 2 has a referral program built into the protocol, which means when a new account is created, the network knows who referred them, and it knows how to divide up the fees of that referred user back to the people who referred them. You bring someone into bitcoin and the network makes fees, the miners make fees, but there’s no income for you. With bitshares, you bring someone to the network and you get a cut of whatever revenue they generate for the network. And this gets to one of the hearts of bitshares, which is to make sure that the network is profitable. Profitable in the sense that the cost of paying all the witnesses to run the nodes and the workers to do everything should be less than the fees earned. The cost of user acquisition is an expense that all businesses have. Bitshares makes sure that the cost of user acquisition is factored into the overall equation. 20:53
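Here is a sketch of what dividing a referred user’s fees might look like. The split percentage is an illustrative parameter, not the actual protocol value, and the function is hypothetical rather than bitshares’ real fee logic.

```python
# Illustrative sketch of a protocol-level referral split (not the actual
# bitshares parameters): when a referred account pays a fee, part goes to
# the network and part flows back to whoever brought the user in.

REFERRER_CUT = 0.30   # hypothetical share; the real value is a chain parameter

def split_fee(fee, referrer):
    """Divide a fee between the network and the account that referred the payer."""
    referrer_share = fee * REFERRER_CUT
    network_share = fee - referrer_share
    return {"network": network_share, referrer: referrer_share}

# A user registered by "alice" pays a 10 BTS fee:
print(split_fee(10.0, "alice"))   # {'network': 7.0, 'alice': 3.0}
```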
LTB: You and I first spoke about bitshares just a couple of months after I started Let’s Talk Bitcoin. It really feels like you guys have fallen down on expectation management. I have been watching the project casually for the last year, and more intensely the year before that. One of the things that I have felt pretty consistently about the project is that you guys have really amazing technology… there is zero question in my mind that when I use bitshares it is much faster, it confirms really quickly, and it works really well when it works, and that it should have a larger user base than it does. But it doesn’t, because I feel like you guys over promise and under deliver. Your technology is still better than other stuff out there, but it doesn’t meet the goals that you have set for yourselves. And because of that, it feels like it’s always in a perpetual state of “well, this is just a temporary thing, this is the new system that’s coming out,” because you tried to hit the goals and weren’t able to hit them. 24:05
I specifically bring this up in the context of a number I have heard tossed around a couple of times, and I think it came officially from you guys, but I am not sure: that the system can support 100,000 transactions per second theoretically. Theoretically is one thing, but in practice, I read an article that you wrote maybe a couple of months ago… that said that in practice you guys can only do about 100 TPS right now because of realities of peer to peer technology, which are things that I don’t think you’re working to change. I think it’s like you’re waiting for progress to push things along, and that that will make it so your system can work at that full throughput. Do you see what I mean here? 100 transactions per second is fantastic compared to any system out there that maintains meaningful decentralization. Compared to 100,000 transactions per second, it looks like a big failure. So you get the issue I’m bringing up here? 25:00
DL: Sure, this is a question that comes up a lot. ...when we say 100,000 transactions per second, we’re referring to the speed of the CPU. Even the fastest CPU is still limited by how fast the network is. When Intel says they can do so many million floating point operations per second on their latest CPU, that’s only for applications that are highly optimized, and under the assumption that the data you’re feeding the CPU isn’t coming from a network that’s really slow.
We identified that as a bottleneck. There are some tasks in blockchain technology that are inherently sequential. So we’re talking about core frequency, and just like with CPUs, that’s the one area of technology that isn’t advancing quickly…. we can add more cores, but the single threaded performance of the individual cores isn’t growing as fast as multithreaded performance. So we optimized for what we identified as the bottleneck in the space. Everything else can be done in parallel: signature verification, faster networks, faster memory, more memory. Those are all things that are out there.
So when we did our testing on a public test network with 30+ participants distributed around the world, we were actually able to achieve over 1000 transactions per second using low end VPS systems. These are DigitalOcean machines with just a couple gigs of RAM. So we’re doing 1000 transactions per second without even doing any of the optimizations that could be made at scale, such as direct fiber optic links between the nodes that have been elected, a more efficient networking protocol to improve latency, or GPS time synchronization to improve timing. There are a lot of optimizations that we know are possible; we’re not waiting for… technology to advance. As soon as the network is saturating 100 or even 1000 transactions per second, there will be so much revenue generated that we could easily pay for both the development and deployment of the infrastructure to achieve 100,000 transactions per second, without having to redesign the protocol in a fundamental way.
LTB: Okay, so the protocol itself is capable of that, although the current technology is not. You used processors as an example. How good of an example is that? … There are lots of things that you could do that don’t require any sort of network connection or latency whatsoever, because they’re happening locally on your device. There are lots of use cases where you wouldn’t have to have that element at all. But with bitshares, again, because it’s this common ledger, that use case doesn’t exist. There’s no reason why you would want to run an offline copy unless you’re doing it for cold storage… 27:54
DL: At scale, in the computer industry... the fastest single threaded performance CPU you can get, that is the limit on how many sequential, order dependent operations you can pump through the system at a time. You can’t parallelize a market where every bid or ask that’s filled affects the orders that are available to the next bid or ask. In a single market, that’s all going to have to take place on a single thread on a single CPU somewhere. Even if you have many different computers all over the world running that same calculation, they all have to process the transactions. So one question is how do you aggregate all the transactions, group them, sequence them, and then execute them. The process of peer to peer networks, block production, all those things, that’s all the grouping, but at the end of the day, all blockchains are limited by this. So we can scale up the network infrastructure, we can do parallel signature verification, we can have high bandwidth connections, but at 100 bytes per transaction you’re looking at massive bandwidth requirements, not to mention the latency requirements, in order to move that much data from one spot in the world to the other side of the world in a timely manner. From a practical perspective, we know we can do 1000 transactions per second without requiring excessive hardware, because we’ve demonstrated it on our test networks in public. And we’ve scaled it back to 100 transactions per second just because we know we don’t have the demand for it. It’s more of a safety feature to prevent flooding attacks on the network. 29:35
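A minimal sketch of why matching is inherently sequential: each fill mutates the order book that the very next order sees, so the loop cannot be split across threads. This is an illustration of the general principle, not bitshares’ actual matching engine.

```python
# Sketch of why order matching is inherently sequential (illustrative,
# not bitshares' matching engine): each fill mutates the book that the
# very next order sees, so the loop can't be split across threads.
# (Bandwidth side note: 100,000 tx/s * 100 bytes = ~10 MB/s of raw data.)

import heapq

asks = []   # min-heap of (price, quantity) resting sell orders

def place_ask(price, qty):
    heapq.heappush(asks, (price, qty))

def match_bid(limit_price, qty):
    """Fill a buy order against resting asks, in strict price order."""
    fills = []
    while qty > 0 and asks and asks[0][0] <= limit_price:
        ask_price, ask_qty = heapq.heappop(asks)
        traded = min(qty, ask_qty)
        fills.append((ask_price, traded))
        qty -= traded
        if ask_qty > traded:   # return the unfilled remainder to the book
            heapq.heappush(asks, (ask_price, ask_qty - traded))
    return fills               # leftover qty would rest as a bid (omitted)

place_ask(10.0, 5)
place_ask(10.5, 5)
print(match_bid(10.5, 7))      # [(10.0, 5), (10.5, 2)] -- order matters
```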
LTB: You talked about the test net with the DigitalOcean instances. DigitalOcean is a cloud computing service, so were these actually distributed around… is there latency in this system? Or did you have a bunch of instances... in co-located locations… on DigitalOcean? Is this a realistic comparison?
DL: I used DigitalOcean to describe the specs of the machine, not necessarily the location.
LTB: Ahh.
DL: We’re talking about low end VPS systems, but this testnet had systems from around the world involved in it. Some people had… home internet connections. Some people were in China. We had a few on DigitalOcean. A wide variety of internet connections and hardware specs. 30:19
LTB: You talked about direct fiber optic connections between nodes that have been elected. One of the things that we’ve noticed over time with bitcoin is that as the infrastructure requirements get greater, we actually see centralization, because the barrier to becoming a part of the system that enables it, in the way that a miner or one of your delegates might, simply becomes higher; it costs more money. And with something like a direct fiber optic connection, again, that’s a meaningful infrastructure investment that somebody would have to choose to make. So do you have any concerns about the system becoming less decentralized or less robust over time because of these optimizations? 31:00
DL: What we realized early on is that economic necessity forces centralization. At scale, you have centralization. What we do is manage it, so that you can get elected and have a secure source of income for producing blocks before you necessarily have to make the investment in the hardware. It means that it’s not just whoever is willing to make the investment and compete who gets it. Basically, you can buy your way into controlling bitcoin. You can’t buy your way into control of this network. You need both money and political support. That adds a lot of checks and balances. And it also means that once someone is an established player, they’re not guaranteed to maintain that spot just because they have money. If they start misbehaving, they could lose their position and be out their entire investment. Whereas a bitcoin miner can’t be fired. 31:55